A model is meant to be a simplified representation of reality created for a specific purpose. We often try to build models to understand the flow and interactions between different parts of a system or predict certain outcomes. Think of a map used to navigate from point A to point B. Although it may not include every street, it shows some of the main paths to follow to get to point B, which is what we need.

Models need to rely on assumptions that rarely hold perfectly in reality. This frustrates many of us, and may even make us wonder why we use models at all if they rest on faulty assumptions.

However, the quality of a model depends less on the accuracy of its assumptions than on its performance: its ability to produce results that reflect reality as closely as possible. Even models that are systematically wrong can be good models. If they fail in predictable ways, all they need is some calibration to match actual outcomes.
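To make the calibration point concrete, here is a minimal sketch (using made-up numbers, not data from any real model): a model whose outputs are consistently off in a predictable, linear way can be corrected by fitting a simple linear adjustment against observed outcomes.

```python
# Hypothetical example: a model that is systematically wrong, but in a
# predictable way, so a simple linear correction recovers actual outcomes.

actuals = [10.0, 14.0, 18.0, 22.0]          # observed outcomes
raw_predictions = [7.2, 9.2, 11.2, 13.2]    # model output: biased, but predictably so

# Fit the correction y ≈ a * prediction + b by ordinary least squares
# (closed-form solution for a single feature).
n = len(actuals)
mean_p = sum(raw_predictions) / n
mean_y = sum(actuals) / n
cov = sum((p - mean_p) * (y - mean_y) for p, y in zip(raw_predictions, actuals))
var = sum((p - mean_p) ** 2 for p in raw_predictions)
a = cov / var
b = mean_y - a * mean_p

# Apply the calibration to the model's raw output.
calibrated = [a * p + b for p in raw_predictions]
```

Because the model's error here is exactly linear, the calibrated predictions match the actual outcomes; in practice the correction only reduces the systematic part of the error, which is precisely the point of the paragraph above.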

Just as important as a model’s performance is its degree of complexity. I have heard this over and over again from seasoned practitioners: The simpler the model, the better.

Because complex models often include a large number of complicated parameters and assumptions, they are much harder to explain to non-technical audiences, many of whom may be decision-makers who distrust models that produce non-intuitive results. Complex models are also harder to correct, because of the large number of interactions that must first be analyzed. For these reasons, many practitioners opt to give up some performance to make their models simpler to understand.

We love models because they help us make informed decisions by filtering out the information that is just noise and clutters our thinking. In the race to leverage big data, the companies that adopt simple, relevant models are the ones that will come out on top.