Uncertain predictions

In 2009, a researcher surveyed 35 quantitative analysts who build forecasting models across a variety of industries and government agencies. Of those, “just one respondent stated he had ever attempted to check actual outcomes against original forecasts.” In other words, almost no one actually checked to see whether their predictions came true. Even the one respondent who said he checked didn’t keep any hard data on how accurate the forecasts had been.

The inevitable consequence is that “fundamentally flawed models that don’t even come close to matching the eventual observations may be used without question indefinitely.”

It’s easy to imagine why model builders might not be eager to check their own past predictions. For one thing, if the results are poor enough, it could become hard to justify hiring the model builder again.

But why don’t businesspeople who use the model forecasts in their decision-making specifically request such retrospective analysis?

It’s possible that some model users do that tracking themselves, in which case it wouldn’t show up in the survey of model builders described above.

But I also suspect it has a lot to do with the discomfort we feel around uncertainty. Mainstream business culture is oriented around the concepts of “measure, predict, control”. So the less accurately it turns out we can predict, the more that entire world view is undermined. If you are committed to that world view, if indeed your entire career and belief system is predicated on it, you will tend to do whatever you can to avoid potentially disconfirming evidence.

Fortunately, next-stage organizations are beginning to appear with an entirely different world view, one characterized by “sense and respond”. Prediction is far less critical to them, and uncertainty is far less problematic. This makes quantitative models themselves less important (though by no means irrelevant). Where these models continue to be used, I hope to see more acceptance not just of retrospective analysis, but also of explicit indicators of uncertainty such as confidence intervals and statistical significance.
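
As a minimal sketch of what such a retrospective check could look like, here is some hypothetical Python that compares past forecasts against the observed outcomes and reports the average error with a confidence interval instead of a bare point estimate. The function name and the sample numbers are invented for illustration; the interval uses a simple normal approximation, which is only reasonable with a decent number of forecast/outcome pairs.

```python
# Hypothetical retrospective check: compare past forecasts against what
# actually happened, and report the error with a confidence interval
# rather than a single number.
from statistics import NormalDist, mean, stdev

def forecast_error_summary(forecasts, actuals, confidence=0.95):
    """Return the mean forecast error and an approximate confidence interval.

    Uses a normal approximation, so treat the interval as rough unless
    the number of forecast/outcome pairs is reasonably large.
    """
    errors = [f - a for f, a in zip(forecasts, actuals)]
    n = len(errors)
    avg = mean(errors)
    se = stdev(errors) / n ** 0.5                   # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # about 1.96 for 95%
    return avg, (avg - z * se, avg + z * se)

# Made-up example: quarterly sales forecasts vs. actual sales.
forecasts = [120, 135, 150, 160, 170, 155, 180, 190]
actuals   = [110, 140, 138, 162, 150, 149, 172, 200]
bias, (low, high) = forecast_error_summary(forecasts, actuals)
print(f"Average forecast error: {bias:.1f} (95% CI: {low:.1f} to {high:.1f})")
```

Even a simple summary like this makes two things visible at once: whether the forecasts are systematically biased, and how much uncertainty surrounds that estimate.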
