Why forecasts are wrong


This week brought big news of one of the most cruel and heartless tyrants of the 21st century.  This man is known for narcissistic behavior, surrounding himself with a cadre of beautiful women, sleeping in a different place every night, picking new favorites each week, and bringing tears and untold suffering to those who didn’t receive his approval. Of course, I’m referring to Simon Cowell (the man perhaps best known for his smug demeanor, oversized ego, and undersized t-shirts).

The big news was that Simon allowed Melanie Amaro, an 18-year-old student from Sunrise, Florida, to become one of the Top 17 X Factor USA contestants for next Tuesday's Live Shows.

Why Forecasts Are Wrong

Forecasts never seem to be as accurate as we would like them to be – or need them to be. As a result, we are tempted to throw money at the problem in hopes of making it go away. While there are plenty of consultants and software vendors willing to take that money in exchange for lots of promises, are these promises ever fulfilled?

How many organizations are you aware of – perhaps even your own – that have thrown thousands or even millions of dollars at the forecasting problem, only to end up with the same lousy forecasts?

The question boils down to: Why do forecasts always seem to be so wrong…and sometimes so terribly wrong?

The Four Reasons

There are at least four reasons why our forecasts are not as accurate as we would like them to be.

• Unsuitable software – software that doesn’t have the necessary capabilities, has mathematical errors, or uses inappropriate methods. It is also possible that the software is perfectly sound but due to untrained or inexperienced forecasters, it is misused.

• Forecaster behavior – untrained, unskilled, or inexperienced forecasters can exhibit behaviors that hurt forecast accuracy. One example is over-adjustment, or as W. Edwards Deming put it, “fiddling” with the process: the forecaster constantly adjusts the forecast based on new information. Research suggests that much of this fiddling makes no improvement in forecast accuracy and is simply wasted effort.*

• Process contamination – forecasting should be a dispassionate and scientific exercise seeking a “best guess” at what is really going to happen in the future. The third reason for forecasting inaccuracy is contamination of the process by the biases, personal agendas, and ill intentions of forecasting participants. Instead of presenting an unbiased best guess at what is going to happen, the forecast comes to represent what management wants to see happen – no matter what the marketplace is saying.

• Unachievable accuracy – finally, bad forecasting can occur because the desired level of accuracy is unachievable for the behavior being forecast. Consider calling heads or tails in the toss of a fair coin. It doesn't matter whether we want to achieve 60, 70, or 90 percent accuracy; over a large number of tosses we will be right only about half the time, and nothing can change that. The nature of the behavior determines how well we can forecast it – and this applies to demand for products and services just as it does to tossing coins.
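The coin-toss ceiling is easy to verify for yourself. This short simulation (an illustrative sketch, not part of the original post) makes random heads/tails calls against a fair coin; no matter how many tosses we run, accuracy converges to about 50 percent:

```python
import random

def call_accuracy(n_tosses, seed=42):
    """Fraction of correct calls when guessing heads/tails on a fair coin."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_tosses):
        toss = rng.random() < 0.5   # True = heads, the actual outcome
        call = rng.random() < 0.5   # our "forecast" of the outcome
        correct += (toss == call)
    return correct / n_tosses

# Accuracy hovers near 0.5 regardless of how many tosses we observe
for n in (100, 10_000, 1_000_000):
    print(n, round(call_accuracy(n), 3))
```

Swapping in a "smarter" calling strategy (always heads, alternating, or anything else) changes nothing, because each toss is independent of the last – which is exactly the point about behaviors whose nature caps achievable accuracy.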

The next series of The BFD posts will explore these reasons why forecasting is so often so poorly done.


*See "Good and Bad Judgment in Forecasting: Lessons from Four Companies" by Fildes and Goodwin, published in Foresight: The International Journal of Applied Forecasting, Issue 8, Fall 2007, pp. 5-10. A more detailed and "academic" version of this research was published (along with several useful comments and replies) in "Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning" by Fildes, Goodwin, Lawrence, and Nikolopoulos, published in International Journal of Forecasting 25 (2009) 3-23.


About the Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.


  1. Are there a couple of other factors at play here, i.e. lack of domain knowledge and the impact of 'unknown unknowns'? Taking the latter as an example, nobody predicted the depth of the recession, and that will have impacted forecasts in all sorts of commercial organisations.
