Better forecasting can, of course, help address many business problems. We want to believe that more accurate forecasts are always possible. “If only,” management bemoans, “if only we had bigger computers, more sophisticated software, more skilled forecast analysts – or if the analysts we have just worked harder!”
Unfortunately, there are limits to the accuracy we can ever expect to achieve. One limiting factor is the degree of randomness in the behavior we are trying to forecast. But how does one explain this to management? Perhaps by challenging them to The Contest.
Consider three processes to be forecast:
P10: % heads in the tossing of 10 fair coins
P100: % heads in the tossing of 100 fair coins
P1000: % heads in the tossing of 1000 fair coins
Every day, the three processes will be executed: The coins will be tossed, and we have to predict the percentage of heads. What is our forecasted percentage of heads each day for each process? Can we forecast one process better than the others? What accuracy will we achieve? Are there any investments we can make (better software, bigger computer, more elaborate forecasting process, more skilled statistical analyst) to improve our accuracy?
This isn’t meant to be brain surgery or rocket science – it isn’t a trick question. The only rational forecast each day for each process is 50% heads. So which process can we forecast most accurately, and why? These charts illustrate 100 daily trials of each of these processes:
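If you want to run The Contest yourself rather than toss coins, a quick simulation makes the same point. This is a minimal sketch (the function and variable names are my own, not from any forecasting package): it runs 100 daily trials of each process and reports how far the results stray from the 50% forecast.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def pct_heads(n_coins):
    """Toss n_coins fair coins once; return the percentage of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_coins))
    return 100.0 * heads / n_coins

# 100 daily trials of each process: P10, P100, P1000
trials = {n: [pct_heads(n) for _ in range(100)] for n in (10, 100, 1000)}

for n, results in trials.items():
    spread = max(results) - min(results)
    mean = sum(results) / len(results)
    print(f"P{n}: mean {mean:.1f}% heads, range {spread:.1f} points")
```

Every run tells the same story: all three processes average close to 50% heads, but the day-to-day swings shrink dramatically as the number of coins grows.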
Since we are dealing with the independent tossing of fair coins, each process, by definition, behaves according to the same underlying structure or rule: over a large number of trials, each will average about 50% heads. We fully understand the nature of each process, and we realize it makes no sense to forecast anything other than 50% heads each day for each process. However, as illustrated in the charts, the variation in the percentage of heads differs vastly across the three processes, and so does the accuracy of our forecasts.
When there is a lot of randomness, or noise, in the behavior, we cannot expect to forecast it very accurately. Even when we know everything there is to know about the rules guiding the behavior, as we do here, the amount of randomness limits how accurate we can ever be. In situations like these, any additional investment in the forecasting systems or process would be a waste. There is nothing we could ever do to forecast P10 more accurately than P100, or P100 more accurately than P1000. The nature of each process, its underlying structure along with its random variability, determines the level of accuracy we are able to achieve.
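That accuracy limit can even be computed in advance. The percentage of heads in n fair coin tosses follows a binomial distribution, so its standard deviation is 100·√(p(1−p)/n) percentage points, with p = 0.5. A short sketch (the function name is mine, purely for illustration):

```python
import math

def pct_heads_sd(n_coins, p=0.5):
    """Standard deviation, in percentage points, of the % heads
    when tossing n_coins fair coins (binomial formula)."""
    return 100.0 * math.sqrt(p * (1 - p) / n_coins)

for n in (10, 100, 1000):
    print(f"P{n}: error std dev is about {pct_heads_sd(n):.1f} points")
# P10 works out to about 15.8 points, P100 to 5.0, and P1000 to 1.6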
The coin tossing contest illustrates that there are limits to the forecast accuracy we can achieve. We can’t assume that by applying more data, bigger computers, and more sophisticated software, or by exhorting our forecasters to work harder, we can always achieve the level of accuracy desired. It is important to understand the limits of forecast accuracy, and to understand what level of accuracy is reasonable to expect for a given demand pattern. The danger is that if you do not know what accuracy is reasonable to expect, you can reward inferior performance, or you can waste resources pursuing unrealistic or impossible accuracy objectives. You can also miss opportunities for alternative (non-forecasting) solutions to your business problems.
Announcement: Forecasting 101 Webinar (Wednesday July 21, 1pm ET)
Please join me on Wednesday July 21 for this month's installment of the SAS Applying Business Analytics Webinar Series. After a quick summary of business analytics in general, Forecasting 101 will cover some of the basic tenets of forecasting, as well as forecast value added analysis. If you are unable to attend the live event, the webinar will be recorded and available for free on-demand replay.