The "avoidability" of forecast error (Part 1)


"Forecastability" is a frequent topic of discussion on The BFD, and an essential consideration when evaluating the effectiveness of any forecasting process. A major critique of forecasting benchmarks is that they fail to take forecastability into consideration: An organization with "best in class" forecast accuracy may do so only because they have the easiest to forecast demand -- not because their forecasting methods are particularly admirable.

Thus, the underlying forecastability has to be considered in any kind of comparison of forecasting performance.

Alongside the general forecastability discussion is the question, "What is the best my forecasts can be?" Can we achieve 100% forecast accuracy (0% error), or is there some theoretical or practical limit?

It is generally acknowledged that, at the other extreme, the worst your forecasts should be is the error of the naive forecast (i.e., using a random walk as your forecasting method). You can achieve the error of the naive forecast with no investment in big computers or fancy software, or any forecasting staff or process at all. So the fundamental objective of any forecasting process is simply "Do no worse than the naive model."
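To make that baseline concrete, here is a minimal sketch (mine, not from the article) of computing the naive forecast's error from a demand history. The demand series and the use of MAE as the error metric are illustrative assumptions.

```python
# Minimal sketch: the naive (random walk) forecast simply carries the last
# observed value forward, so its error can be computed from the history alone.

def naive_forecast_mae(demand):
    """Mean absolute error of the one-step-ahead naive (random walk) forecast."""
    errors = [abs(actual - prior) for prior, actual in zip(demand, demand[1:])]
    return sum(errors) / len(errors)

# Illustrative demand history (assumed data, not from the article)
demand = [120, 135, 128, 150, 142, 160, 155, 171]
baseline = naive_forecast_mae(demand)
print(f"Naive forecast MAE -- the 'do no worse than this' baseline: {baseline:.1f}")
```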

"What is the best my forecasts can be?" is difficult, and perhaps impossible to answer. But a compelling new approach on the "avoidability" of forecast error is presented by Steve Morlidge in the Summer 2013 issue of Foresight: The International Journal of Applied Forecasting.

How Good Is a "Good" Forecast?

Steve Morlidge

Steve Morlidge is co-author (with Steve Player) of the excellent book Future Ready: How to Master Business Forecasting (Wiley, 2010). After many years designing and running performance management systems at Unilever, Steve founded Satori Partners in the UK.

In his article, Steve examines the current state of thought on forecastability. He considers approaches using volatility (Coefficient of Variation), Theil's U statistic, Relative Absolute Error, Mean Absolute Scaled Error, Forecast Value Added (FVA), and "product DNA" (an approach suggested by Sean Schubert in the Summer 2012 issue of Foresight).
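For readers who want to see how a couple of these measures are calculated, here is a rough sketch of the Coefficient of Variation and the Relative Absolute Error, under simplifying assumptions (one-step-ahead errors, illustrative data). The published definitions, such as MASE's in-sample scaling, differ in their details.

```python
import statistics

# Rough sketch of two forecastability-related measures, with illustrative data.

def coefficient_of_variation(demand):
    """Volatility measure: standard deviation relative to the mean."""
    return statistics.stdev(demand) / statistics.mean(demand)

def relative_absolute_error(actuals, forecasts):
    """Ratio of the model's absolute error to the naive forecast's absolute error."""
    model_err = sum(abs(a - f) for a, f in zip(actuals[1:], forecasts[1:]))
    naive_err = sum(abs(a - p) for p, a in zip(actuals, actuals[1:]))
    return model_err / naive_err

actuals   = [120, 135, 128, 150, 142, 160, 155, 171]
forecasts = [118, 130, 131, 144, 145, 156, 158, 165]
print(f"CoV: {coefficient_of_variation(actuals):.2f}")
print(f"RAE: {relative_absolute_error(actuals, forecasts):.2f}  (below 1 beats the naive forecast)")
```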

Steve starts with an assertion that "the performance of any system that we might want to forecast will always contain noise." That is, outside the underlying pattern, rule, or signal guiding the behavior, there is some level of randomness. So even if we know the rule guiding the behavior, model it perfectly in our forecasting algorithm, and the rule doesn't change in the future, we will still have some amount of forecast error, determined by the level of randomness (noise). Such error is "unavoidable."
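A small simulation (my illustration, not from the article) makes the point: if demand follows a known rule plus random noise, a forecast that reproduces the rule exactly still carries error on the order of the noise.

```python
import random

# Illustrative simulation: demand follows a known rule (a level of 100 plus a
# trend of 2 per period) plus Gaussian noise. Even a forecast that reproduces
# the rule perfectly is left with error driven entirely by the noise.

random.seed(42)
periods = 1000
noise_sd = 10

signal = [100 + 2 * t for t in range(periods)]
actuals = [s + random.gauss(0, noise_sd) for s in signal]

# A "perfect" forecast of the underlying rule still misses the noise.
perfect_forecast_mae = sum(abs(a - s) for a, s in zip(actuals, signal)) / periods
print(f"MAE of the perfect-rule forecast: {perfect_forecast_mae:.1f}")
print(f"(roughly 0.8 * noise_sd = {0.8 * noise_sd:.1f} -- the unavoidable error)")
```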

Errors from the naive forecast are one way of measuring the amount of noise in data. From this, Steve makes the conjecture that "there is a mathematical relationship between these naive forecast errors and the lowest possible errors from a forecast."

We'll see where this conjecture leads in Part 2.

 

 


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), and editor of Business Forecasting: Practical Problems and Solutions. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS Forecasting software. Mike serves on the Board of Directors for the International Institute of Forecasters, and received the 2017 Lifetime Achievement in Business Forecasting Award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
