The "avoidability" of forecast error (Part 1)


"Forecastability" is a frequent topic of discussion on The BFD, and an essential consideration when evaluating the effectiveness of any forecasting process. A major critique of forecasting benchmarks is that they fail to take forecastability into consideration: An organization with "best in class" forecast accuracy may earn that distinction only because it has the easiest-to-forecast demand -- not because its forecasting methods are particularly admirable.

Thus, the underlying forecastability has to be considered in any kind of comparison of forecasting performance.

Along with the general forecastability discussion is the question "What is the best my forecasts can be?" Can we achieve 100% forecast accuracy (0% error), or is there some theoretical or practical limit?

It is generally acknowledged that, at the other extreme, the worst your forecasts should be is the error of the naive forecast (i.e., using a random walk as your forecasting method). You can achieve the error of the naive forecast with no investment in big computers or fancy software, or any forecasting staff or process at all. So the fundamental objective of any forecasting process is simply "Do no worse than the naive model."
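The naive baseline is simple to compute: each period's forecast is just the prior period's actual. A minimal sketch (the demand numbers are made up for illustration):

```python
def naive_mae(series):
    """Mean absolute error of the naive (random walk) forecast,
    where each period's forecast is the prior period's actual."""
    errors = [abs(actual - prior)
              for prior, actual in zip(series, series[1:])]
    return sum(errors) / len(errors)

demand = [100, 120, 90, 110, 105, 130, 95]
print(round(naive_mae(demand), 2))  # -> 22.5
```

Any forecasting process worth its cost should beat this number on the same data.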

"What is the best my forecasts can be?" is difficult, and perhaps impossible to answer. But a compelling new approach on the "avoidability" of forecast error is presented by Steve Morlidge in the Summer 2013 issue of Foresight: The International Journal of Applied Forecasting.

How Good Is a "Good" Forecast?

Steve Morlidge

Steve Morlidge is co-author (with Steve Player) of the excellent book Future Ready: How to Master Business Forecasting (Wiley, 2010). After many years designing and running performance management systems at Unilever, Steve founded Satori Partners in the UK.

In his article, Steve examines the current state of thought on forecastability. He considers approaches using volatility (Coefficient of Variation), Theil's U statistic, Relative Absolute Error, Mean Absolute Scaled Error (MASE), FVA, and "product DNA" (an approach suggested by Sean Schubert in the Summer 2012 issue of Foresight).

Steve starts with an assertion that "the performance of any system that we might want to forecast will always contain noise." That is, outside the underlying pattern or rule or signal guiding the behavior, there is some level of randomness. So even if we know the rule guiding the behavior, model it perfectly in our forecasting algorithm, and the rule doesn't change in the future, we will still have some amount of forecast error, determined by the level of randomness (noise). Such error is "unavoidable."
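This point is easy to demonstrate with a simulation. Below, demand follows a known linear trend plus random noise; even forecasting with the true trend itself (a "perfect" model), the remaining error is exactly the noise. The trend and noise levels are arbitrary choices for illustration:

```python
import random

random.seed(42)
signal = [50 + 2 * t for t in range(100)]           # the known "rule": linear trend
noise = [random.gauss(0, 10) for _ in range(100)]   # irreducible randomness
actuals = [s + n for s, n in zip(signal, noise)]

# Forecast with the true rule itself: the residual error is the noise
errors = [abs(a - s) for a, s in zip(actuals, signal)]
mae = sum(errors) / len(errors)
print(round(mae, 1))  # close to the mean absolute value of the noise, ~8
```

No improvement in method, software, or staffing can drive this error below the noise level; that portion is unavoidable.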

Errors from the naive forecast are one way of measuring the amount of noise in data. From this, Steve makes the conjecture that "there is a mathematical relationship between these naive forecast errors and the lowest possible errors from a forecast."
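One common way to operationalize this comparison is the Relative Absolute Error (one of the metrics Steve reviews): the ratio of a forecast's error to the naive forecast's error, where values below 1 mean the forecast beat the naive model. A minimal sketch with made-up numbers:

```python
def mae(actuals, forecasts):
    """Mean absolute error between actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

actuals   = [100, 120, 90, 110, 105, 130]
forecasts = [105, 112, 98, 104, 110, 122]  # some model's forecasts (illustrative)
naive     = [95, 100, 120, 90, 110, 105]   # prior period's actuals

rae = mae(actuals, forecasts) / mae(actuals, naive)
print(round(rae, 2))  # -> 0.38: the model's error is ~38% of the naive error
```

How far below 1 such a ratio can possibly go, given the noise in the data, is precisely the question Steve's conjecture addresses.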

We'll see where this conjecture leads in Part 2.

 

 


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
