The "avoidability" of forecast error (Part 3)


Suppose we have a perfect forecasting algorithm. This means that we know the "rule" guiding the behavior we are forecasting (i.e., we know the signal), and we have properly expressed the rule in our forecasting algorithm. As long as the rule governing the behavior doesn't change in the future, then any error in our forecasts is due solely to noise (i.e., randomness in the behavior about its rule).

So the perfect forecasting algorithm doesn't guarantee us perfect forecasts (with 0% error). But the perfect algorithm does give us errors as low as we can ever expect them to be, and the error we observe is "unavoidable."

Of course, in real life business forecasting we probably don't know the rule governing the behavior, and we have no assurance that the behavior isn't changing over time. So the forecasting models we use are probably imperfect. But how close are they to the best they can be?

How Good is a "Good" Forecast?

We ended last time with Steve Morlidge's "Unavoidability Ratio," which states that (under specified circumstances) the MSE of a perfect algorithm is one-half the MSE of a naive forecast (a random walk). So under the assumption of no trend or cyclical pattern to the historical data, and no impact from causal variables, about the best we can expect to do is to cut the MSE in half.
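To see why the ratio is one-half under those circumstances, note that if the series is just a constant level plus independent noise with variance &sigma;&sup2;, the naive forecast's error is the difference of two noise terms (variance 2&sigma;&sup2;), while the perfect algorithm's error is a single noise term (variance &sigma;&sup2;). Here's a minimal simulation sketch in Python illustrating this; the level and noise scale are arbitrary values of my choosing, not anything from Steve's paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed setup: a constant level plus i.i.d. noise -- no trend, no
# seasonality, no causal effects. Level and noise scale are arbitrary.
level = 100.0
noise_sd = 10.0
y = level + rng.normal(0.0, noise_sd, size=100_000)

# Naive (random walk) forecast: next period equals the current actual.
actuals = y[1:]
naive_forecast = y[:-1]
mse_naive = np.mean((actuals - naive_forecast) ** 2)

# "Perfect algorithm": we know the rule (the constant level), so we
# forecast the signal itself; the remaining error is pure noise.
mse_perfect = np.mean((actuals - level) ** 2)

print(f"MSE of naive forecast:    {mse_naive:.1f}")   # ~2 * sd^2 = 200
print(f"MSE of perfect algorithm: {mse_perfect:.1f}") # ~sd^2 = 100
print(f"ratio: {mse_perfect / mse_naive:.3f}")        # ~0.5
```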

The assumption of no trend or cyclical pattern, and no causal impacts, may sound implausible. But Steve argues that there are many real-life situations where these conditions hold, at least approximately. For example, supply chain forecasts are often made at a very granular level (e.g., item/location) in weekly buckets. As long as changes in the signal are low relative to the level of noise (which they probably will be in this scenario), the theoretical limit of forecasting performance should stay close to the 0.5 ratio.

In other situations the signal pattern may be complex, and a complex signal is liable to be more difficult to forecast than a simple one. Thus, "the theoretical possibility of improving performance would be offset by the practical difficulty of achieving it." So from a practical point of view, the proposed unavoidability ratio would still seem to make sense.

To summarize, the upper bound (worst case forecast error) is defined by the naive forecast: when the method being evaluated is itself the naive forecast, the ratio of the method's MSE to the naive forecast's MSE is 1.0. The lower bound (perfect algorithm) will have an MSE of one-half the MSE of the naive forecast. Thus, a rational forecast process will normally produce a ratio between 0.5 (perfect algorithm) and 1.0 (naive forecast). The next step is to test this approach with real life data -- and I promise we'll do this in the next installment!
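If the proposal holds, any candidate forecasting method can be scored by this ratio. Here is a hypothetical helper, sketched in Python; the name mse_ratio and its interface are my own illustration, not code from Steve's paper:

```python
import numpy as np

def mse_ratio(actuals, forecasts):
    """Ratio of a method's MSE to the naive (random walk) forecast's MSE.

    forecasts[t] is the method's forecast for actuals[t]. The naive
    forecast for period t is actuals[t-1], so both methods are scored
    on periods 1..n-1. A ratio near 1.0 means the method adds nothing
    over the naive forecast; the proposed floor is about 0.5.
    """
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecasts, dtype=float)
    mse_method = np.mean((a[1:] - f[1:]) ** 2)
    mse_naive = np.mean((a[1:] - a[:-1]) ** 2)
    return mse_method / mse_naive

# Example: forecasting a flat-but-noisy series with its known level
# should land near the 0.5 floor.
rng = np.random.default_rng(1)
y = 100.0 + rng.normal(0.0, 10.0, size=50_000)
print(mse_ratio(y, np.full_like(y, 100.0)))  # ~0.5
```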


