Changing the paradigm for business forecasting (Part 7 of 12)

The Means of the Defensive Paradigm

The Defensive paradigm pursues its objective by identifying and eliminating forecasting process waste. (Waste is defined as efforts that fail to make the forecast more accurate and less biased, or that even make the forecast worse.)

In this context, it may seem ridiculous to be talking so much about naïve models. How difficult can it be to forecast better than doing nothing and just using the last observation as your forecast? When it comes to real-life business forecasting, this turns out to be surprisingly difficult!

The Green and Armstrong study affirmed what has long been recognized: simple models can perform well. Of course, this doesn’t mean that a simple model will necessarily give you a highly accurate forecast. Some behaviors are highly erratic and essentially unforecastable, and no method will deliver highly accurate forecasts in those situations.

Even so, the simple methods tend to forecast better than the complex ones.

The 52%

In a series of rather disturbing articles published in Foresight since 2013*, Steve Morlidge has painted a grim portrait of the state of real-life business forecasting. He studied eight supply chain companies, covering 300,000 real-life forecasts that these companies were actually using to run their businesses. Morlidge found that a shocking 52% were less accurate than the no-change forecast!

How could this be?

You’d expect, just by chance, to sometimes forecast worse than doing nothing. But these companies were predominantly forecasting worse than doing nothing. Thankfully, Morlidge not only exposes this problem but also guides us toward a way of dealing with it in his Foresight articles. (See in particular Morlidge (2016) in the footnotes below.)

Forecast Value Added

The Defensive paradigm aligns very well with exposing and weeding out bad practices. It also pairs naturally with one of the tools we can use to identify harmful practices: Forecast Value Added (FVA) analysis. Let’s take a few moments to understand the FVA approach.

Forecast Value Added is defined as:

The change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

It is measured by comparing the results of a process activity to the results you would have achieved without doing the activity. So FVA can be positive or negative.
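
As a minimal sketch of how this comparison might look in code (assuming MAPE as the performance metric and a hypothetical two-step process of a statistical forecast followed by an analyst override; the numbers are purely illustrative, not from any study):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percentage points."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Illustrative data only.
actual     = [100, 120,  90, 110]
naive      = [ 95, 100, 120,  90]   # no-change forecast: the prior period's actual
stat_fcst  = [105, 115, 100, 105]   # statistical model's forecast
final_fcst = [110, 125,  95, 100]   # forecast after the analyst's override

# FVA of a step = metric without the step minus metric with the step,
# so a positive number means the step reduced the error.
fva_stat     = mape(actual, naive)     - mape(actual, stat_fcst)
fva_override = mape(actual, stat_fcst) - mape(actual, final_fcst)

print(f"FVA of statistical model vs. doing nothing: {fva_stat:+.1f} pts of MAPE")
print(f"FVA of analyst override vs. statistical:    {fva_override:+.1f} pts of MAPE")
```

In this made-up example the statistical model adds value over the no-change forecast, while the override subtracts a little, which is exactly the kind of finding FVA analysis is designed to surface.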

Relative Error Metrics

FVA is in the class of so-called “relative error” metrics because it involves making comparisons. A couple of others are:

  • Theil’s U, proposed over 50 years ago, can be characterized as the Root Mean Squared Error (RMSE) of your forecasting model divided by the RMSE of the no-change model (sketched in code below).

The interpretation is that:

  • The closer U is to zero, the better the model.
  • When U < 1, your model is adding value by forecasting better than the no-change model.
  • When U > 1, it means the model forecasts worse than doing nothing and just using the no-change model.
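
A minimal sketch of this ratio form of Theil’s U, assuming you already have the actuals, your model’s forecasts, and the no-change forecasts in hand (the function and variable names are my own, not a standard API):

```python
import numpy as np

def theils_u(actual, forecast, naive):
    """Ratio form of Theil's U: RMSE of the model over RMSE of the no-change model."""
    actual, forecast, naive = (np.asarray(x, float) for x in (actual, forecast, naive))
    rmse_model = np.sqrt(np.mean((actual - forecast) ** 2))
    rmse_naive = np.sqrt(np.mean((actual - naive) ** 2))
    return rmse_model / rmse_naive

# U < 1 means the model beats the no-change forecast; U > 1 means it does worse.
print(round(theils_u([100, 120, 90], [105, 115, 100], [95, 100, 120]), 2))
```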

Another metric is:

  • Relative Absolute Error (RAE), which compares the absolute forecast error of a model to the absolute error that would have been achieved with a no-change model (also sketched in code below).

Interpretation of the RAE is similar to interpreting Theil’s U:

  • RAE closer to zero is better.
  • When RAE < 1, you have positive value added: you are forecasting better than doing nothing.
  • However, when RAE > 1, you have negative FVA: you are just making the forecast worse.
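
A corresponding sketch for RAE; summing the absolute errors over the evaluation window is one reasonable aggregation choice here, not a prescribed formula:

```python
import numpy as np

def rae(actual, forecast, naive):
    """Relative absolute error: the model's absolute error relative to the no-change model's."""
    actual, forecast, naive = (np.asarray(x, float) for x in (actual, forecast, naive))
    return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual - naive))

# RAE < 1 corresponds to positive FVA versus doing nothing; RAE > 1 to negative FVA.
print(round(rae([100, 120, 90], [105, 115, 100], [95, 100, 120]), 2))
```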

As a side note, Morlidge and Goodwin concluded that an RAE of 0.5 may be about the lowest forecast error you can ever expect to achieve. So best-case performance is roughly cutting the error of the naïve forecast in half.

Morlidge coined the term “avoidable error” for any error in excess of 0.5 RAE. You can find more discussion in his Foresight articles.
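
One rough way to put that idea into code, taking the 0.5 RAE figure as the practical floor (the function name and threshold handling are illustrative, not Morlidge’s own notation):

```python
def avoidable_error(rae_value, floor=0.5):
    """Portion of the relative absolute error above the approximate 0.5 RAE floor."""
    return max(0.0, rae_value - floor)

# An RAE of 0.8 suggests roughly 0.3 RAE of avoidable error;
# anything at or below the floor is treated as effectively unavoidable.
print(f"{avoidable_error(0.8):.2f}")  # prints 0.30
```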


*Morlidge, S. (2013). How Good is a "Good" Forecast? Forecast errors and their avoidability. Foresight 30 (Summer 2013), 5-11.

Morlidge, S. (2014a). Do Forecasting Methods Reduce Avoidable Error? Evidence from Forecasting Competitions. Foresight 32 (Winter 2014), 34-39.

Morlidge, S. (2014b). Forecastability and Forecast Quality in the Supply Chain. Foresight 33 (Spring 2014), 26-31.

Morlidge, S. (2014c). Using Relative Error Metrics to Improve Forecast Quality in the Supply Chain. Foresight 34 (Summer 2014), 39-46.

Morlidge, S. (2015a). Measuring the Quality of Intermittent Demand Forecasts. Foresight 37 (Spring 2015), 37-42.

Morlidge, S. (2015b). A Better Way to Assess the Quality of Demand Forecasts: It's Worse than We've Thought! Foresight 38 (Summer 2015), 15-20.

Morlidge, S. (2016). Using Error Analysis to Improve Forecast Performance. Foresight 41 (Spring 2016), 37-44.

[See all 12 posts in the business forecasting paradigms series.]

