Why forecasts are wrong: Inadequate/unsound/misused software

A common mistake with bad or misused software is choosing a forecasting model based solely on the model’s “fit to history” (often referred to as “best fit” or “pick best” functionality). The software provides (or the forecaster builds) several competing models, which are then evaluated against recent history. The model with the best fit to history is selected to create forecasts of the future.

While "fit to history" is a relevant consideration in forecasting model selection, it should not be the only consideration -- as we see in this example:


The history consists of four weeks of actual sales: 5, 6, 4 and 7 units. You can see these as the four dots in each graph. Let us consider four models for forecasting future sales (reproduced in the short code sketch after the list):

  • Model 1 is the average of the four weeks of history and forecasts 5.5 units for Week 7. Model fit over the four points of history has a Mean Absolute Percent Error (MAPE) of 18%.
  • Model 2 is a least-squares regression line that shows an upward trend, and forecasts 7.2 units for Week 7. It has a fit error of 15% over the four weeks of history.
  • Model 3 is a quadratic equation with a fit error of only 8%, and it forecasts 16.5 units in Week 7.
  • Model 4 is a cubic equation that fits the history perfectly (fit error of 0%). It forecasts 125 units in Week 7.
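
To make the mechanics concrete, here is a minimal sketch in Python with NumPy (not the forecasting software the example came from). It places the history at weeks 1 through 4, fits polynomials of degree 0 through 3, and extrapolates each one to Week 7. The week numbering, the polyfit calls, and the little mape helper are all just illustrative assumptions; MAPE here is the mean of |actual - fitted| / actual, so the numbers come out close to, though not exactly matching, the rounded figures above.

    # Minimal sketch: fit the four candidate models (polynomials of degree 0-3)
    # to the four weeks of history and extrapolate each one to Week 7.
    import numpy as np

    weeks = np.array([1, 2, 3, 4])   # assumed placement of the history
    sales = np.array([5, 6, 4, 7])   # actual sales: 5, 6, 4 and 7 units

    def mape(actual, fitted):
        """Mean Absolute Percent Error: mean of |actual - fitted| / actual."""
        return np.mean(np.abs(actual - fitted) / actual) * 100

    candidates = {
        "Model 1 (average)":    0,   # a degree-0 polynomial is just the mean
        "Model 2 (trend line)": 1,   # least-squares straight line
        "Model 3 (quadratic)":  2,
        "Model 4 (cubic)":      3,   # with 4 points, a cubic fits them exactly
    }

    for name, degree in candidates.items():
        coeffs = np.polyfit(weeks, sales, degree)   # least-squares fit to history
        fitted = np.polyval(coeffs, weeks)          # in-sample fitted values
        forecast = np.polyval(coeffs, 7)            # extrapolate to Week 7
        print(f"{name}: fit MAPE = {mape(sales, fitted):.0f}%, "
              f"Week 7 forecast = {forecast:.1f}")

Running it shows the familiar pattern: the fit error shrinks as the polynomial order grows, while the Week 7 forecasts drift further and further from anything the history would justify.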

Hitting the "pick best" button would select Model 4, sending signals to your supply chain to start cranking up production.  But does this make any sense?

Remember, the objective is not to fit a model to history – it is to find an appropriate model for forecasting future weekly sales.

It so happens that fitting a model to history is easy. A MAPE of zero can be obtained in the fitting phase by using a polynomial of sufficiently high order. (With 4 data points, that would be a cubic equation.) Any forecasting software should be able to do this. And when that software is demonstrated to you by the vendor, most likely what you will be shown is the MAPE of the historical fit.  (Note to self: So that's why the MAPE in the demo is always 1%, yet when I start doing actual forecasting, the MAPE is more like 40%!?!?!?)

Having a perfect, or even good, fit to history is no guarantee that the model will generate accurate forecasts. There is often little relationship between historical fit and the accuracy of future forecasts, other than that forecast accuracy will nearly invariably be worse, and often much worse, than historical fit.
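
One way to see the gap is a simple holdout check (a hypothetical illustration, not part of the original example): fit each candidate on weeks 1 through 3 only and compare its prediction for Week 4 against the actual value of 7.

    # Hypothetical holdout check: fit on weeks 1-3, evaluate on the held-out week 4.
    import numpy as np

    weeks = np.array([1, 2, 3, 4])
    sales = np.array([5, 6, 4, 7])
    fit_x, fit_y = weeks[:3], sales[:3]          # fitting sample: weeks 1-3
    holdout_x, holdout_y = weeks[3], sales[3]    # holdout: week 4, actual = 7

    for name, degree in [("average", 0), ("trend line", 1), ("quadratic", 2)]:
        coeffs = np.polyfit(fit_x, fit_y, degree)
        fit_err = np.mean(np.abs(fit_y - np.polyval(coeffs, fit_x)) / fit_y) * 100
        prediction = np.polyval(coeffs, holdout_x)
        holdout_err = abs(holdout_y - prediction) / holdout_y * 100
        print(f"{name}: fit MAPE {fit_err:.0f}%, week-4 forecast {prediction:.1f}, "
              f"holdout error {holdout_err:.0f}%")

The quadratic fits the three in-sample weeks perfectly yet misses the held-out week by more than 100%, while the worst-fitting model in-sample, the simple average, comes closest. With enough history, this kind of out-of-sample comparison is a much sounder basis for model selection than fit to history alone.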

In this example, bad software (or misguided forecasters) using fit to history as the sole criterion for selecting the forecasting model would have chosen Model 4 and generated forecasts that appear entirely unreasonable. Given the very limited information we have, the simplest and worst-fitting models, such as the average or the trend line, would probably produce the most appropriate forecasts.
