A common mistake in bad or misused software is choosing a forecasting model based solely on the model’s “fit to history” (often referred to as “best fit” or “pick best” functionality). The software provides (or the forecaster builds) several competing models which are then evaluated against recent history. The model that has the best fit to history is selected to create forecasts of the future.
While fit to history is a relevant consideration when selecting a forecasting model, it should not be the only consideration -- as we see in this example:
- Model 1 is the average of the four weeks of history and forecasts 5.5 units for Week 7. Model fit over the four points of history has a Mean Absolute Percent Error (MAPE) of 18%.
- Model 2 is a least-squares regression line that shows an upward trend, and forecasts 7.2 units for Week 7. It has a fit error of 15% over the four weeks of history.
- Model 3 is a quadratic equation with a fit error of only 8%, and it forecasts 16.5 units in Week 7.
- Model 4 is a cubic equation that fits the history perfectly (fit error of 0%). It forecasts 125 units in Week 7.
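The four models above can be reproduced as polynomial fits of increasing degree. The sketch below uses a hypothetical four-week sales series (the actual data behind the example is not given, so the fit errors and forecasts will not match the numbers above exactly), but it shows the same pattern: the cubic interpolates the history perfectly, then extrapolates to an absurd Week 7 value.

```python
import numpy as np

# Hypothetical four weeks of history (weeks 3-6); the article's actual series is not given
weeks = np.array([3.0, 4.0, 5.0, 6.0])
sales = np.array([5.0, 4.0, 7.0, 6.0])

for degree, label in [(0, "average"), (1, "trend line"),
                      (2, "quadratic"), (3, "cubic")]:
    coeffs = np.polyfit(weeks, sales, degree)    # least-squares polynomial fit
    fitted = np.polyval(coeffs, weeks)
    fit_mape = np.mean(np.abs((sales - fitted) / sales)) * 100
    forecast = np.polyval(coeffs, 7.0)           # extrapolate one week ahead
    print(f"{label:10s}  fit MAPE = {fit_mape:5.1f}%   Week 7 forecast = {forecast:7.1f}")
```

With four data points, the degree-3 fit passes exactly through all of them (fit MAPE of 0%), yet its Week 7 extrapolation swings far outside the range of the history -- exactly the "pick best" trap.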
Hitting the "pick best" button would select Model 4, sending signals to your supply chain to start cranking up production. But does this make any sense?
Remember, the objective is not to fit a model to history – it is to find an appropriate model for forecasting future weekly sales.
It so happens that fitting a model to history is easy. A MAPE of zero can be obtained in the fitting phase by using a polynomial of sufficiently high order. (With four data points, that would be a cubic equation.) Any forecasting software should be able to do this. And when that software is demonstrated to you by the vendor, most likely what you will be shown is the MAPE of the historical fit. (Note to self: So that's why the MAPE in the demo is always 1%, yet when I start doing actual forecasting, the MAPE is more like 40%!?)
Having a perfect, or even good, fit to history is no guarantee that the model will generate accurate forecasts. There is often little relationship between historical fit and the accuracy of future forecasts -- other than that, nearly invariably, forecast accuracy will be worse (often much worse) than historical fit.
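One way to see this gap between fit and forecast accuracy is to evaluate candidate models on a holdout sample rather than on the history they were fitted to. The sketch below (again on hypothetical weekly sales, not the article's data) fits each polynomial to the first four weeks and then scores it on two held-out weeks:

```python
import numpy as np

# Hypothetical six weeks of sales; fit on the first four, validate on the last two
history = np.array([5.0, 4.0, 7.0, 6.0, 5.0, 6.0])
train, holdout = history[:4], history[4:]
t_train = np.arange(1.0, 5.0)    # weeks 1-4
t_hold = np.arange(5.0, 7.0)     # weeks 5-6

def mape(actual, predicted):
    return np.mean(np.abs((actual - predicted) / actual)) * 100

for degree in range(4):
    coeffs = np.polyfit(t_train, train, degree)
    fit_mape = mape(train, np.polyval(coeffs, t_train))
    holdout_mape = mape(holdout, np.polyval(coeffs, t_hold))
    print(f"degree {degree}:  fit MAPE = {fit_mape:5.1f}%   holdout MAPE = {holdout_mape:6.1f}%")
```

The ranking reverses out of sample: the cubic's fit MAPE is 0%, but its holdout error is by far the largest, while the simple average holds up best -- which is the point of the example.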
In this example, bad software (or misguided forecasters) using fit to history as the sole criterion for selecting the forecasting model would have chosen Model 4, generating forecasts that are entirely unreasonable. In this case the simplest and worst-fitting models, such as the average or the trend line, would probably produce the most appropriate forecasts given the very limited information we have.