Editorial comment: Forecast accuracy vs. effort

Let's end 2012-Q1 with a graphic editorial comment:

[Graphic: Forecast Accuracy vs. Effort]

Using a naïve model will achieve a certain level of forecast accuracy. That accuracy may be high if the demand is smooth and stable, or low if the demand is erratic. But you achieve this level of accuracy with virtually no cost or effort.
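To make the "virtually no cost or effort" point concrete, here is a minimal Python sketch (not from the original post, using made-up demand numbers) of the simplest naïve model: carry the last observed value forward and measure the resulting error with MAPE. This is the zero-effort baseline that everything else on the chart is compared against.

```python
# Minimal sketch (illustrative data): a naive forecast repeats the last
# observed value, and its error gives a zero-effort accuracy baseline.

def naive_forecast(history):
    """Forecast each period as the previous period's actual."""
    return history[:-1]          # forecasts for periods 2..n

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(errors) / len(errors)

demand = [100, 105, 98, 110, 102, 107]            # hypothetical smooth demand
print(mape(demand[1:], naive_forecast(demand)))   # baseline MAPE for the naive model
```

With smooth, stable demand like the series above, even this no-effort baseline produces a respectable error; with erratic demand the same calculation would show a much higher MAPE.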

By using good forecasting software that provides automatic modeling, you should achieve better forecast accuracy with slightly more cost and effort. If you have access to more sophisticated forecasting software along with a skilled analyst to fine-tune your models, you may be able to achieve even more accuracy, but with much more cost and effort. If these are important, high-value forecasts, then the extra cost and effort can be worth it.

For example, if you are a retailer selling the latest $5000 TV, you want that forecast to be as accurate as possible because mistakes are expensive. You don't want to forecast too low, carry too little inventory, and miss sales opportunities. Yet you don't want to forecast too high, fail to hit revenue projections, and be stuck with excess inventory.

On the other hand, if you are a hardware store selling inexpensive nuts & bolts, you don't need to worry too much about accurate forecasting of these items because there is very little financial impact. You can rely on simple inventory management policies (e.g., a two-bin system) to keep from going out-of-stock and annoying your customers.

Turning now to the left side of the chart, we find that the extra effort of making manual overrides actually makes the forecast worse in many situations! (Of course, manual overrides aren't always non-value adding, but they fail to add value often enough that this consideration is worth keeping in mind. See this discussion of research by Fildes and Goodwin.)

If you have reasonably stable demand and are getting usable forecasts from your automated statistical modeling, then there may be little need to provide manual overrides. Human judgment can then be utilized only when necessary, such as for new item forecasts, or when there has been some fundamental structural change in the demand pattern that the computer system does not yet know about.
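One common way to check whether overrides are earning their keep is a forecast value added (FVA) style comparison: measure whether the judgmental override actually beat the statistical forecast, and whether the statistical forecast beat the naïve baseline. The sketch below is an illustrative example with assumed numbers, not code from the post or from any SAS product.

```python
# Illustrative FVA-style check: did the manual override improve on the
# statistical forecast, and did the statistical forecast beat the naive model?

def mape(actuals, forecasts):
    errs = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(errs) / len(errs)

actuals     = [120, 115, 130, 125]   # hypothetical demand history
naive       = [118, 120, 115, 130]   # last period's actual carried forward
statistical = [121, 116, 128, 126]   # automated model output (assumed)
override    = [130, 110, 140, 120]   # analyst-adjusted numbers (assumed)

mape_naive, mape_stat, mape_ovr = (mape(actuals, f) for f in (naive, statistical, override))
print(f"FVA of statistical model vs. naive: {mape_naive - mape_stat:+.1f} points")
print(f"FVA of override vs. statistical:    {mape_stat - mape_ovr:+.1f} points")
# A negative second number means the overrides made the forecast worse.
```

In this made-up example the statistical model adds value over the naïve baseline, while the overrides subtract it, which is exactly the pattern the left side of the chart is warning about.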

Finally, if you want to get really bad forecasts and spend a lot of company resources doing it, let executive management have final approval of all your forecasts! This is the message that Fred Torbert conveyed in his Deadly Sin #5: Senior Management Meddling.
