When do you stop trying to improve forecast accuracy? (Part 2)

Little Richard - a forecasting resource? (see Sidebar below)

Last time we saw two situations where you wouldn't bother trying to improve your forecast:

  • When forecast accuracy is "good enough" and is not constraining organizational performance.
  • When the costs and consequences of a less-than-perfect forecast are low.

(Another situation was brought to my attention by Sean Schubert of Valspar: We should stop trying to improve forecasting when management stops asking us, "why are we so bad at forecasting?")

We also saw that if you are forecasting worse than a naive model, then that is no time to stop trying to improve!
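
If you want to see what that naive-model check looks like in practice, here's a rough sketch in Python. The demand history and the "sophisticated" forecast numbers are made up purely for illustration:

```python
# A minimal sketch with made-up numbers: compare your forecast's MAPE
# against a naive "repeat last period's actual" model.
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

history = np.array([100, 104, 98, 103, 107, 101, 99, 105])  # hypothetical demand
actuals = history[1:]                 # the periods being scored
naive   = history[:-1]                # naive forecast: last period's actual
my_fcst = np.array([101, 102, 100, 104, 105, 100, 102])     # your forecast for those periods

print(f"Naive MAPE: {mape(actuals, naive):.1f}%")
print(f"Your MAPE:  {mape(actuals, my_fcst):.1f}%")
# If your MAPE is worse than the naive MAPE, keep working -- this is no time to stop.
```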

Conceptual Limits on Forecast Accuracy

But suppose you are beating the accuracy of a naive model, perhaps handily. This raises questions like, "What is the best accuracy we can achieve?" (If we knew this upper limit, at least we'd know for sure when to stop trying to forecast better.)

Determining the upper limit on forecast accuracy is conceptually very simple: Determine the structure or "rule" governing the behavior, observe how much randomness there is in the behavior outside the rule, and assume that the same rule (and same amount of randomness) continues to govern the behavior into the future.

We would express the rule in a mathematical model and use it to generate forecasts.  If we had correctly figured out the rule, and correctly expressed it in our model, then those forecasts could no longer be improved. The size of the forecast error would be determined by the amount of randomness outside the rule.
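
To make that concrete, here is a toy simulation in Python. The "rule" here is just a hypothetical linear trend, and the noise level is an assumption, but it shows the point: even a correctly specified model can't forecast the randomness, so the noise sets the floor on error.

```python
# A minimal sketch (hypothetical rule and noise): demand follows a known
# linear trend plus random noise; forecasting with the true rule still
# leaves error equal to the noise -- the best any model can do.
import numpy as np

rng = np.random.default_rng(42)
periods = np.arange(100)
rule = 200 + 1.5 * periods                     # the true structure governing demand
noise = rng.normal(0, 10, size=periods.size)   # randomness outside the rule
demand = rule + noise

perfect_forecast = rule                        # forecasts from the correctly specified rule
errors = demand - perfect_forecast
print(f"Residual std dev: {errors.std():.1f}  (close to the noise std dev of 10)")
# No model, however elaborate, can forecast the noise away.
```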

But As A Practical Matter...

While all this is conceptually simple, it is a nightmare to implement in reality. We may never know for sure that we have figured out the rule governing the behavior. But even if we have figured it out, there is no promise that the behavior won't change and start following a different rule. And there is no promise the degree of randomness won't change either.

When dealing with the kinds of demand patterns you find for fast-moving consumer goods, I've used a practical rule of thumb: If I'm able to reduce forecast error by 10-20% compared to a simple model (a moving average or single exponential smoothing), then I'm probably doing about the best I can expect and should move on. (This means, for example, reducing MAPE from 40% to 36%-32%, or improving forecast accuracy from 60% to 64%-68%.) Of course, sometimes the simple model is the most appropriate model for forecasting, and you cannot improve on it at all.
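
The arithmetic for checking that rule of thumb is straightforward. Here is a quick sketch, with hypothetical error numbers, of the calculation:

```python
# A minimal sketch with hypothetical MAPEs: how much has the candidate model
# reduced error relative to a simple benchmark (moving average or single
# exponential smoothing)?
simple_model_mape = 40.0    # e.g., the moving-average benchmark
candidate_mape = 33.5       # e.g., your more elaborate model

reduction = 100 * (simple_model_mape - candidate_mape) / simple_model_mape
print(f"Error reduction vs. simple model: {reduction:.1f}%")

if reduction >= 10:
    print("In the 10-20% range you can realistically hope for -- probably time to move on.")
else:
    print("Less than 10% improvement -- the simple model may be the right one.")
```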

The point is not to beat yourself up, wasting time and resources pursuing a perfect forecast. If you are already beating a simple model by a good amount, you are probably forecasting about the best that can be expected. Unless these are high-value forecasts with a lot of impact on the organization, there is no need to squander resources trying to improve them further. Instead, focus resources on something else that matters.

Sidebar: Gerbil Pageants -- A Waste of Valuable Resources?

While we are on the topic of wasted resources, can gerbils teach us anything about business forecasting? Perhaps not.

However, the American Gerbil Society's annual pageant was held earlier this month in Bedford, MA. Although the AGS pageant may have no relevance whatsoever to business forecasting, I know that many of The BFD readers are avid fans of gerbils, and I wanted to share the news.

Per the highly informative AGS website, "A gerbil show is similar to a dog or cat show, except that it is for gerbils." (For those of you who might not know anything about dog or cat shows, they are similar to gerbil shows, except they are for dogs or cats.)

While my favorite contestant, Little Richard, didn't take home the prize, this important event merited full column accounts in The Christian Science Monitor and The Washington Post. As AGS vice-president Donna Anastasi stated, "Anyone can buy a $12 gerbil and get into the sport... It's very fun."

I couldn't agree more.

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
