When do you stop trying to improve forecast accuracy? (Part 1)


If the popularity of one's blog can be measured by the number of comments received, then The BFD has become quite popular.

Many of the comments are quite flattering, such as:

  • Hello, I check your blog like еvery week. Үour writing style is wittу, keep doing ωhat you're doing!
  • Vеry shortly this site will be famοus аmong all blogging and ѕitе-building users, ԁue to it's pleasant posts
  • Oh my goodness! Incredible article dude! Many thanks

Some are more tempered in their enthusiasm:

  • Looking at this article reminds me of my previous roommate!
  • The idea reminds myself a small amount of lipstick removal that's another account wholly.

A few are quite harsh in their critique:

  • I used to be able to find good information from your blog posts.

And one was just difficult to interpret:

  • I have read not one article on your blog. You’re a big lad.

(Any idea what that is supposed to mean?) I just hope our newly installed spam filtering software doesn't intercept all of these... I'll miss them.

Q: When Do You Stop Trying to Improve Forecast Accuracy?

Despite the popular exhortations to never quit, never give up, and never stop improving, there may be some good reasons to stop trying to improve your forecast, and focus resources elsewhere. Some rules of thumb:

1. Is your forecast accuracy good enough to meet your business needs? If so, don't waste resources building fancier models or developing a more elaborate process. If forecast accuracy is not constraining your overall performance, move on to the next problem.

2. Have you considered the consequences of a less-than-perfect forecast? If the costs and consequences are small, why waste time trying to get great forecasts? Or at least focus any improvement efforts on those forecasts that have the most impact on your business.

On the other hand, if you've conducted a rudimentary FVA analysis and determined that you are forecasting worse than a naive model, then this is no time to quit trying. The most fundamental objective of the forecaster is "First, do no harm." If all your people and software and elaborate processes are performing worse than a naive model, then there is room for improvement.
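The rudimentary FVA check described above can be sketched in a few lines: compute the error of your process forecast and of a naive (last-observed-value) forecast over the same periods, and take the difference. This is a minimal illustration, not the definitive FVA methodology; the function names and sample numbers are hypothetical, and MAE is used here just as one reasonable error metric.

```python
def mae(actuals, forecasts):
    """Mean absolute error over matched periods."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def fva(actuals, forecasts):
    """Forecast Value Added: naive-model error minus process error.

    Positive means the forecasting process beats the naive forecast;
    negative means the process is adding cost, not value.
    """
    # The naive forecast for period t is simply the actual from period t-1,
    # so we can only score from the second period onward.
    naive = actuals[:-1]
    return mae(actuals[1:], naive) - mae(actuals[1:], forecasts[1:])

# Hypothetical demand history and the process forecasts made for it.
actuals   = [100, 110, 105, 120, 115]
forecasts = [ 98, 112, 108, 118, 113]

print(f"FVA (MAE basis): {fva(actuals, forecasts):.2f}")  # positive here
```

If the result is consistently negative across your portfolio, the "First, do no harm" test is failing and the effort to improve is clearly warranted.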

More rules of thumb in the next installment...


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), and editor of Business Forecasting: Practical Problems and Solutions. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS Forecasting software. Mike serves on the Board of Directors of the International Institute of Forecasters, and received the 2017 Lifetime Achievement in Business Forecasting award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

1 Comment

  1. Sean Schubert:

    We should stop trying to improve forecasting when management stops asking us, "why are we so bad at forecasting?"

    Or:

    1) After we've automated FVA data collection for replenishment products (not always easy), and we consistently review our results and make changes to our forecasting process based on our findings.

    2) After we've built a tight feedback loop on forecasting new products, and we use historical results to 'temper' our enthusiasm if we've tended to be over-optimistic in the past (or vice versa).

    Summary:
    The best way to make the forecast better is to stop making it worse, and you're going to need data for that!
