When Some Information Isn't Necessary (Part 2)

In our last installment, we learned that some information is not really necessary. When facilities management dyed the toilet water purple to remind us it is non-potable, it didn't affect my earlier decision not to drink out of the toilet. Sometimes the information we receive as forecasters is not really necessary either.

As forecasters we are bombarded with information: management revenue targets, sales quotas, marketing plans, new product release schedules, production and inventory numbers, orders, shipments, and customer service levels, to name a few. What is just background noise, and what is useful for improving our forecasts? There is a tendency to react to each new piece of information and adjust our forecasts accordingly. But is this a sensible thing to do?

In a 2006 presentation at the Institute of Business Forecasting, Jack Harwell (now VP of Manufacturing, Distribution & Services at RadioShack), spoke of observing his analysts constantly revising their forecasts based on the latest bits of sales information.* With a background in manufacturing and familiarity with process control methods, Jack suspected the analysts were overreacting to the weekly point-of-sale (POS) reports. He found that only 25% of the analyst overrides were more accurate than a 5-week moving average! So the vast majority of the reaction to new POS information was wasted effort.

[Note: Jack Harwell is delivering the keynote address ("The Perfect Forecast") at the IBF Demand Planning & Forecasting: Best Practices Conference in Dallas, May 4-6.]
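If you want to run the same kind of sanity check on your own overrides, here is a minimal sketch of the comparison, assuming a simple table with one row per item-week containing the POS history, the analyst's override, and the actual demand (the column names and layout are my own illustrative assumptions, not something from Jack's presentation):

```python
# Minimal sketch (assumed data layout): what share of analyst overrides
# beat a naive 5-week moving average of POS history?
import pandas as pd

def share_of_overrides_beating_baseline(df: pd.DataFrame) -> float:
    """df has one row per item-week, sorted by week within item, with
    columns 'item', 'pos', 'override', and 'actual' (illustrative names)."""
    # Baseline: mean of the prior 5 weeks of POS, shifted so the forecast
    # uses only data available before the week being forecast.
    baseline = (
        df.groupby("item")["pos"]
          .transform(lambda s: s.shift(1).rolling(5).mean())
    )
    baseline_err = (df["actual"] - baseline).abs()
    override_err = (df["actual"] - df["override"]).abs()
    valid = baseline_err.notna()  # skip weeks without 5 weeks of history
    return (override_err[valid] < baseline_err[valid]).mean()
```

If the share that comes back is anywhere near the 25% Jack reported, most of that override effort is being wasted.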

In a study of four UK supply chain companies, Paul Goodwin and Robert Fildes examined the manual adjustments being made to statistical forecasts. While theirs was not a study of the kind of information forecasters receive, the authors did look at the size and direction of the adjustments -- and at whether those adjustments made the forecasts any better.

Of the 60,000 forecasts in the study, 75% were manually adjusted, so a huge amount of labor was expended. But how much did it help?

This chart (from Fildes and Goodwin, "Good and Bad Judgment in Forecasting: Lessons from Four Companies," Foresight: The International Journal of Applied Forecasting, Issue 8, Fall 2007, pp. 5-10; courtesy of Len Tashman, Editor) shows the percentage improvement (i.e., value added) achieved by manual overrides at one of these companies. Overrides are grouped into quartiles based on the size of the adjustment.

Among the noteworthy findings was that small adjustments had essentially no impact on forecast accuracy (see the first two quartiles). These small adjustments were simply a waste of time, as they did not make the forecast meaningfully better or worse. However, large adjustments, particularly large downward adjustments, tended to be beneficial.
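A rough sketch of that kind of quartile analysis is below, for anyone who wants to reproduce it on their own data. It assumes a table with one row per adjusted forecast containing the statistical forecast, the adjusted (final) forecast, and the actual; the column names and the percentage-improvement calculation are my own illustrative choices, not necessarily those used in the Foresight article.

```python
# Rough sketch (assumed data layout): accuracy improvement from manual
# adjustments, grouped into quartiles by the size of the adjustment.
import pandas as pd

def value_added_by_quartile(df: pd.DataFrame) -> pd.Series:
    """df has columns 'statistical', 'adjusted', and 'actual'
    (illustrative names), one row per adjusted forecast."""
    adj_size = (df["adjusted"] - df["statistical"]).abs()
    stat_err = (df["actual"] - df["statistical"]).abs()
    adj_err = (df["actual"] - df["adjusted"]).abs()
    # Quartiles of adjustment magnitude (assumes enough distinct sizes).
    quartile = pd.qcut(adj_size, 4,
                       labels=["Q1 (smallest)", "Q2", "Q3", "Q4 (largest)"])
    errs = pd.DataFrame({"stat_err": stat_err, "adj_err": adj_err,
                         "quartile": quartile})
    sums = errs.groupby("quartile", observed=True)[["stat_err", "adj_err"]].sum()
    # Positive = the adjustments reduced absolute error (added value).
    return 100.0 * (sums["stat_err"] - sums["adj_err"]) / sums["stat_err"]
```

Splitting upward and downward adjustments before grouping would let you check the other finding as well: whether the large downward adjustments are the ones doing most of the good.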

The conclusion I draw is that if you have a decently performing statistical forecast, it may make little sense to adjust it unless your new information changes the forecast significantly. Making small adjustments based on each little piece of new information that comes in will have no meaningful impact on accuracy anyway -- and so is just not worth the effort.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

2 Comments

  1. Pingback: Editorial comment: Forecast accuracy vs. effort - The Business Forecasting Deal

  2. Doug Jennings

    Michael,

    I read your book just over a year ago and I remember there was a section that talked about the differences between top-down, middle-out, and bottom-up forecasts. I am trying to reference that (and you) in my proposal to build a middle-out process. I loved the concept, and I reference your book often. Can you tell me where that section is in the book, or can you elaborate on that topic outside of the book?

    FYI, I can't view this blog from my work computer as our IT department blocks the site. Can you email me your response?

    Thanks,
    Doug
