Two weeks ago we looked at the first two steps in effecting forecasting process change:
- Justify your suspicions with data
- Communicate your findings
That was the easy part. So why is it that so many organizations realize they have a forecasting problem, yet are unable to do anything about it? A new case study by Fildes and Goodwin* (F&G) provides insight into how an inefficient forecasting process (with a forecasting support system at its heart) can persist for many years.
Most forecasting research has been about creating new modeling methods, or evaluating the performance of existing methods. Initiatives like the M4 Forecasting Competition (and the current M5) have contributed greatly to our knowledge in the area of modeling methods.
The objective of the F&G study, however, was to understand how forecasters go about their organizational tasks when using modeling methods through a forecasting support system. They used a case study approach, including direct observation of the forecasting process along with interviews of participants. This allowed for a deep account of how managers use and interact with their forecasting systems, and provides for better understanding of the psychological and political aspects that motivate each individual's behavior.
The study began in 2004, with visits by the researchers to the subject company (a regional subsidiary in the pharmaceutical industry) for interviews and observations. The existing forecasting system was generally well regarded, and thought to provide an accuracy improvement over the prior approach (although no data supported that belief). Nevertheless, a Six Sigma project had been initiated on forecasting because of the amount of effort required to produce forecasts, and concern that accuracy could be better.
The Forecasting Process
Pharmaceutical products commonly follow distinctive life-cycle patterns. The forecasters often used their judgment of what the forecast "should look like" to override the system generated forecast. This became the baseline forecast for consideration in the monthly product group review meetings, where "market intelligence" (MI) was applied by product management, and discussion ensued until forecasts were jointly agreed upon.
An important question at this point, when applying the Forecast Value Added mindset, is whether these adjustments to the original statistical forecasts improved accuracy. F&G had great difficulty investigating the effect of judgmental overrides, because the original automatic statistical baseline forecasts were not recorded! (This is an extremely common and unfortunate oversight -- please don't repeat it at your organization!) So the best they could do was simulate what those original forecasts would have been, using software with a similar algorithm.
Analysis suggested that the judgmentally adjusted baseline forecasts were slightly less accurate than the original statistical forecasts -- not much harm done, just a lot of wasted effort. Looking next at the MI adjustments to the baseline forecasts, moderate improvements were sometimes seen. However, just 51.3% of the MI adjustments improved accuracy (the most successful adjustments tended to be larger). And less than 45% of the smallest adjustments improved accuracy.
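This kind of analysis is straightforward if you record both the statistical baseline and the adjusted forecast each cycle. As a minimal sketch (all numbers below are hypothetical, not from the F&G study), an FVA comparison might look like:

```python
# Illustrative Forecast Value Added (FVA) check: compare the accuracy of
# judgmentally adjusted forecasts against the statistical baseline.
# The data are made up for illustration only.

def abs_pct_error(forecast, actual):
    """Absolute percent error for a single period."""
    return abs(forecast - actual) / abs(actual) * 100

# (statistical forecast, adjusted forecast, actual) per period -- hypothetical
records = [
    (100, 106, 105),
    (200, 180, 195),
    (150, 155, 149),
    (300, 305, 310),
]

stat_mape = sum(abs_pct_error(s, a) for s, _, a in records) / len(records)
adj_mape = sum(abs_pct_error(j, a) for _, j, a in records) / len(records)

# FVA of the adjustment step: positive means the overrides helped on average
fva = stat_mape - adj_mape

# Share of individual adjustments that moved the forecast closer to the actual
improved = sum(abs(j - a) < abs(s - a) for s, j, a in records) / len(records)

print(f"Statistical MAPE: {stat_mape:.1f}%")
print(f"Adjusted MAPE:    {adj_mape:.1f}%")
print(f"FVA of overrides: {fva:+.1f} points")
print(f"Adjustments improving accuracy: {improved:.0%}")
```

The point is not the specific error metric -- any consistent one will do -- but that the comparison is impossible unless the untouched statistical forecast is saved before anyone overrides it.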
A good takeaway from this, and similar findings in other studies, is that small adjustments are not worth the effort. Even if the adjustment is directionally correct and makes the forecast more accurate -- will there be any impact? A small adjustment makes, at best, a small improvement in accuracy. If nobody notices, and no better decisions or actions are taken, you've simply wasted time.
To be continued...
Fildes, R. & Goodwin, P. (2020). Stability and innovation in the use of forecasting systems: a case study in a supply-chain company. Department of Management Science Working Paper 2020:1. Lancaster University.