The difficult step in effecting forecasting process change (2 of 2)

Fildes and Goodwin (F&G) observed that the subject (the regional subsidiary of a pharmaceutical company) was using a statistical forecasting system, but not fully trusting its output. Forecasters were overriding the system-generated forecast to make it conform to what they believed it should look like (e.g., following a life-cycle curve rather than the linear projection made by the system). And the forecasters' overrides were further adjusted by "market intelligence" from product management.

Statistical Forecast ⇒ Baseline Adjustment ⇒ Market Intelligence Adjustment 

Yet when F&G analyzed the "value added" by all of this management effort, the results were underwhelming (and often negative). "This raises the question: Why did managers in a company operating in a highly competitive environment adopt such an inefficient approach to an activity as crucial as demand forecasting?"

Why the Process Remained

The forecasting system in use was highly regarded, despite weaknesses in its statistical forecasting capabilities, because it allowed for easy manipulation of the baseline forecasts. By classifying the interests of the actors associated with the forecasting process, F&G came to understand the organization's motivation for perpetuating a flawed process. What had formed was a "stable network of aligned interests."

It was like being in the bizarro world (my characterization, not F&G's). "The fact that the forecasting system produced linear extrapolations, when managers perceived underlying trends to be non-linear, was paradoxically a factor that assisted in securing its acceptance. It provided a pretext for interventions..."

Let's look at the interests of process participants:

  • Software vendor: Wanted to sell its forecasting system and maximize profits. By making it easy to adjust system-generated forecasts, the vendor kept users happy (feeling in control), and also reduced the chances that the system-generated forecasts would be blamed for bad results. Happy users pay for maintenance and upgrades.
  • Logistics managers (creators of the baseline forecast): Wanted to maintain their status with colleagues by producing credible baseline forecasts. They intervened in the system-generated forecasts by tweaking the models, or otherwise making adjustments to create a plausible baseline.
  • Middle managers: Wanted to be seen using an advanced system with sophisticated methods, while still easily able to control the forecasts. Product managers demonstrated their expertise by providing market intelligence, and also could push the forecasts in a direction that suited their interests.
  • Senior managers (including accountants): Wanted timely forecasts to support planning and avoid the costs of forecast errors. The forecasting system generated reports and graphs allowing all process participants to review and provide input, while the senior managers could still oversee and monitor the process.

Thus, all actors had a stake in resisting change to the existing system and process! So despite a Six Sigma initiative to improve forecasting, no evidence was collected on the value added by the different tasks that contributed to the final forecast. And there was no serious consideration of finding a more appropriate forecasting system.

Why the Forecasting System Changed

Fourteen years after their initial observations, F&G returned to the subject pharmaceutical subsidiary. They found the once stable system had changed dramatically, with new software and new processes. So what happened?

It turns out the change didn't come from within the subsidiary (from the Six Sigma initiative or the process participants themselves). Rather, change came from the imposition of a centralized demand management team at corporate, to align processes and software across all the regional subsidiaries. "The outstanding driver of change here was the top-down requirement for standardization." Whether the change has improved forecasting performance is unclear, however, because there is as yet no empirical evidence to substantiate improvement.

What We Can Learn from this Case Study

F&G examined the individual biases and motivations of participants, and the organizational barriers embodied in stable networks. "No elements of the established network facilitated process improvement or provided incentive for individuals to change..." This organizational / psychological perspective should be kept in mind for anyone struggling to effect meaningful change.

We also see the importance of tracking the appropriate data -- here the original system-generated forecast, along with all stages of adjustments. This kind of information -- whether management efforts are improving upon the statistical forecast (or even upon a naive forecast) -- is an essential first step in determining process performance. Without at least a rudimentary FVA analysis, you don't really know whether your existing process is good or bad.
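To make this concrete, here is a minimal sketch of what a rudimentary FVA analysis might look like, assuming you have recorded the actuals alongside the forecast produced at each process stage. The data values and stage names below are hypothetical, invented purely for illustration; they are not from the F&G study.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly demand and the forecast recorded at each stage.
actuals     = [100, 110, 105, 120, 115, 130]
naive       = [95, 100, 110, 105, 120, 115]   # naive forecast: last period's actual
statistical = [102, 108, 107, 116, 118, 126]  # system-generated baseline
adjusted    = [110, 115, 100, 125, 110, 140]  # after overrides + market intelligence

stages = {"naive": naive, "statistical": statistical, "adjusted": adjusted}
errors = {name: mape(actuals, fcst) for name, fcst in stages.items()}

# FVA of a stage = reduction in MAPE versus the preceding stage.
# A negative FVA means that stage made the forecast worse.
fva_statistical = errors["naive"] - errors["statistical"]
fva_adjusted    = errors["statistical"] - errors["adjusted"]

for name, err in errors.items():
    print(f"{name:12s} MAPE = {err:5.2f}%")
print(f"FVA of statistical model:    {fva_statistical:+.2f} pts")
print(f"FVA of judgmental overrides: {fva_adjusted:+.2f} pts")
```

In this made-up example the judgmental overrides have negative FVA -- exactly the pattern F&G found in the subsidiary's process. The key discipline is simply capturing each stage's forecast before the next stage overwrites it.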

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), editor of Business Forecasting: Practical Problems and Solutions, and Associate Editor of Foresight: The International Journal of Applied Forecasting. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS Forecasting software. Mike serves on the Board of Directors of the International Institute of Forecasters, and received the 2017 Lifetime Achievement award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
