2 first steps to effecting forecasting process change


What if you suspect something is wrong with your forecasting process? What if the process is consuming too much time and too many resources, while still delivering unsatisfactory results (lousy forecasts)? What can you do about it?

This post looks at the first two steps to effecting meaningful forecasting process change -- the easy part! In the next post, we'll look at a fresh case study by Fildes & Goodwin* that considers the impetus for adopting new forecasting systems and processes. (Spoiler Alert: What makes change finally happen may not be what you think.)

1. Justify Your Suspicions With Data

If you are in an organization that records its forecasts and actuals, that's a good start! You are already ahead of the surprisingly (and unfortunately) large number of companies that fail to track even these most rudimentary building blocks of performance measurement.

But simply tracking your forecasts and actuals -- so you can compute MAPE or any other preferred error metric -- is not enough. (I'll keep referring to MAPE, although this discussion applies to whatever error metric you are using.) The problem is that a traditional error metric such as MAPE, by itself, tells you nothing but the magnitude of your forecast error. It doesn't let you draw conclusions about whether your forecasting process is good or bad.
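For illustration, here is a minimal sketch (in Python, with made-up numbers) of how MAPE is typically computed. The actuals and forecasts are hypothetical; the point is that the resulting number, on its own, says nothing about whether the process that produced the forecasts was worth the effort.

```python
# Minimal MAPE illustration with hypothetical actuals and forecasts.
actuals   = [100, 120,  90, 110]
forecasts = [ 95, 130, 100, 105]

# MAPE = mean of |actual - forecast| / |actual|, expressed as a percentage.
mape = 100 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)
print(f"MAPE: {mape:.1f}%")  # about 7.2% here -- but is that good or bad? MAPE alone can't say.
```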

Here is where Forecast Value Added (FVA) analysis steps in. (If you are unfamiliar with FVA, learn more in "FVA: A Reality Check on Forecasting Practices" in Foresight: The International Journal of Applied Forecasting (Spring 2013), or download the SAS whitepaper, Forecast Value Added Analysis: Step by Step.)

In its simplest application, FVA compares the performance your process is achieving with the performance you would have achieved by doing nothing, and just using a naive forecast (such as the "no-change" model). This is analogous to comparing the efficacy of a new drug to a placebo. If the new drug (or your forecasting process) doesn't deliver meaningful improvement versus the placebo (or the no-change forecast), then why bother?
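To make that concrete, here is a rough sketch (again in Python, with hypothetical data) of the simplest FVA comparison: the MAPE of the process forecast versus the MAPE of the no-change forecast, where each period's naive forecast is simply the prior period's actual.

```python
# Hypothetical history: actuals and the forecasts produced by your process.
actuals           = [100, 120,  90, 110, 105]
process_forecasts = [ 95, 130, 100, 105, 100]

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

# No-change (naive) forecast: this period's forecast = last period's actual.
# The first period has no prior actual, so compare from period 2 onward.
naive_forecasts = actuals[:-1]

process_mape = mape(actuals[1:], process_forecasts[1:])
naive_mape   = mape(actuals[1:], naive_forecasts)

# FVA = naive error minus process error; positive means the process adds value.
fva = naive_mape - process_mape
print(f"Process MAPE: {process_mape:.1f}%  Naive MAPE: {naive_mape:.1f}%  FVA: {fva:+.1f} pts")
```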

In its more advanced (but still simple) application, FVA compares the performance of sequential steps in the forecasting process. A typical process may be something like this:

Statistical Forecast ⇒ Analyst Override ⇒ Consensus Override ⇒ Executive Override ⇒ Final Forecast

An FVA analysis would determine whether each step is having an effect (good or bad) versus the prior steps. Again, if a step is not making the forecast any better, then why bother?
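A sketch of that step-by-step ("stairstep") comparison, using hypothetical numbers and the step names from the process above: compute the error at each step and the value each step adds, or destroys, relative to the step before it.

```python
# Hypothetical stairstep FVA report: MAPE at each process step,
# and the value each step adds versus the step before it.
actuals = [100, 120, 90, 110, 105]

# Forecasts recorded at each successive step of the process (made-up numbers).
steps = {
    "Statistical Forecast": [ 98, 118,  95, 108, 103],
    "Analyst Override":     [ 95, 125,  92, 112, 100],
    "Consensus Override":   [105, 130,  85, 115, 110],
    "Executive Override":   [110, 135,  80, 120, 115],  # becomes the final forecast
}

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)) / len(actuals)

prev_mape = None
for step, forecasts in steps.items():
    m = mape(actuals, forecasts)
    fva = "" if prev_mape is None else f"  FVA vs prior step: {prev_mape - m:+.1f} pts"
    print(f"{step:22s} MAPE: {m:5.1f}%{fva}")
    prev_mape = m
# A negative FVA at any step means that step made the forecast worse -- so why bother?
```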

2. Communicate Your Findings

Business is a cruel world. Nobody cares about your "feelings" that the forecasting process is defective. And maybe nobody will care when the hard data of an FVA analysis demonstrates your process is defective. But FVA can be weaponized, if necessary, to identify and shame the process owner and/or participants who are failing to add value.

Monetizing FVA results, when possible, can be even more effective than shame in bringing attention to your findings.

Better forecasts (presumably) lead to better business decisions (leading to lower costs, higher revenue, and more profit). However, the connection between forecast improvement and overall business performance is neither direct nor automatic, and that gap should not be overlooked.

For example, if the forecast accuracy improvement is small, it might not be noticed, and therefore not result in any different decisions. This is one reason why ROI calculators, which purport to show the monetary value of each percentage point of accuracy improvement, should be viewed with much skepticism. (For a skeptical view along these lines, see Lora Cecere's Forbes.com article "Does Better Forecasting Improve Inventory? Why I Don't Think So Anymore".)

Assuming the easy part is now done, and that it is obvious to those in power that process change is necessary, why is it still so difficult to make change happen? Why are process owners and participants motivated to maintain the status quo, even when process shortcomings are obvious? We'll look next at the research by Fildes & Goodwin.

————————

* Fildes, R. & Goodwin, P. (2020). Stability and innovation in the use of forecasting systems: a case study in a supply-chain company. Department of Management Science Working Paper 2020:1. Lancaster University.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

