Automatic forecasting and FVA (Part 1 of 2)


To properly evaluate (and improve) forecasting performance, we recommend our customers use a methodology called Forecast Value Added (FVA) analysis. FVA lets you identify forecasting process waste (activities that are failing to improve the forecast, or are even making it worse). The objective is to help the organization generate forecasts that are as accurate as can reasonably be expected (given the nature of what they are forecasting), and do this as efficiently as possible (using the fewest resources).

In its simplest form, FVA compares the accuracy of a Naïve forecast to the organization’s current forecasting method (usually some form of statistical forecasting with manual adjustments, or even an entirely manual process). FVA can also compare performance to alternative methods, like an automated statistical forecast (which can be generated from software such as SAS® Forecast Server or SAS® Forecasting for Desktop).

  • Naïve forecast: The standard choice is the “no change” model: the forecast is that there will be no change from the latest observation. If the organization has a supply lead time, such as two months, then the “no change” forecast for a particular month is the actual observed value from two months prior. So if 100 units were sold in November, the forecast for January is 100; if 125 are sold in December, the February forecast is 125, and so on. For highly seasonal data, you may instead use a seasonal “no change” from the corresponding period of the prior year, so the forecast for week 5 of 2017 is the actual from week 5 of 2016. (Both variants are sketched in code below.)
  • Manual forecast: Some organizations do not use forecasting software at all, relying instead on an entirely manual process where the forecast is based on management judgment.
  • Statistical forecast: Generated by statistical models that forecasters build in forecasting software, based on historical sales alone or including additional variables (such as pricing, promotions, and events).
  • Automated forecast: Generated entirely automatically by forecasting software with no human intervention (no tuning the models, and no manually adjusting the forecasts).

In the latter two cases, there is often the option to make manual adjustments to the computer-generated forecast.
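
To make the Naïve variants concrete, here is a minimal sketch in Python with pandas (the sales numbers, the two-month lead time, and the 52-week seasonal lag are illustrative assumptions, not values prescribed by any particular software):

```python
import pandas as pd

# Illustrative monthly sales history (hypothetical numbers).
sales = pd.Series(
    [100, 125, 110, 130, 120, 140],
    index=pd.period_range("2016-11", periods=6, freq="M"),
)

# Lag-k "no change" Naive: with a two-month supply lead time (k = 2),
# the forecast for each month is the actual from two months prior,
# so the January 2017 forecast is the November 2016 actual (100).
lead_time = 2
naive = sales.shift(lead_time)

# Seasonal "no change" Naive for weekly data: the forecast for week 5
# of 2017 is the actual from week 5 of 2016 (a lag of 52 weeks).
weekly = pd.Series(
    range(1, 105),
    index=pd.period_range("2016-01-04", periods=104, freq="W"),
)
seasonal_naive = weekly.shift(52)

print(naive.head(4))
```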

The accuracy of the Naïve forecast serves as the basis for comparison against all other forecasting activities. The Naïve forecast can be created automatically at virtually no cost, so it is important for the organization to understand what accuracy it can be expected to achieve. If the Naïve’s accuracy is “good enough” for the organization’s planning and decision-making purposes, then it makes sense to just use it and stop any manual or statistical forecasting efforts. (Why spend time and money on a costly forecasting process if the Naïve can generate “good enough” forecasts for free?)

In most situations, however, the organization seeks forecasts that are more accurate than what the Naïve can achieve. But a typical forecasting process, even when aided by statistical forecasting software, can consume a lot of management time, at considerable cost.

The FVA approach lets you focus your efforts on the areas most in need of improvement, such as those where your accuracy is worse than the Naïve forecast’s.
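
As a sketch of the comparison itself, FVA is simply the improvement in accuracy over the Naïve baseline. MAPE is used here only as one common accuracy measure (the approach does not prescribe a specific metric), and all the numbers are hypothetical:

```python
import pandas as pd

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * (actual - forecast).abs().div(actual).mean()

# Hypothetical actuals alongside the two forecasts being compared.
actual  = pd.Series([100, 125, 110, 130])
naive   = pd.Series([ 95, 100, 125, 110])  # lag-1 "no change" forecasts
process = pd.Series([105, 120, 118, 126])  # the organization's forecasts

# FVA = Naive error minus process error: positive means the process
# added value; negative means it made the forecast worse than doing nothing.
fva = mape(actual, naive) - mape(actual, process)
print(f"Naive MAPE {mape(actual, naive):.1f}%, "
      f"process MAPE {mape(actual, process):.1f}%, FVA {fva:+.1f} points")
```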

Many companies find that, overall, their forecasting process performs better than simply using the Naïve forecast. However, there are usually specific areas (products, locations, or other points in the organization’s forecasting hierarchy) where the Naïve forecast performs better. These should be investigated to see whether there are explainable reasons why the forecast is worse than the Naïve, and whether it can be improved. (Often the cause of such poor forecasting is political pressure within the organization: the forecast represents what management wants to happen, rather than an unbiased best guess of what really will happen.)

Upon doing FVA analysis, a surprisingly large number of companies find that overall they are forecasting worse than doing nothing and just using the Naïve forecast! If the forecasting process cannot be improved in these areas, then it is simply wasted effort, and should be eliminated in favor of using the Naïve.

Organizations often find that, overall, Automated forecasts are more accurate than what their current forecasting process produces. However, there will likely be some areas where the Automated forecast is less accurate than the existing process, or even less accurate than the Naïve forecast. Once these areas are identified, they can be investigated. (It is important to note that for some sales patterns, the Naïve “no change” model is the most appropriate forecasting model, and cannot be meaningfully improved upon.)
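
A side-by-side report makes those areas easy to spot. In this hypothetical sketch (the group names and MAPEs are made up for illustration), any row where the Automated forecast or the existing process fails to beat the Naïve gets flagged for investigation:

```python
import pandas as pd

# Hypothetical accuracy (MAPE, %) by product group for each method.
report = pd.DataFrame(
    {"naive": [30.0, 22.0, 45.0],
     "automated": [24.0, 25.0, 40.0],
     "process": [26.0, 20.0, 50.0]},
    index=["Group A", "Group B", "Group C"],
)

# FVA over the Naive baseline, in percentage points (higher is better).
report["fva_automated"] = report["naive"] - report["automated"]
report["fva_process"] = report["naive"] - report["process"]

# Flag groups where a method fails to add value over the Naive.
to_investigate = report[(report["fva_automated"] <= 0) | (report["fva_process"] <= 0)]
print(to_investigate)
```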

(to be continued...)


