Mercifully, we have reached the final installment of Q&A from the June 20 Foresight-SAS webinar, "Forecast Value Added: A Reality Check on Forecasting Practices." As a reminder, a recording of the webinar is available for on-demand review, and the Foresight article (upon which the webinar was based) is available for free download.
Before finishing the Q&A, congratulations to Paul Goodwin, newly named Fellow of the International Institute of Forecasters. Paul delivered the inaugural webinar in the Foresight-SAS series, "Why Should I Trust Your Forecasts?", and is the editor of (and a regular contributor to) the Hot New Research column in Foresight.
*** Forecast Value Added Q&A ***
Q: Is there software for doing FVA?
SAS Demand-Driven Forecasting provides some built-in FVA functionality. SAS Forecast Server can perform FVA analysis through a stored process developed by Snurre Jensen at SAS Denmark. With either of these products, you can enhance the FVA functionality by using the SAS language to develop your own analyses and reports. I'm not aware of any other software with built-in FVA capabilities.
Since FVA is not computationally intensive, and the math and reporting are not complicated, it is easy enough to write your own FVA reporting and analysis system. SAS is perfect software for this, and starter packages like SAS Analytics Pro or SAS Visual Data Discovery provide all the tools you need. JMP is also inexpensive and easy to use, and would be a good starting point for FVA.
Excel is sufficient for a quick one-time analysis of a few products, but it would be cumbersome and unsuitable for ongoing FVA tracking and reporting.
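To make the "write your own" option concrete, here is a minimal sketch of the core FVA calculation in Python (pandas assumed); the column names ('actual', 'naive', 'statistical', 'override') are hypothetical, and the same logic is easy to express in the SAS language:

```python
# Minimal FVA sketch: compare each process step's MAPE against a naive
# (random walk) forecast. Column names are hypothetical.
import pandas as pd

def mape(actual, forecast):
    """Mean absolute percent error, in percentage points."""
    return (100 * (forecast - actual).abs() / actual.abs()).mean()

def fva_report(df):
    """FVA of each process step versus the naive forecast, one row per step."""
    naive_mape = mape(df['actual'], df['naive'])
    rows = []
    for step in ['statistical', 'override']:
        step_mape = mape(df['actual'], df[step])
        rows.append({'process step': step,
                     'MAPE': round(step_mape, 1),
                     'FVA vs. naive (pct. points)': round(naive_mape - step_mape, 1)})
    return pd.DataFrame(rows)
```

A positive FVA means the step beat the naive forecast; a negative FVA means it made things worse.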
Q: How can we convert FVA from a percentage into dollars, using price or cost? Or is there another intrinsic cost involved?
FVA analysis lets you make assertions such as, "Management overrides made the forecast 5 percentage points worse, on average, than if we had just used the statistical forecast." FVA tells you how much value was added, or taken away, by the steps and participants in your forecasting process -- expressed in terms of MAPE or whatever other forecasting performance metric you use.
If you can derive a reliable dollar amount for the impact of each percentage point of change in forecasting performance, then you can convert the FVA results into dollars. There have been efforts to gauge the value of forecast improvement (a quick Google search leads to many references). One spreadsheet example, Calculate How Much Money You Will Save By Reducing Forecast Error, is available to IBF members.
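For example, suppose an internal study estimated that each percentage point of MAPE improvement is worth $50,000 a year in reduced safety stock and expediting. Both numbers below are purely hypothetical, and the conversion is only as good as that dollars-per-point estimate:

```python
# Purely hypothetical conversion of FVA (in MAPE percentage points) to dollars.
dollars_per_point = 50_000   # assumed annual value of one point of MAPE improvement
fva_points = 5.0             # e.g., the statistical forecast beat the overrides by 5 points

print(f"Estimated annual value: ${fva_points * dollars_per_point:,.0f}")  # $250,000
```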
Q: How do you select the right level of aggregation for FVA? What forecast horizon and how much history should you use?
Note that different participants may adjust the forecast at different levels, or through different hierarchical arrangements. You'll need to capture the data at the level where changes are made. (For example, a forecast analyst may adjust Item-level forecasts, while the marketing manager may override at the Product Group level, and a general manager or division president may adjust by Brand. It is worthwhile to record the data at all of these levels, to see who is adding value to the forecasting process and who isn't.)
By collecting data at the most granular level that is forecast, you can always aggregate for analysis and reporting at higher levels.
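Continuing the pandas sketch above, aggregation is just a group-by before the error calculation. The hierarchy column names ('brand', 'product_group') and the 'period' column are again hypothetical:

```python
# Aggregate granular data to a chosen hierarchy level, then compute FVA
# of the override versus the statistical forecast at that level.
def fva_at_level(df, level_cols):
    g = df.groupby(level_cols + ['period'], as_index=False)[
        ['actual', 'statistical', 'override']].sum()
    stat_mape = (100 * (g['statistical'] - g['actual']).abs() / g['actual'].abs()).mean()
    ovr_mape = (100 * (g['override'] - g['actual']).abs() / g['actual'].abs()).mean()
    return round(stat_mape - ovr_mape, 1)   # positive = the overrides added value here

# e.g., judge the marketing manager at Product Group level and the
# general manager at Brand level, where their changes were actually made:
# fva_at_level(df, ['brand', 'product_group'])
# fva_at_level(df, ['brand'])
```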
FVA analysis is generally based on the forecast made at lead time, which is usually a few days to a few months prior to the period being forecast.
The more historical periods over which you have forecasts and actuals, the better. You need to have enough data points to draw legitimate conclusions about whether the process is adding value or not. Over shorter periods of time, the observed differences in performance may be due to chance.
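One rough way to illustrate that point is to compare the period-by-period absolute percent errors of two forecasts with a paired test, and see whether the observed difference looks like more than noise. This is just one simple approach (SciPy assumed), not a substitute for the guidelines below:

```python
# Rough check on whether an FVA difference is more than chance: a paired
# t-test on per-period absolute percent errors. Inputs are pandas Series
# or NumPy arrays of equal length.
from scipy import stats

def fva_significance(actual, forecast_a, forecast_b):
    ape_a = 100 * abs(forecast_a - actual) / abs(actual)
    ape_b = 100 * abs(forecast_b - actual) / abs(actual)
    return stats.ttest_rel(ape_a, ape_b)   # a small p-value suggests a real difference
```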
I'll be providing more specific guidelines for data requirements and interpretation in a future blog post.