Journal of Business Forecasting columnist Larry Lapide is a longtime favorite of mine. As an industry analyst at AMR, and more recently as an MIT Research Affiliate, Larry has made his quarterly column a perpetual source of guidance for the practicing business forecaster. No wonder he received IBF's 2012 Lifetime Achievement in Business Forecasting award.
In the Fall 2015 issue, Larry takes another look at the hot topic of "forecastability" -- something he first touched on in his Winter 1998/99 column "Forecasting is About Understanding Variations."
In the earlier article, Larry introduced the metric Percent of Variation Explained (PVE), where
PVE = 100 x (1 - MAPE/MAPV)
In this formula, the Mean Absolute Percent Variation (MAPV) is simply the MAPE you would have achieved by forecasting the mean demand every period. Comparing the MAPE of your real forecasts to the MAPV indicates whether you have "added value" by forecasting better than just using the mean.
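To make the arithmetic concrete, here is a minimal sketch of the PVE calculation. The demand numbers are made up for illustration, and the function names are my own, not Lapide's:

```python
# Sketch of Lapide's PVE metric on hypothetical data.
# MAPV is the MAPE you'd get by forecasting the mean demand every period;
# PVE = 100 * (1 - MAPE/MAPV) measures improvement over that mean forecast.

def mape(actuals, forecasts):
    """Mean Absolute Percent Error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def pve(actuals, forecasts):
    """Percent of Variation Explained: 100 * (1 - MAPE/MAPV)."""
    mean_demand = sum(actuals) / len(actuals)
    mapv = mape(actuals, [mean_demand] * len(actuals))  # forecast = mean, every period
    return 100 * (1 - mape(actuals, forecasts) / mapv)

# Hypothetical weekly demand and forecasts
actuals   = [100, 120, 80, 110, 90]
forecasts = [105, 115, 85, 100, 95]
print(round(pve(actuals, forecasts), 1))  # positive => beat the mean-demand forecast
```

A PVE of 100 means a perfect forecast; a PVE at or below zero means you did no better than forecasting the mean.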
In spirit this approach is very similar to Forecast Value Added (FVA) analysis (which compares your forecasting performance to a naive model). As I discussed in The Business Forecasting Deal (the book), PVE is analogous to conducting FVA analysis over some time frame and using the mean demand over that time frame as the naive or "placebo" forecast.
The benefit of Lapide's approach is that it provides a quick and easy way to answer a question like "Would we have forecasted better [last year] by just using the year's average weekly demand as our forecast each week?" However, this method is not meant to replace full and proper FVA analysis, because it does not make a fair comparison: we don't know until the year is over what mean demand turns out to be, so the forecaster could never have used it in advance. This violates a principle for selection of the placebo, that it should be a legitimate forecasting method that the organization could use. (p. 108)
In short, we could never use the mean demand for the year as our real-life operating forecast, because we don't know what the mean demand is until the year is over!
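By contrast, a naive (random walk) forecast, which uses last period's actual as the next period's forecast, is a legitimate placebo: it is always known in advance. Here is a minimal FVA sketch using it; the data and function names are illustrative assumptions, not from the book:

```python
# Sketch of FVA with a legitimate placebo: the naive (random walk) forecast,
# i.e., forecast for period t = actual of period t-1. Unlike the year's mean,
# this is knowable in advance, so the comparison is fair.

def mape(actuals, forecasts):
    """Mean Absolute Percent Error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def fva(actuals, forecasts):
    """Forecast Value Added: naive MAPE minus your MAPE (positive = value added)."""
    naive = actuals[:-1]  # naive forecast for periods 2..n
    return mape(actuals[1:], naive) - mape(actuals[1:], forecasts[1:])

actuals   = [100, 120, 80, 110, 90]
forecasts = [100, 110, 90, 105, 95]
print(round(fva(actuals, forecasts), 1))  # positive => beat the naive forecast
```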
Larry's Fall 2015 JBF column looks at how forecastability can be used to segment an organization's product portfolio and guide the efforts of the forecaster or planner. A number of different segmentation schemes have been proposed, for example by:
- Alan Milliken of BASF (described in Larry's Fall 2009 JBF column)
- Charlie Chase of SAS (in Demand-Driven Forecasting (2nd edition), p. 100 )
- Eric Wilson of Tempur Sealy (see his webinar Risk Mitigation and Demand Planning Segmentation Strategies)
- Marcel Baumgartner of Nestle (see his SAS Global Forum paper How Predictive Analytics Turns Mad Bulls into Predictable Animals)
- Steve Morlidge of CatchBull (see his presentation Managing Forecast Performance from the 2016 International Symposium on Forecasting and several recent articles in Foresight).
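While these schemes differ in their details, a common starting point is to score each product on a forecastability proxy such as the coefficient of variation (CoV) of its demand history. The sketch below is a generic illustration of that idea, with an assumed threshold; it is not the specific method of any of the authors listed above:

```python
# Generic forecastability segmentation sketch (assumed 0.5 CoV threshold;
# not any cited author's specific scheme). Lower CoV suggests demand that
# is smoother and typically easier to forecast.

from statistics import mean, pstdev

def cov(demand):
    """Coefficient of variation: population std dev / mean of demand history."""
    return pstdev(demand) / mean(demand)

def segment(demand, threshold=0.5):
    """Label a demand series 'smooth' (more forecastable) or 'volatile'."""
    return "smooth" if cov(demand) < threshold else "volatile"

print(segment([100, 105, 95, 102, 98]))  # low variation  -> "smooth"
print(segment([10, 200, 5, 150, 30]))    # high variation -> "volatile"
```

In practice a planner might invest statistical modeling effort in the smooth segment and handle the volatile segment with safety stock or simpler methods, rather than chasing accuracy that the demand pattern cannot support.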
Why Knowing Forecastability Might Save Your Job
Management is fond of handing out performance goals, and inappropriate goals can get a forecaster in a lot of trouble. So it is essential for the forecaster to understand what forecast accuracy is reasonable to expect for any given demand pattern, and be able to push back when necessary.
Lapide argues that "sustained credibility" is the most important part of a forecaster's job review. This means management trusts your analysis and judgment, and believes you are delivering the most accurate forecast that can reasonably be expected, even if that accuracy is not as high as they would like.
Being able to explain what is reasonable to expect -- even if it is not what management wants to hear -- can establish that credibility.
(For more information, see 5 Steps to Setting Forecasting Performance Objectives.)