Mean Absolute Percent Error (MAPE) is the most commonly used forecasting performance metric, and also, for good reason, the most disparaged.
When we compute the absolute percent error the usual way, as
APE = |Forecast - Actual| / Actual
the result is undefined when Actual = 0. It can also produce huge percent errors when Actual is very small compared to the size of the error. (So definitely don't use MAPE to evaluate forecasting performance for intermittent demand.)
Since the usual calculation of APE has these flaws (and at least one more big one, to be discovered below), a veritable cottage industry has evolved around finding alternatives.
Percentage Errors Can Ruin Your Day
In their article "Percentage Errors Can Ruin Your Day" in the latest (Fall 2011) issue of Foresight, Stephan Kolassa and Roland Martin examine four proposed alternatives to the APE calculation:
- APEf (APE with respect to forecast): |Forecast - Actual| / Forecast
- sAPE (Symmetric APE): |Forecast - Actual| / ((Forecast + Actual)/2)
- maxAPE (APE with respect to the max of Actual and Forecast): |Forecast - Actual| / max{Forecast, Actual}
- tAPE (Truncated APE): min{APE, 1}
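As a rough sketch (function names are mine, not the authors'), the standard APE and the four proposed alternatives can be written as:

```python
def ape(f, a):
    """Standard APE: actual in the denominator."""
    return abs(f - a) / a

def ape_f(f, a):
    """APEf: forecast in the denominator."""
    return abs(f - a) / f

def sape(f, a):
    """sAPE: average of forecast and actual in the denominator."""
    return abs(f - a) / ((f + a) / 2)

def max_ape(f, a):
    """maxAPE: larger of forecast and actual in the denominator."""
    return abs(f - a) / max(f, a)

def t_ape(f, a):
    """tAPE: standard APE capped at 100%."""
    return min(ape(f, a), 1)

# The same 50-unit miss scores differently under each metric:
for metric in (ape, ape_f, sape, max_ape, t_ape):
    print(metric.__name__, metric(100, 50))
```

Note that only the denominator changes from one variant to the next, which is exactly why each one rewards a different kind of forecast.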
The authors suggest a simple dice tossing experiment to illustrate the implications for forecasting bias in each of these metrics. Suppose you take a standard six-sided die (assumed to be fair and balanced, not right or left leaning) and use each roll to simulate demand. Since the possible outcomes are 1, 2, 3, 4, 5, or 6, we should be able to agree that 3.5 is the "best" forecast. (Over a large number of rolls, the "average demand" will be very close to 3.5, with over- and under- forecasts roughly equal.) Our forecast of 3.5 for each roll is unbiased -- it won't be chronically too high or too low.
Carry out the tumbling die experiment, measure the forecast errors and bias, and you'll find that each flavor of APE can encourage a biased forecast. For example, if our company uses the standard MAPE to evaluate forecasting performance, we would minimize our forecast errors by always forecasting 2. In fact, always forecasting 1, 2, or 3 will, over the long haul, give us a MAPE less than the MAPE of always forecasting 3.5 or anything above that.
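You don't actually need to roll anything to check this: over many rolls, the MAPE of a constant forecast converges to its average APE across the six equally likely outcomes. A quick sketch of that calculation (my own, following the article's setup):

```python
def expected_mape(forecast):
    """Long-run MAPE of a constant forecast against a fair six-sided die."""
    return sum(abs(forecast - d) / d for d in range(1, 7)) / 6

# Forecasting 2 scores best, and 1, 2, and 3 all beat the
# unbiased forecast of 3.5 -- MAPE rewards forecasting low.
for f in (1, 2, 3, 3.5, 4):
    print(f"forecast {f}: expected MAPE {expected_mape(f):.3f}")
```

The asymmetry comes from the denominator: a low forecast is punished mildly on the frequent small rolls, while a forecast of 3.5 takes a 250% error every time a 1 comes up.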
So why should management care? Because if the sole job objective of their forecasters is to minimize MAPE, and those forecasters are smart enough to do the math (or eschew the math and instead read clever articles in Foresight), the forecasters will purposely forecast too low -- perhaps leading to chronic inventory shortages and lousy customer service.
4 Comments
Thanks for your Foresight shoutout, Mike. If your readers would like to check out our clever articles for themselves, they are welcome to download a complimentary issue here.