Tumbling dice


Mean Absolute Percent Error (MAPE) is the most commonly used forecasting performance metric and, for good reason, also the most disparaged.

When we compute the absolute percent error the usual way, as

APE = | Forecast - Actual | / Actual

the result is undefined when Actual = 0.  It can also produce enormous percent errors when Actual is small relative to the size of the error.  (So definitely don't use MAPE to evaluate forecasting performance for intermittent demand.)
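Both failure modes are easy to see in code.  A minimal sketch (the `ape` helper is just an illustrative name, not from the article):

```python
def ape(forecast, actual):
    """Absolute percent error: |Forecast - Actual| / Actual.
    Raises ZeroDivisionError when actual == 0 -- APE is undefined there."""
    return abs(forecast - actual) / actual

# A tiny actual blows the percentage up: forecasting 10 against an
# actual of 0.1 yields an APE of roughly 99, i.e. a 9,900% error.
print(round(ape(10, 0.1)))
```

And `ape(anything, 0)` simply crashes, which is why intermittent demand (lots of zero-demand periods) and MAPE don't mix.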

Since the usual calculation of APE has these flaws (and at least one more big one, to be discovered below), a veritable cottage industry has evolved around finding alternatives.

Percentage Errors Can Ruin Your Day

In their article "Percentage Errors Can Ruin Your Day" in the latest (Fall 2011) issue of Foresight,  Stephan Kolassa and Roland Martin examine four proposed alternatives to the APE calculation:

  • APEf (APE with respect to Forecast): | Forecast - Actual | / Forecast
  • sAPE (Symmetric APE): | Forecast - Actual | / ((Forecast + Actual) / 2)
  • maxAPE (Max of Actual and Forecast): | Forecast - Actual | / max{Forecast, Actual}
  • tAPE (Truncated APE): min{APE, 1}
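The four alternatives can be written directly from their definitions.  A minimal sketch (function names are my own, not Kolassa and Martin's):

```python
def ape(forecast, actual):
    # Standard APE: error relative to the actual
    return abs(forecast - actual) / actual

def ape_f(forecast, actual):
    # APEf: error relative to the forecast
    return abs(forecast - actual) / forecast

def sape(forecast, actual):
    # sAPE: error relative to the average of forecast and actual
    return abs(forecast - actual) / ((forecast + actual) / 2)

def max_ape(forecast, actual):
    # maxAPE: error relative to the larger of forecast and actual
    return abs(forecast - actual) / max(forecast, actual)

def t_ape(forecast, actual):
    # tAPE: standard APE, capped at 1 (i.e., at 100%)
    return min(ape(forecast, actual), 1)
```

For example, with Forecast = 8 and Actual = 10, APE and maxAPE are both 20%, APEf is 25%, and sAPE is about 22.2% -- the denominators differ, so the "percent error" differs too.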

The authors suggest a simple dice-tossing experiment to illustrate the implications for forecasting bias in each of these metrics.  Suppose you take a standard six-sided die (assumed to be fair and balanced, not right or left leaning) and use each roll to simulate demand.  Since the possible outcomes are 1, 2, 3, 4, 5, or 6, we should be able to agree that 3.5 is the "best" forecast.  (Over a large number of rolls, the average demand will be very close to 3.5, with over- and under-forecasts roughly equal.)  Our forecast of 3.5 for each roll is unbiased -- it won't be chronically too high or too low.

Carry out the tumbling die experiment, measure the forecast errors and bias, and you'll find that each flavor of APE can encourage a biased forecast.  For example, if our company uses the standard MAPE to evaluate forecasting performance, we would minimize our forecast errors by always forecasting 2.  In fact, always forecasting 1, 2, or 3 will, over the long haul, give us a MAPE less than the MAPE of always forecasting 3.5 or anything above that.
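You can verify the MAPE result without ever rolling a die: since the six faces are equally likely, the long-run MAPE of a constant forecast is just the APE averaged over the faces.  A minimal sketch (`expected_mape` is a name of my choosing):

```python
FACES = range(1, 7)  # the six equally likely outcomes of a fair die

def expected_mape(forecast):
    """Long-run MAPE of always forecasting `forecast` against a fair die:
    the APE averaged over the six equally likely actuals."""
    return sum(abs(forecast - actual) / actual for actual in FACES) / 6

# The unbiased forecast of 3.5 yields a long-run MAPE of about 71%,
# while forecasting 2 yields only about 52% -- forecasting low wins.
for f in (1, 2, 3, 3.5):
    print(f, round(expected_mape(f), 3))
```

Among constant forecasts, 2 minimizes the expected MAPE, and 1, 2, and 3 all beat the unbiased 3.5 -- exactly the perverse incentive the article warns about.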

So why should management care?  Because if the sole job objective of their forecasters is to minimize MAPE, and those forecasters are smart enough to do the math (or eschew the math and instead read clever articles in Foresight), the forecasters will purposely forecast too low -- perhaps leading to chronic inventory shortages and lousy customer service.





About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), editor of Business Forecasting: Practical Problems and Solutions, and Associate Editor of Foresight: The International Journal of Applied Forecasting. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS Forecasting software. Mike serves on the Board of Directors of the International Institute of Forecasters, and received the 2017 Lifetime Achievement award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.


