The Argument for Max(Forecast,Actual)


There is a long-running debate among forecasting professionals about whether to use Forecast or Actual in the denominator of percentage error calculations. The Winter 2009 issue of Foresight had an article by Kesten Green and Len Tashman, reporting on a survey (of the International Institute of Forecasters discussion list and Foresight subscribers) asking:

What should the denominator be when calculating percentage error?

In this non-scientific survey, 56% of respondents favored using Actual, 15% favored using Forecast, and 29% favored something else, such as the average of Forecast and Actual, an average of the Actuals in the series, or the average absolute period-over-period difference in the data (yielding a Mean Absolute Scaled Error, or MASE). One lone respondent favored using the Maximum(Forecast, Actual).

Fast forward to the new Summer 2010 issue of Foresight (p.46):

Letter to the Editor

I have just read with interest Foresight’s article "Percentage Error: What Denominator" (Winter 2009, p.36). I thought I’d send you a note regarding the response you received to that survey from one person who preferred to use in the denominator the larger value of forecast and actuals.

I also have a preference for this metric in my environment, even though I realize it may not be academically correct. We have managed to gain an understanding at senior executive level that forecast accuracy improvement will drive significant competitive advantage.

I have found over many years in different companies that there is a very different executive reaction to a reported result of 60% forecast accuracy vs. a 40% forecast error, even though they are equivalent! Reporting the perceived high error has a tendency to generate knee-jerk reactions and drive the creation of unrealistic goals. Reporting the equivalent accuracy metric tends to cause executives to ask the question “What can we do to improve this?” I know that this is not logical, but it is something I have observed time and again, and so I now always recommend reporting forecast accuracy to a wider audience.

But if you are going to use forecast accuracy as a metric, and you have specified the denominator to be either actuals or forecast, you will always have some errors that are greater than 100%. When converting these large errors to accuracy (accuracy being 1 – error), you end up with a negative accuracy result; this is the type of result that always seems to cause misunderstanding with management teams. A forecast accuracy result of minus 156% just does not seem to be intuitively understandable.

When you use the maximum of forecast or actuals as the denominator, the forecast accuracy metric is constrained between 0 and 100%, making it conceptually easier for a wider audience, including the executive team, to understand.

If the purpose of the metric is to identify areas of opportunity and drive improvement actions, using the larger value as the denominator and reporting accuracy as opposed to error enables the proper diagnostic activities to take place and reduces disruption caused by misinterpretation of the “correct” error metric.

To summarize, I use the larger value methodology for ease of communication to key personnel who are not familiar with the intricacies of forecasting process and measurement.

David Hawitt
SIOP Development Manager for a global technology company
davidhawitt@hotmail.co.uk

I have long favored "Forecast Accuracy" as a metric for management reporting, defining it as:

FA = {1 – [ ∑ |F – A| / ∑ Max (F,A) ] } x 100

where the summation is over n observations of forecasts and actuals. FA is defined to be 100% when both forecast and actual are zero. Here is a sample of the calculation over 6 weeks for two products, X and Y:
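The sketch below is a minimal Python illustration of this calculation; the six-week forecasts and actuals for products X and Y are hypothetical placeholders, not the figures from the original table.

# Forecast Accuracy with Max(F, A) in the denominator:
# FA = {1 - [ sum|F - A| / sum Max(F, A) ]} x 100, summed over all periods
def forecast_accuracy(forecasts, actuals):
    abs_error_total = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    denom = sum(max(f, a) for f, a in zip(forecasts, actuals))
    if denom == 0:          # every forecast and actual is zero
        return 100.0        # FA is defined to be 100% in this case
    return (1 - abs_error_total / denom) * 100

# Hypothetical six-week forecasts and actuals for products X and Y
x_forecast, x_actual = [100, 120, 110, 130, 125, 115], [90, 140, 105, 120, 150, 100]
y_forecast, y_actual = [50, 55, 60, 50, 45, 65], [70, 40, 60, 55, 50, 30]

print(f"Product X FA: {forecast_accuracy(x_forecast, x_actual):.1f}%")
print(f"Product Y FA: {forecast_accuracy(y_forecast, y_actual):.1f}%")

Because the larger of forecast and actual is the denominator in each period, the per-period error can never exceed 100%, so FA always falls between 0 and 100%.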

Like all forecasting performance metrics, Forecast Accuracy calculated with Max(F,A) in the denominator has its flaws -- and it certainly has an army of detractors. Yet the detractors miss the point that David so nicely makes. We recognize that Max(F,A) is not "academically correct." It lacks properties that would make it useful in other calculations. There is virtually nothing a self-respecting mathematician would find of value in it, except that it forces the Forecast Accuracy metric to always be scaled between 0 and 100%, thereby making it an excellent choice for reporting performance to management! If nothing else, it helps you avoid wasting time explaining the weird and non-intuitive values you can get with the usual calculation of performance metrics.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

3 Comments

  1. Agree that it takes time to explain the underlying assumptions of academic metrics, but it takes even more to explain that overstating the actuals results in a better forecast accuracy than understating them... E.g. (1) Forecast 70, Actual 50 -> error 20/70 ≈ 29%, accuracy ≈ 71%; vs. (2) Forecast 30, Actual 50 -> error 20/50 = 40%, accuracy 60% -- a difference of about 11 points.

  2. Really don't understand this, as by using max in the denominator you will always minimise the size of the error, as opposed to using a consistent reference. Then, depending on how you calculate a weighted error, you are not comparing apples with apples by SKU. A simpler way would be to always use either forecast or actuals, take the absolute error (i.e., ensure it is positive), and then take the max of 0 or the output. Much easier to explain that there is a point where the error is so large it's pointless trying to scale it (in this example the error is as big as the original denominator). Equally, when displaying bias you need the same denominator at all times, otherwise how do you know the direction of the error?

    • Mike Gilliland:

      I definitely do not suggest using Max(Forecast,Actual) as the denominator in the Bias calculation -- that would make no sense at all. For Bias I use Sum(Forecasts)/Sum(Actuals), expressed as a positive or negative percent. Thus, if Sum(Forecasts) = 1050 and Sum(Actuals) = 1000, then Bias = 100 * (1050/1000) - 100 = 5%. If instead Sum(Forecasts) = 975, then Bias = 100 * (975/1000) - 100 = -2.5%. (A short calculation sketch of this follows the comments below.)

      The sole reason for using Max(Forecast,Actual) in the denominator of the Forecast Accuracy calculation is that it scales the value to always fall between 0 and 100%. This makes it so simple that even a high-ranking business executive could understand it.

      Of course, forecast analysts, demand planners, and those responsible for various supply chain and inventory management decisions can (and should) use other metrics for their decision making purposes.

      The value of using Max(Forecast,Actual) is in reporting to management -- minimizing the need to spend time explaining things. Everyone understands a performance chart scaled 0-100%, and I would disagree with your contention that it is "Much easier to explain that there is a point where the error is so large it's pointless to try to scale it." If I'm discussing the forecast with management, I don't want to waste time trying to explain that.

      We agree that Max(Forecast,Actual) has many flaws, and that it imperfectly represents what is really going on. But as a practical matter, when dealing with forecasts (which themselves are always imperfect), I'm ok with it as a useful metric for management reporting.
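For completeness, here is a minimal Python sketch of the Bias calculation described in the reply above, using the figures quoted there (illustrative only, not production code):

# Bias = 100 * (Sum(Forecasts) / Sum(Actuals)) - 100
# Positive values indicate over-forecasting; negative values indicate under-forecasting.
def bias(sum_forecasts, sum_actuals):
    return 100 * (sum_forecasts / sum_actuals) - 100

print(f"{bias(1050, 1000):.1f}%")   # 5.0%
print(f"{bias(975, 1000):.1f}%")    # -2.5%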
