5 steps to setting forecasting performance objectives (Part 2)


And now for the five steps:

1. Ignore industry benchmarks, past performance, arbitrary objectives, and what management "needs" your accuracy to be.

Published benchmarks of industry forecasting performance are not relevant. See the prior post "The Perils of Forecasting Benchmarks" for an explanation.

Previous forecasting performance may be interesting to know, but it is not relevant to setting next year's objectives. We have no guarantee that next year's data will be equally forecastable. For example, suppose a retailer switches a product from everyday low pricing (which generated stable demand) to high-low pricing (where alternating on and off promotion generates highly volatile demand). You cannot expect to forecast the volatile demand as accurately as the stable demand.

And of course, arbitrary objectives (like "All MAPEs < 20%") or what management "feels it needs" to run a profitable business, are inappropriate.

2. Consider forecastability...but realize you don't know what it will be next year.

Forecast accuracy objectives should be set based on the "forecastability" of what you are trying to forecast. If something has smooth and stable behavior, then we ought to be able to forecast it quite accurately. If it has wild, volatile, erratic behavior, then we can't have such lofty accuracy expectations.

While it is easy to look back on history and see which patterns were more or less forecastable, we don't have that knowledge of the future. We don't know, in advance, whether product X or product Y will prove to be more forecastable, so we can't set a specific accuracy target for them.

3. Do no worse than the naïve model.

Every forecaster should be required to take the oath, "First, do no harm." Doing harm is doing something that makes the results worse than doing nothing. And in forecasting, doing nothing is utilizing the naïve model (i.e., the random walk, aka "no change" model), where your forecast of the future is your most recent "actual" value. (So if you sold 50 last week, your forecast for future weeks is 50. If you actually sell 60 this week, your forecast for future weeks becomes 60. Etc.)

You don't need fancy systems or people or processes to generate a naïve forecast -- it is essentially free. So the most basic (and most pathetic) minimum performance requirement for any forecaster is to do no worse than the naïve forecast.
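To make the mechanics concrete, here is a minimal sketch in Python of generating naïve forecasts by carrying the most recent actual forward (the numbers are illustrative, not from the post):

```python
# Naive (random walk / "no change") forecast: the forecast for the next
# period is simply the most recent actual value.
weekly_actuals = [50, 60, 55, 70]  # illustrative numbers only

def naive_forecasts(actuals):
    """Return the one-step-ahead naive forecast for each period.
    The first period has no prior actual, so it gets None."""
    return [None] + actuals[:-1]

print(naive_forecasts(weekly_actuals))  # [None, 50, 60, 55]
```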

4. Irritate management by not committing to specific numerical forecast accuracy objectives.

It is generally agreed that a forecasting process should do no worse than the naïve model. Yet in real life, perhaps 50% of business forecasts fail to achieve this embarrassingly low threshold. (See Steve Morlidge's recent articles in Foresight, which have been covered in previous BFD blog posts). Since we do not yet know how well the naïve model will forecast in 2015, we cannot set a specific numerical accuracy objective. So the 2015 objective can only be "Do no worse than the naïve model."
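As an illustration of what "do no worse than the naïve model" means in numbers, the sketch below compares the mean absolute error (MAE) of your forecasts against the MAE of the naïve forecasts over the same periods; a ratio above 1.0 means the forecasting process did worse than doing nothing. All values are hypothetical:

```python
# Compare a forecast's mean absolute error (MAE) against the naive model's MAE.
# A ratio > 1.0 means the forecasting process did worse than the naive model.
actuals        = [52, 61, 58, 66, 63]   # hypothetical values
your_forecasts = [50, 55, 60, 60, 70]   # hypothetical values
naive_fcsts    = [50] + actuals[:-1]    # carry the prior actual forward (first value assumed)

def mae(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

ratio = mae(your_forecasts, actuals) / mae(naive_fcsts, actuals)
print(f"MAE ratio vs. naive: {ratio:.2f}")  # < 1.0 means "do no worse than naive" is met
```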

If you are a forecaster, it can be reckless and career threatening to commit to a more specific objective.

5. Track performance over time.

Once we are into 2015 and the "actuals" start rolling in each period, we can compare our forecasting performance to the performance of the naïve model. Of course you cannot jump to any conclusions with just a few periods of data, but over time you may be able to discern whether you or the naïve model is performing better.

Always start your analysis with the null hypothesis, H0: There is no difference in performance. Until there is sufficient data to reject H0, you cannot claim to be doing better (or worse) than the naïve model.
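One simple way to formalize that comparison (a sketch of my own, not a method from the post; Morlidge's Foresight articles use a related relative-error approach) is a sign test on the per-period absolute errors: count the periods in which you beat the naïve model and ask how likely such a count would be under H0.

```python
from math import comb

def sign_test_p_value(wins, n):
    """Two-sided exact sign test: probability of a result at least this
    extreme if beating the naive model were a 50/50 coin flip (H0)."""
    k = max(wins, n - wins)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical tally after 12 periods: the forecaster beat the naive model 9 times.
p = sign_test_p_value(wins=9, n=12)
print(f"p-value = {p:.3f}")  # ~0.15 here: not yet enough evidence to reject H0
```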

Promotional Discount for APICS2014

If you'd like to hear more about setting performance objectives, and talk in person about your issues, please join me at APICS2014 in New Orleans (October 19-22). As a conference speaker, I've been authorized to provide an RSVP code NOLAF&F for you to receive a $100 discount off your registration at www.apicsconference.org.

My presentation "What Management Must Know About Forecasting" is Sunday October 19, 8:00-9:15 am -- just in time for you to be staggering in from Bourbon St.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
