Forecast Value Added Q&A (Part 6)


Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of the behavior? Or are there other, more in-depth ways to measure the forecastability of a behavior?

The MAPE of the naive forecast indicates the worst you should do in forecasting the behavior. You can use more sophisticated statistical models and human adjustments to try to achieve a better forecast, but how much better you can get is a difficult question. I'll be discussing a new approach to the "avoidability of forecast error" in a forthcoming blog post about a new Foresight article by Steve Morlidge. See the first question in this prior post on The BFD for additional references on the topic of forecastability.
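
For illustration, here is a minimal Python sketch (not from the original post, using made-up demand numbers) that computes the MAPE of the random-walk naive forecast. That figure serves as the benchmark any deployed forecasting process should beat:

```python
import numpy as np

def naive_mape(actuals):
    """MAPE of the random-walk naive forecast: each period is 'forecast'
    by the prior period's actual. Roughly the worst error you should
    tolerate -- anything you deploy ought to beat it."""
    actuals = np.asarray(actuals, dtype=float)
    forecast = actuals[:-1]   # naive forecast for periods 2..n
    actual = actuals[1:]      # the periods being forecast
    return 100 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Hypothetical monthly demand history
history = [120, 135, 128, 150, 142, 160, 155, 170]
print(f"Random-walk naive MAPE: {naive_mape(history):.1f}%")
```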

Q: How would we use a naive forecast with different statistical models within the same product family? Are we able to use it?

Yes, you should use a naive model with every product you are forecasting. It becomes the baseline of performance against which all your other forecasting models and processes are compared. If the naive model can forecast item X with a MAPE of 50% and your statistical model forecasts item X with a MAPE of 40%, then be happy: your statistical model is "adding value" by generating a more accurate forecast for this item. For other items, using other statistical models, the naive forecast may do better or worse than your other models. That is what you are trying to find out with FVA analysis. (If you find that a fancier model is usually performing worse than the naive model, then scrap the fancy model and use the naive.)
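
Here is a small sketch of that per-item comparison, assuming you have already computed the naive and statistical MAPEs for each item; the item names and numbers below are hypothetical:

```python
# Hypothetical per-item MAPEs (%) for the naive and statistical forecasts
mape = {
    "item_X": {"naive": 50.0, "stat": 40.0},   # statistical model adds value
    "item_Y": {"naive": 30.0, "stat": 35.0},   # statistical model destroys value
}

for item, m in mape.items():
    fva = m["naive"] - m["stat"]               # positive = value added
    action = "keep the statistical model" if fva > 0 else "scrap it and use the naive"
    print(f"{item}: FVA = {fva:+.1f} points -> {action}")
```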

Q: A naive question: can you give some example techniques for a naive model and a statistical model, and explain the main difference between them?

A naive model is something simple to compute, requiring the minimum of effort. You can think of a naive model as a "free forecasting system." Why waste company resources on fancy systems and elaborate forecasting processes if they aren't doing any better than a "free" naive model?

Another way to think of it is that a naive model is essentially the simplest statistical forecasting model you can create. Usually we use the random walk (sometimes referred to as NF1) as our naive model. The seasonal random walk (sometimes called NF2) is another traditional example.
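
As a sketch (assuming monthly data and at least one full 12-period season of history), NF1 and NF2 might look like this:

```python
def nf1(history, horizon):
    """NF1 (random walk): forecast every future period
    as the most recent actual."""
    return [history[-1]] * horizon

def nf2(history, horizon, season=12):
    """NF2 (seasonal random walk): forecast each future period
    as the actual from the same period one season earlier."""
    return [history[-season + h % season] for h in range(horizon)]

# Hypothetical monthly history (20 observations)
history = [100, 90, 110, 120, 105, 95, 130, 125, 115, 140, 150, 160,
           105, 92, 115, 124, 110, 98, 135, 128]
print(nf1(history, 3))   # repeats the last actual three times
print(nf2(history, 3))   # repeats the actuals from 12 months earlier
```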

Q: I am not clear about the use of the naive forecast... what if a company uses the stat forecast as the baseline... will that be considered the naive forecast?

No, the statistical forecast generated by the forecasting software would not be considered the naive forecast. (Unless the statistical forecast happens to be a random walk!)

A naive model is the "ultimate baseline" for comparison of forecasting performance. Usually, in doing FVA analysis, our first comparison is the naive forecast vs. the statistical forecast that was generated by the forecasting software. Most organizations then apply manual overrides to the statistical forecast to achieve the final forecast that feeds downstream planning systems. With FVA, we compare all of these to see whether the statistical forecast is better than the naive forecast, and whether management overrides are better than the statistical forecast.
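
Here is a rough sketch of that comparison using hypothetical numbers: it reports the MAPE at each step of the process and the value added (or destroyed) versus the step before it:

```python
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Hypothetical aligned monthly series
actual = [100, 110, 105, 120, 115, 130]
naive  = [ 95, 100, 110, 105, 120, 115]   # random walk: prior period's actual
stat   = [102, 108, 100, 118, 118, 126]   # software-generated statistical forecast
final  = [105, 112, 102, 125, 110, 128]   # final forecast after manual overrides

prev = None
for name, fc in [("Naive", naive), ("Statistical", stat), ("Final (override)", final)]:
    m = mape(actual, fc)
    note = "" if prev is None else f"   FVA vs. prior step: {prev - m:+.1f} pts"
    print(f"{name:18s} MAPE: {m:5.1f}%{note}")
    prev = m
```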

Q: When you forecast 12 months ahead, that means, for each month, you will forecast it 12 times. How do you calculate forecast accuracy for each month? How do we weight all 12 times we forecast that one month?

I'm not sure there is any reason to try to combine the forecasts over those 12 months, so no need to worry about weighting them.

For reporting forecasting performance, we typically choose the forecast made at the lead time. So if lead time is four months, you evaluate forecasting performance based on the forecast that existed four months prior to the period being forecast. If you'd like to also report forecasting performance 6 months out, 1 month out, etc., go ahead and do that if there is good reason to.

It can be an interesting exercise to compare forecasting performance 12 months out, 11 months out, ... 1 month out. We would expect forecasts to get better as we get closer to the period being forecast, but that isn't always the case. (And if it gets worse, that may be evidence of a misguided "hold and roll" practice.)

You measure forecast accuracy using whatever formula you choose, often some flavor of MAPE.
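
To make the lag comparison concrete, here is a sketch built on hypothetical forecast snapshots (each stamped with the month the forecast was made and the month it targets) that computes MAPE at each lag:

```python
import pandas as pd

# Hypothetical forecast snapshots: (month made, month forecast, forecast, actual)
rows = [
    ("2023-01", "2023-05", 100, 120),
    ("2023-02", "2023-05", 110, 120),
    ("2023-03", "2023-05", 115, 120),
    ("2023-04", "2023-05", 118, 120),
    ("2023-02", "2023-06", 125, 115),
    ("2023-03", "2023-06", 122, 115),
    ("2023-04", "2023-06", 118, 115),
    ("2023-05", "2023-06", 116, 115),
]
df = pd.DataFrame(rows, columns=["made", "target", "forecast", "actual"])

def lag_in_months(made, target):
    """Months between when the forecast was made and the target period."""
    (y1, m1), (y2, m2) = map(int, made.split("-")), map(int, target.split("-"))
    return (y2 - y1) * 12 + (m2 - m1)

df["lag"] = [lag_in_months(m, t) for m, t in zip(df["made"], df["target"])]
df["ape"] = 100 * (df["forecast"] - df["actual"]).abs() / df["actual"].abs()

# MAPE by lag: report the lead-time lag officially (e.g., 4 months),
# and check whether error actually shrinks as the lag gets shorter
print(df.groupby("lag")["ape"].mean().rename("MAPE (%)"))
```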


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
