Forecast Value Added Q&A (Part 6)


Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of the behavior? Or are there other, more in-depth ways to measure the forecastability of a behavior?

MAPE of the naive forecast indicates the worst you should be able to forecast the behavior. You can use more sophisticated statistical models and human adjustments to try to achieve a better forecast, but how much better you can get is a difficult question. I'll be discussing a new approach to the "avoidability of forecast error" in a forthcoming blog post about a new Foresight article by Steve Morlidge. See the first question in this prior post on The BFD for additional references on the topic of forecastability.

Q: How would we use a naive forecast with different statistical models within the same product family? Are we able to use it?

Yes, you should use a naive model with every product you are forecasting. It becomes the baseline of performance against which all your other forecasting models and processes are compared. If the naive model can forecast item X with a MAPE of 50% and your statistical model forecasts item X with a MAPE of 40%, then be happy: your statistical model is "adding value" by generating a more accurate forecast for this item. For other items, using other statistical models, the naive forecast may do better or worse than your other models. That is what you are trying to find out with FVA analysis. (If you find that a fancier model is usually performing worse than the naive model, then scrap the fancy model and use the naive.)

Q: A naive question: can you give some example techniques for a naive model and a statistical model, and explain the main difference between them?

A naive model is something simple to compute, requiring the minimum of effort. You can think of a naive model as a "free forecasting system." Why waste company resources on fancy systems and elaborate forecasting processes if they aren't doing any better than a "free" naive model?

Another way to think of it is that a naive model is essentially the simplest statistical forecasting model you can create. Usually we use the random walk (sometimes referred to as NF1) as our naive model. The seasonal random walk (sometimes called NF2) is another traditional example.
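To make the two traditional naive models concrete, here is a minimal sketch of NF1 and NF2 in Python, using made-up monthly demand numbers (the data and function names are illustrative, not from any particular forecasting package):

```python
def nf1(history):
    """Random walk (NF1): forecast = the last observed value."""
    return history[-1]

def nf2(history, season_length=12):
    """Seasonal random walk (NF2): forecast = the value from one season ago."""
    return history[-season_length]

# 15 months of hypothetical demand history
demand = [100, 120, 90, 110, 130, 95, 105, 115, 98, 125, 140, 110,
          102, 118, 92]

print(nf1(demand))  # 92  -- last month's actual
print(nf2(demand))  # 110 -- the actual from the same month one year ago
```

Either model produces a forecast for "free," which is exactly why it makes a fair baseline for FVA comparisons.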

Q: I am not clear about the use of the naive forecast... what if a company uses the stat forecast as the baseline... will that be considered the naive forecast?

No, the statistical forecast generated by the forecasting software would not be considered the naive forecast. (Unless the statistical forecast happens to be a random walk!)

A naive model is the "ultimate baseline" for comparison of forecasting performance. Usually, in doing FVA analysis, our first comparison is the naive forecast vs. the statistical forecast that was generated by the forecasting software. Most organizations then apply manual overrides to the statistical forecast to achieve the final forecast that feeds downstream planning systems. With FVA, we compare all of these to see whether the statistical forecast is better than the naive forecast, and whether management overrides are better than the statistical forecast.
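The comparison described above can be sketched in a few lines of Python. All the numbers below are hypothetical, and the simple MAPE helper is one common flavor of the metric (other variants exist):

```python
def mape(actuals, forecasts):
    """Mean absolute percent error, as a percentage."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

actuals  = [100, 110,  95, 105]
naive_fc = [ 90, 100, 110,  95]  # e.g. random-walk forecasts
stat_fc  = [ 95, 105, 100, 100]  # forecasting software output
final_fc = [ 98, 100,  90, 115]  # statistical forecast after manual overrides

naive_mape = mape(actuals, naive_fc)
stat_mape  = mape(actuals, stat_fc)
final_mape = mape(actuals, final_fc)

# FVA of each process step = reduction in MAPE vs. the prior step
print(f"Stat FVA over naive:    {naive_mape - stat_mape:+.1f} points")
print(f"Override FVA over stat: {stat_mape - final_mape:+.1f} points")
```

In this made-up example the statistical model adds value over the naive forecast, but the manual overrides make the forecast worse, which is precisely the kind of finding FVA analysis is meant to surface.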

Q: When you forecast 12 months ahead, that means, for each month, you will forecast it 12 times. How do you calculate forecast accuracy for each month? How do we weight all 12 forecasts of that one month?

I'm not sure there is any reason to try to combine the forecasts over those 12 months, so no need to worry about weighting them.

For reporting forecasting performance, we typically choose the forecast made at the lead time. So if lead time is four months, you evaluate forecasting performance based on the forecast that existed four months prior to the period being forecast. If you'd like to also report forecasting performance 6 months out, 1 month out, etc., go ahead and do that if there is good reason to.
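One way to organize this lag-based reporting is to key each forecast by the month being forecast and the number of months ahead it was made, then evaluate at the lag matching the lead time. A minimal sketch, with hypothetical data and an assumed four-month lead time:

```python
# forecasts[target_month][lag] = forecast made `lag` months before target_month
forecasts = {
    "2023-06": {1: 105, 2: 110, 3: 100, 4: 120},
    "2023-07": {1:  95, 2: 100, 3: 105, 4:  90},
}
actuals = {"2023-06": 100, "2023-07": 98}

LEAD_TIME = 4  # assumed lead time in months

# Evaluate each month against the forecast that existed at the lead time
errors = [abs(actuals[m] - forecasts[m][LEAD_TIME]) / actuals[m]
          for m in actuals]
lag4_mape = 100 * sum(errors) / len(errors)
print(f"MAPE at lag {LEAD_TIME}: {lag4_mape:.1f}%")
```

The same table supports reporting at lag 6, lag 1, or any other horizon, simply by changing which lag is pulled from the dictionary.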

It can be an interesting exercise to compare forecasting performance 12 months out, 11 months out, ... 1 month out. We would expect forecasts to get better as we get closer to the period being forecast, but that isn't always the case. (And if it gets worse, that may be evidence of a misguided "hold and roll" practice.)

You measure forecast accuracy using whatever formula you choose, often some flavor of MAPE.
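For reference, here is the plain (unweighted) flavor of MAPE as a short Python function; weighted MAPE and symmetric MAPE are common alternatives, and none of this is specific to any particular software:

```python
def mape(actuals, forecasts):
    """MAPE = mean of |actual - forecast| / |actual|, as a percentage.

    Note: undefined when any actual is zero, a well-known limitation
    of percentage-error metrics.
    """
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

print(mape([100, 200, 50], [110, 180, 55]))  # -> 10.0
```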


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), editor of Business Forecasting: Practical Problems and Solutions, and Associate Editor of Foresight: The International Journal of Applied Forecasting. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS Forecasting software. Mike serves on the Board of Directors of the International Institute of Forecasters, and received the 2017 Lifetime Achievement award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

