Q: How would you set the target for demand planners: all products at 0.7? All at the practical limit (0.5)? A: In principle, forecast error can be brought down to the practical limit of an RAE of 0.5. Whether it is sensible to attempt this for all products irrespective
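RAE here is the forecast's error relative to the error of the naive ("no change") model. A minimal sketch of the calculation, assuming a one-step-ahead naive benchmark (the function name and interface are illustrative, not from the original):

```python
def rae(actuals, forecasts):
    """Relative Absolute Error: MAE of the forecast divided by the
    MAE of the one-step-ahead naive ("no change") forecast."""
    n = len(actuals) - 1
    # Naive forecast for period t is the actual from period t-1.
    naive_mae = sum(abs(actuals[t] - actuals[t - 1])
                    for t in range(1, len(actuals))) / n
    # Compare the supplied forecasts over the same periods (index 0 unused).
    fc_mae = sum(abs(a - f)
                 for a, f in zip(actuals[1:], forecasts[1:])) / n
    return fc_mae / naive_mae
```

An RAE of 1.0 means no improvement over the naive model; in Morlidge's framework, values approaching 0.5 are near the practical limit of avoidable error.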
Q: How important is it to recognize real trend change in noisy data? A: It is very important. In fact, the job of any forecasting algorithm is to predict the signal – whether it is trending or not – and to ignore the noise. Unfortunately, this is not easy to
Q: Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is), or do you think this will confuse stakeholders? A: That depends on what is meant by stakeholders, and what is meant by forecast accuracy. If stakeholders means those people who
In a pair of articles published in Foresight, and in his SAS/Foresight webinar "Avoidability of Forecast Error" last November, Steve Morlidge of CatchBull laid out a compelling new approach to the subject of "forecastability." It is generally agreed that the naive model (i.e., random walk or "no change" model) provides
A recurring question among business forecasters is how to incorporate input from the sales force. We discussed this last year in The BFD post "Role of the sales force in forecasting." But the question came up again this week in the Institute of Business Forecasting discussion group on LinkedIn, where
High in the mountains of Colorado, Foresight editor-in-chief Len Tashman previews the new issue: What proficiencies are essential for today’s business forecasters and planners? Sujit Singh offers a detailed and quite formidable list in Critical Skills for the Business Forecaster, our feature article in this 32nd issue of Foresight. While
The Empirical Evidence Steve Morlidge presents results from two test datasets (the first with high levels of manual intervention, the second with intermittent demand patterns), intended to challenge the robustness of the avoidability principle. The first dataset contained one year of weekly forecasts for 124 product SKUs at a fast-moving consumer
Suppose we have a perfect forecasting algorithm. This means that we know the "rule" guiding the behavior we are forecasting (i.e., we know the signal), and we have properly expressed the rule in our forecasting algorithm. As long as the rule governing the behavior doesn't change in the future, then any
While I've long advocated the use of Coefficient of Variation (CV) as a quick and dirty indicator of the forecastability of a time-series, its deficiencies are well recognized. It is true that any series with extremely low CV can be forecast quite accurately (using a moving average or simple exponential smoothing
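For reference, the Coefficient of Variation is simply the standard deviation of the series divided by its mean. A minimal sketch (the function name is illustrative):

```python
import statistics

def coefficient_of_variation(series):
    # CV = standard deviation / mean: a quick-and-dirty volatility
    # indicator. Low CV suggests a series that is easy to forecast
    # with simple methods; the converse does not reliably hold.
    return statistics.pstdev(series) / statistics.fmean(series)
```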
"Forecastability" is a frequent topic of discussion on The BFD, and an essential consideration when evaluating the effectiveness of any forecasting process. A major critique of forecasting benchmarks is that they fail to take forecastability into consideration: an organization with "best in class" forecast accuracy may achieve it only because
Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of the behavior? Or are there other, more in-depth ways to measure the forecastability of a behavior? A: The MAPE of the naive forecast indicates the worst you should be able to do in forecasting the behavior. You can
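A sketch of that worst-case benchmark, assuming a one-step "no change" naive model (function names are illustrative):

```python
def mape(actuals, forecasts):
    # Mean Absolute Percentage Error, in percent.
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_mape(actuals):
    # Naive ("no change") forecast: each period's forecast is the
    # prior period's actual, so the MAPE of these forecasts is the
    # accuracy floor any real forecasting process should beat.
    return mape(actuals[1:], actuals[:-1])
```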
Q: What is a legitimate goal to expect from your FVA...5%, 10%? Q: How do we set a target FVA that forecasters can drive towards? A: The appropriate goal is to do no worse than a naive model, that is, FVA ≥ 0. Sometimes, especially over short periods of time, you may
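The FVA ≥ 0 check is just a comparison of the process's error against the naive model's error. A minimal sketch using MAPE as the error metric (the metric choice is an assumption, not prescribed above):

```python
def fva(naive_mape, process_mape):
    # Forecast Value Added: percentage points of MAPE improvement
    # over the naive model. FVA >= 0 means the forecasting process
    # is adding value; FVA < 0 means it is making the forecast worse.
    return naive_mape - process_mape
```

For example, a process MAPE of 25% against a naive MAPE of 30% yields an FVA of 5 percentage points.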
Please enjoy a much-needed break from FVA Q&A with editor Len Tashman's preview of the Summer 2013 issue of Foresight: Enlightenment has been our guiding principle through this, our 30th issue of Foresight. Since the journal’s inception in 2005, our mission has been to help the forecasting profession come to