Tag: Steve Morlidge

Mike Gilliland
Q&A with Steve Morlidge of CatchBull (Part 2)

Q: Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is), or do you think this will confuse stakeholders? A: This just depends on what is meant by stakeholders. And what is meant by forecast accuracy. If stakeholders means those people who …

Mike Gilliland
Q&A with Steve Morlidge of CatchBull (Part 1)

In a pair of articles published in Foresight, and in his SAS/Foresight webinar "Avoidability of Forecast Error" last November, Steve Morlidge of CatchBull laid out a compelling new approach to the subject of "forecastability." It is generally agreed that the naive model (i.e., random walk or "no change" model) provides …
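To make the benchmark concrete, here is a minimal sketch of a one-step-ahead naive ("no change") forecast in Python. The demand numbers are made up purely for illustration.

```python
import numpy as np

def naive_forecast(actuals):
    # One-step-ahead naive ("no change") forecast: the forecast for
    # each period is simply the previous period's actual.
    actuals = np.asarray(actuals, dtype=float)
    return actuals[:-1]

# Illustrative demand history (invented numbers)
actuals = np.array([100.0, 104.0, 98.0, 110.0, 105.0, 101.0])
forecasts = naive_forecast(actuals)   # forecasts for periods 2..6
errors = actuals[1:] - forecasts
print(errors)                         # 4, -6, 12, -5, -4
```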

Mike Gilliland
The "avoidability" of forecast error (Part 4)

The Empirical Evidence: Steve Morlidge presents results from two test datasets (the first with high levels of manual intervention, the second with intermittent demand patterns), intended to challenge the robustness of the avoidability principle. The first dataset contained one year of weekly forecasts for 124 product SKUs at a fast-moving consumer …

Mike Gilliland
The "avoidability" of forecast error (Part 3)

Suppose we have a perfect forecasting algorithm. This means that we know the "rule" guiding the behavior we are forecasting (i.e., we know the signal), and we have properly expressed the rule in our forecasting algorithm. As long as the rule governing the behavior doesn't change in the future, then any …
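A quick numerical sketch of that idea, with an assumed flat signal and Gaussian noise (both invented for illustration): even a model that knows the rule exactly is still left with the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: suppose the "rule" (signal) is a flat level of 100
# and each observation is the signal plus irreducible random noise.
signal = np.full(52, 100.0)
actuals = signal + rng.normal(0.0, 5.0, size=52)

# A perfect algorithm forecasts the signal exactly...
perfect_forecast = signal

# ...yet its error is exactly the noise, which no model can remove.
unavoidable_error = actuals - perfect_forecast
print(round(unavoidable_error.std(), 1))   # roughly 5: the noise std dev
```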

Mike Gilliland
The "avoidability" of forecast error (Part 2)

While I've long advocated the use of the Coefficient of Variation (CV) as a quick-and-dirty indicator of the forecastability of a time series, its deficiencies are well recognized. It is true that any series with extremely low CV can be forecast quite accurately (using a moving average or simple exponential smoothing …
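Here is a minimal sketch of the CV calculation, along with one of its well-recognized deficiencies: a perfectly regular (hence trivially forecastable) seasonal pattern still scores a high CV. The series values are invented for illustration.

```python
import numpy as np

def coefficient_of_variation(series):
    # CV = standard deviation / mean: a quick-and-dirty
    # forecastability indicator (lower generally means easier to forecast).
    series = np.asarray(series, dtype=float)
    return series.std() / series.mean()

stable = [100, 102, 99, 101, 100, 98]   # low CV, easy to forecast

# A perfectly regular seasonal pattern: high CV, yet trivially
# forecastable -- one of CV's well-recognized deficiencies.
seasonal = [10, 100, 10, 100, 10, 100]

print(round(coefficient_of_variation(stable), 3))    # 0.013
print(round(coefficient_of_variation(seasonal), 3))  # 0.818
```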

Mike Gilliland
The "avoidability" of forecast error (Part 1)

"Forecastability" is a frequent topic of discussion on The BFD, and an essential consideration when evaluating the effectiveness of any forecasting process. A major critique of forecasting benchmarks is that they fail to take forecastability into consideration: An organization with "best in class" forecast accuracy may do so only because

Mike Gilliland
Forecast Value Added Q&A (Part 6)

Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of the behavior? Or are there other, more in-depth ways to measure the forecastability of a behavior? A: The MAPE of the naive forecast indicates the worst you should be able to do in forecasting the behavior. You can …
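As a rough illustration of that baseline, here is one way to compute the MAPE of the naive forecast in Python (series values invented for illustration):

```python
import numpy as np

def mape(actuals, forecasts):
    # Mean absolute percentage error, in percent.
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return 100.0 * np.mean(np.abs(actuals - forecasts) / np.abs(actuals))

# Illustrative series; the naive forecast is last period's actual.
actuals = np.array([100.0, 104.0, 98.0, 110.0, 105.0, 101.0])
naive = actuals[:-1]
baseline = mape(actuals[1:], naive)   # the "worst you should do" benchmark
print(round(baseline, 1))             # 5.9
```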

Mike Gilliland
Forecast Value Added Q&A (Part 4)

Q: What is a legitimate goal to expect from your FVA ... 5%, 10%? Q: How do we set a target FVA which forecasters can drive towards? A: The appropriate goal is to do no worse than a naive model, that is, FVA ≥ 0. Sometimes, especially over short periods of time, you may …
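A minimal sketch of the FVA calculation under the usual MAPE-based definition; the monthly numbers below are hypothetical.

```python
def forecast_value_added(naive_mape, process_mape):
    # FVA: how much the forecasting process improves on the naive
    # benchmark, in MAPE points. The legitimate goal is FVA >= 0.
    return naive_mape - process_mape

# Hypothetical monthly results (numbers are illustrative only)
print(forecast_value_added(naive_mape=25.0, process_mape=22.0))  # 3.0: adding value
print(forecast_value_added(naive_mape=25.0, process_mape=28.0))  # -3.0: subtracting value
```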