Why the Attraction for the Offensive Paradigm? In addition to the reasons provided by Green and Armstrong, I'd like to add one more reason for the lure of complexity: You can always add complexity to a model to better fit the history. In fact, you can always create a model
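To make the point concrete, here is a minimal Python sketch (with made-up data and arbitrary polynomial degrees, none of it from the post) showing how in-sample fit keeps improving as model complexity grows:

```python
# A hedged illustration of the point above: adding complexity (here, polynomial
# degree) always improves the fit to history, but the better fit is largely
# memorizing noise. The data and degrees are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(10, dtype=float)
history = 100 + 2 * t + rng.normal(0, 5, size=t.size)   # noisy, invented history

for degree in (1, 3, 6):
    coeffs = np.polyfit(t, history, degree)    # fit a polynomial of this degree
    fitted = np.polyval(coeffs, t)
    in_sample_mae = np.mean(np.abs(history - fitted))
    print(f"degree {degree}: in-sample MAE = {in_sample_mae:.2f}")

# The in-sample error shrinks as the degree rises; push the degree to n-1 and a
# polynomial through 10 points fits history exactly, yet that says nothing
# about how well the model will forecast.
```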
And now for the five steps: 1. Ignore industry benchmarks, past performance, arbitrary objectives, and what management "needs" your accuracy to be. Published benchmarks of industry forecasting performance are not relevant. See the prior post "The perils of forecasting benchmarks" for an explanation. Previous forecasting performance may be interesting to know, but
Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of the behavior? Or are there other, more in-depth ways to measure the forecastability of a behavior? The MAPE of the naive forecast indicates about the worst you should be able to do in forecasting the behavior. You can
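As a rough illustration of that baseline, here is a minimal Python sketch of computing the MAPE of a naive (random walk) forecast; the mape helper and the sales numbers are invented for the example:

```python
# A minimal sketch: use the MAPE of a naive (random walk) forecast as a rough
# ceiling on how badly you should forecast a behavior. Data below is made up.
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

sales = [102, 110, 95, 120, 108, 115, 99, 125]

# Naive (random walk) forecast: next period equals the last observed value.
naive_forecast = sales[:-1]   # forecasts for periods 2..n
actuals = sales[1:]           # the values those forecasts are judged against

print(f"Naive-forecast MAPE: {mape(actuals, naive_forecast):.1f}%")
```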
Q: Companies always try to forecast 12 or 24 months ahead. Does whether we should track the accuracy of the 1-month, 3-month, 6-month, or x-month forecast depend on lead time? Out of these 12/24 months, how do we determine which month's accuracy to track? Correct, forecast performance is usually evaluated against the
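One hedged sketch of what tracking accuracy by lead time could look like in Python; the record layout and the numbers are invented for illustration:

```python
# Keep every forecast tagged with the lag (months ahead) at which it was made,
# then summarize error separately for each lag. Records below are made up.
from collections import defaultdict

# Each record: (lag_in_months, forecast, actual)
records = [
    (1, 100, 104), (1, 98, 95),  (1, 110, 108),
    (3, 95, 104),  (3, 105, 95), (3, 120, 108),
    (6, 90, 104),  (6, 115, 95), (6, 130, 108),
]

errors_by_lag = defaultdict(list)
for lag, forecast, actual in records:
    errors_by_lag[lag].append(abs(actual - forecast) / abs(actual))

for lag in sorted(errors_by_lag):
    lag_mape = 100 * sum(errors_by_lag[lag]) / len(errors_by_lag[lag])
    print(f"Lag {lag} months: MAPE {lag_mape:.1f}%")
```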
Q: What is a legitimate goal to expect from your FVA...5%, 10%? Q: How do we set a target FVA that forecasters can drive towards? The appropriate goal is to do no worse than a naive model, that is, FVA ≥ 0. Sometimes, especially over short periods of time, you may
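For concreteness, a minimal sketch of the FVA arithmetic implied here, with placeholder MAPE values rather than real results:

```python
# FVA is the improvement of your process's error over the naive model's error;
# the goal stated above is FVA >= 0. The MAPE figures here are placeholders.
def forecast_value_added(naive_mape, process_mape):
    """FVA = naive-model MAPE minus process MAPE (positive means value added)."""
    return naive_mape - process_mape

naive_mape = 30.0     # hypothetical MAPE of the naive (random walk) forecast
process_mape = 27.5   # hypothetical MAPE of the forecasting process

fva = forecast_value_added(naive_mape, process_mape)
print(f"FVA = {fva:+.1f} percentage points")  # >= 0 means no worse than naive
```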
"Compare it to predicting the economy." So concludes an ABC News Australia story by finance reporter Sue Lannin, entitled "Economic forecasts no better than a random walk." The story covers a recent apology by the International Monetary Fund over its estimates for troubled European nations, and an admission by the