Compare it to predicting the economy.
So concludes an ABC News Australia story by finance reporter Sue Lannin, entitled "Economic forecasts no better than a random walk." The story covers a recent apology by the International Monetary Fund over its estimates for troubled European nations, and an admission by the Reserve Bank of Australia that its economic forecasts were wide of the mark.
An internal study by the RBA found that 70% of its inflation forecasts were close to the mark, but its economic growth forecasts were worse, and its unemployment forecasts were no better than a random walk. [Recall that the random walk (or "no change") forecasting model uses the last observed value as the forecast for all future values.]
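To make the benchmark concrete, here is a minimal sketch in Python of the random walk ("no change") forecast; the data values are hypothetical and only illustrate the mechanics:

```python
# Hypothetical quarterly inflation readings (%), most recent last
history = [2.1, 2.4, 2.6, 2.3]

def random_walk_forecast(history, horizon):
    """Random walk ("no change") model: the last observed value is the
    forecast for every future period."""
    return [history[-1]] * horizon

print(random_walk_forecast(history, horizon=4))  # [2.3, 2.3, 2.3, 2.3]
```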
In other words, a bunch of high-priced economists generated forecasts upon which government policies were based, when policymakers could simply have ignored (or fired) the economists and based those policies on the most recent data.
Anyone who has worked in (or paid any attention to) business forecasting will not be surprised by these confessions. Naive forecasts like the random walk or seasonal random walk can be surprisingly difficult to beat. And simple models, like single exponential smoothing, can be even more difficult to beat.
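For readers who want to see what these other benchmarks look like, here is a rough sketch of the seasonal random walk and of single exponential smoothing. This is an illustrative implementation, not any particular software package's version:

```python
def seasonal_random_walk(history, horizon, season_length=12):
    """Seasonal random walk: forecast each future period as the value
    observed one full season earlier (repeating the pattern as needed)."""
    return [history[-season_length + (h - 1) % season_length]
            for h in range(1, horizon + 1)]

def single_exponential_smoothing(history, horizon, alpha=0.3):
    """Single exponential smoothing: the level is a weighted average of the
    newest observation and the previous level; the flat level is the
    forecast for all horizons."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return [level] * horizon
```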
While we assume that our fancy models and elaborate forecasting processes are making dramatic improvements in the forecast, these improvements can be surprisingly small. And frequently, due to the use of inappropriate models or methods, and to "political" pressures on forecasting process participants, our costly and time-consuming efforts just make the forecast worse.
The conclusion? Everybody needs to do just what these RBA analysts did, and conduct forecast value added analysis. Compare the effectiveness of your forecasting efforts to a placebo -- the random walk forecast. If you aren't doing any better than that, you have some apologizing to do.
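Here is a minimal sketch of that comparison, assuming MAPE as the accuracy metric (FVA itself does not mandate a particular metric) and using made-up numbers: compute the error of the process forecast and of the naive forecast over the same periods, and report the difference.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical actuals alongside the forecasts produced by the forecasting
# process and by the naive (random walk) benchmark for the same periods.
actuals      = [100, 104,  98, 105, 110]
process_fcst = [101, 100, 102, 101, 104]
naive_fcst   = [ 99, 100, 104,  98, 105]   # last period's actual, shifted by one

# FVA = naive error minus process error: positive means the process beats
# the placebo; negative means you have some apologizing to do.
fva = mape(actuals, naive_fcst) - mape(actuals, process_fcst)
print(f"FVA = {fva:.1f} percentage points")
```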
4 Comments
Nice post. Measuring the incremental value of forecasting activity over simplistic "naive" models is not a new idea, it's not difficult, and it really is the only way to know whether your forecasting activities are adding value. In my experience, though, relatively few groups actually do it. Is that what everyone else sees? Why would that be?
Hi Andrew, thanks for the comment. While the importance of relative performance (e.g. comparing a medicine against a placebo) has been recognized since the dawn of science, it has been a struggle getting it recognized in business forecasting. This is particularly unfortunate since a naive model is perfectly suited to fill the role of the placebo for making performance comparisons.
As for reasons, is the basic scientific method not being taught in business schools? That's all FVA is -- the application of the basic scientific method to determine whether our forecasting efforts are having an effect.
At least the number of forecasting practitioners using this approach appears to be increasing. In recent years we've seen many companies talk about this subject publicly (including Newell Rubbermaid, Nestle, Cisco, Intel, RadioShack, AstraZeneca, Yokohama Tire (Canada), Amway, ...)
In my experience the local weather prognosticators :) here in the Asheville, NC area tend to exaggerate the severity of the weather, but they get their forecasts pretty well right qualitatively, and do much better than this method:
Look out the window. Note the weather. What you see is the forecast for tomorrow.
That succeeds better than half the time.
They beat that by a good bit.
This all reminds me of the Curb Your Enthusiasm episode where Larry accuses the weatherman of falsely predicting rain, so all the golf club members will stay home and the weatherman can have the course to himself.
Perhaps this isn't so far-fetched. In the book Superforecasting, Tetlock and Gardner point out that self-interest, not just accuracy, can motivate any forecaster. The classic example is asking salespeople for their forecast in order to set the sales quota. It seems likely that the self-interest of having a lower quota will trump any motivation to provide a more accurate forecast.