How to make weather forecasting look good

Compare it to predicting the economy.

So concludes an ABC News Australia story by finance reporter Sue Lannin, entitled "Economic forecasts no better than a random walk." The story covers a recent apology by the International Monetary Fund over its estimates for troubled European nations, and an admission by the Reserve Bank of Australia that its economic forecasts were wide of the mark.

An internal study by the RBA found that 70% of its inflation forecasts were close to the mark, but its economic growth forecasts were worse, and its unemployment forecasts were no better than a random walk. [Recall that the random walk (or "no change" forecasting model) uses the last observed value as the forecast for all future values.]

In other words, a bunch of high-priced economists generated forecasts upon which government policies were made, when they could have just ignored (or fired) the economists and made the policies based on the most recent data.

Anyone who has worked in (or paid any attention to) business forecasting will not be surprised by these confessions. Naive forecasts like the random walk or seasonal random walk can be surprisingly difficult to beat. And simple models, like single exponential smoothing, can be even more difficult to beat.
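To make those benchmarks concrete, here is a minimal Python sketch of the three methods just mentioned -- the random walk, the seasonal random walk, and single exponential smoothing. The monthly series, the season length, and the smoothing constant alpha are made-up illustrations, not anything from the RBA study.

```python
import numpy as np

def random_walk_forecast(history):
    """Naive ("no change") forecast: the last observed value."""
    return history[-1]

def seasonal_random_walk_forecast(history, season_length=12):
    """Seasonal naive forecast: the value from the same period one season ago."""
    return history[-season_length]

def single_exponential_smoothing_forecast(history, alpha=0.3):
    """Flat forecast from simple exponential smoothing of the history."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Toy monthly series (illustrative numbers only)
history = np.array([100, 102,  98, 105, 110, 108, 112, 115, 111, 118, 120, 117,
                    103, 104, 101, 108, 112, 111, 116, 119, 114, 122, 123, 121])

print("Random walk:          ", random_walk_forecast(history))
print("Seasonal random walk: ", seasonal_random_walk_forecast(history, season_length=12))
print("Exponential smoothing:", round(single_exponential_smoothing_forecast(history, alpha=0.3), 1))
```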

While we assume that our fancy models and elaborate forecasting processes are making dramatic improvements in the forecast, these improvements can be surprisingly small. And frequently, due to the use of inappropriate models or methods, and to "political" pressures on forecasting process participants, our costly and time-consuming efforts just make the forecast worse.

The conclusion? Everybody needs to do just what these RBA analysts did, and conduct forecast value added (FVA) analysis. Compare the effectiveness of your forecasting efforts to a placebo -- the random walk forecast. If you aren't doing any better than that, you have some apologizing to do.
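For anyone who wants to try this, here is a rough sketch of an FVA comparison in Python, assuming MAPE as the error measure (whatever error metric your organization already tracks would work the same way). The actuals and forecasts are made-up numbers for illustration; a positive FVA means your process beat the random walk placebo.

```python
import numpy as np

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    return np.mean(np.abs(actuals - forecasts) / np.abs(actuals)) * 100

def fva_vs_random_walk(actuals, process_forecasts):
    """FVA = MAPE of the random walk placebo minus MAPE of your forecasting process.

    The random walk forecast for each period is the prior period's actual,
    so the first period has no naive forecast and is dropped from the comparison.
    """
    actuals = np.asarray(actuals, dtype=float)
    process_forecasts = np.asarray(process_forecasts, dtype=float)
    naive_forecasts = actuals[:-1]            # last observed value carried forward
    naive_mape = mape(actuals[1:], naive_forecasts)
    process_mape = mape(actuals[1:], process_forecasts[1:])
    return naive_mape - process_mape          # positive: the process adds value

# Made-up numbers for illustration
actuals           = [100, 104, 103, 110, 108, 115]
process_forecasts = [ 99, 101, 106, 107, 112, 111]
print("FVA vs random walk (MAPE points):",
      round(fva_vs_random_walk(actuals, process_forecasts), 1))
```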

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

4 Comments

  1. Nice post. Measuring the incremental value of forecasting activity over simplistic "naive" models is not a new idea, though; it's not difficult, and it really is the only way to know if your forecast activities are adding value. In my experience, though, relatively few groups actually do it. Is that what everyone else sees? Why would that be?

    • Hi Andrew, thanks for the comment. While the importance of relative performance (e.g. comparing a medicine against a placebo) has been recognized since the dawn of science, it has been a struggle getting it recognized in business forecasting. This is particularly unfortunate since a naive model is perfectly suited to fill the role of the placebo for making performance comparisons.

      As for reasons, is basic scientific method not being taught in business schools? That's all that FVA is -- the application of basic scientific method to determine whether our forecasting efforts are having an effect.

      At least the number of forecasting practitioners using this approach appears to be increasing. In recent years we've seen many companies talk about this subject publicly (including Newell Rubbermaid, Nestle, Cisco, Intel, RadioShack, AstraZeneca, Yokohama Tire (Canada), Amway, ...)

    • David B Teague

      In my experience, the local weather prognosticators :) here in the Asheville, NC area tend to exaggerate the severity of the weather, but they get their forecasts pretty well right qualitatively, and do much better than this method:

      Look out the window. Note the weather. What you see is the forecast for tomorrow.
      That succeeds better than half the time.

      They beat that by a good bit.

      • Mike Gilliland

        This all reminds me of the Curb Your Enthusiasm episode where Larry accuses the weatherman of falsely predicting rain, so all the golf club members will stay home and the weatherman can have the course to himself.

        Perhaps this isn't so far-fetched. In the book Superforecasting, Tetlock and Gardner point out that self-interest, not just accuracy, can be a motivation for any forecaster. The classic example is asking salespeople for their forecast in order to set the sales quota. It seems likely that the self-interest of having a lower quota will trump any motivation for a more accurate forecast.
