NPR joins the unforecastability bandwagon


Today my colleague Alison Bolen, Editor of sascom magazine, sent me this link to an interesting piece on NPR: "Can Economic Forecasting Predict The Future?" In a somewhat lighthearted take on the inability of our economists to predict the future -- or even precisely report the past for that matter -- we hear about the modeling efforts of "one of the leading economic forecasters in America." These efforts revolve around 447 pages of math equations. Do I hear "**** DANGER WILL ROBINSON DANGER ****" ringing in my ears?

One sure warning sign that you aren't going to be able to forecast something worth a darn is when it takes 447 pages of equations to model it. Makridakis, Hogarth, and Gaba state it nicely in their recent International Journal of Forecasting article, "Decision making and planning under low levels of predictability":

…a huge body of empirical evidence has led to the following conclusions.
• The future is never exactly like the past. This means that the extrapolation of past patterns or relationships cannot provide accurate predictions.
• Statistically sophisticated, or complex, models fit past data well but do not necessarily predict the future accurately.
• “Simple” models do not necessarily fit past data well but predict the future better than complex or sophisticated models.
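
Their second and third points are easy to demonstrate. Here is a minimal sketch in Python (plain NumPy on made-up data; my illustration, not anything from the article or the paper): an over-parameterized polynomial tracks the history more closely than a simple straight-line fit, then falls apart the moment it has to extrapolate.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up "history": a modest linear trend plus noise
n_train, n_test = 30, 10
t = np.arange(n_train + n_test)
y = 0.5 * t + rng.normal(scale=3.0, size=t.size)
t_fit, y_fit = t[:n_train], y[:n_train]
t_out, y_out = t[n_train:], y[n_train:]

# A simple model (degree 1) vs. a deliberately complex one (degree 10)
for degree in (1, 10):
    coeffs = np.polyfit(t_fit, y_fit, degree)
    in_sample = np.mean(np.abs(np.polyval(coeffs, t_fit) - y_fit))
    out_sample = np.mean(np.abs(np.polyval(coeffs, t_out) - y_out))
    print(f"degree {degree:2d}: fit MAE {in_sample:6.2f}, "
          f"forecast MAE {out_sample:10.2f}")
```

The complex model wins on fit and loses badly on the forecast -- 447 pages of equations, in miniature.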

Happy New Year.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

6 Comments

  1. This current "Great Recession" was forecast by several leading economists, namely Dr. Paul Krugman and Dr. Joseph Stiglitz, both Nobel Prize winners. They used data from the 1920s for their analysis. Their predictions were ignored by Wall Street and the press.
    I think you might want to reconsider your thesis. History does repeat itself.

  2. That's a fair point, but how many other predictions have they made over the years that didn't come true? Without knowing that, it's impossible to know whether they were just lucky on this one or whether history really does empirically and consistently repeat. Selection bias can often play a big role when assessing the accuracy of any forecasting process or methodology, regardless of whether one's talking about macroeconomic prediction or inventory management - if one only ever looks at the correct predictions and ignores the inaccurate ones, it's very easy (and tempting) to select a model that's extremely poor on average and discount the incorrect predictions because one doesn't like them.
    As an aside, while I'm not suggesting that Krugman or Stiglitz are guilty of this, the easiest way to always be right as an economist is to never give a timeframe for your predictions to come true - in the long run, you'll probably eventually be right, especially if you pick up on fundamental inequalities or imbalances. But, as Keynes said, in the long run, we're all dead ...
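
For what it's worth, the "were they just lucky?" question lends itself to a toy simulation. Here is a minimal sketch in Python (purely hypothetical pundits; not a claim about anyone named in this thread): hand enough forecasters a coin to flip and a few of them will compile impressive records by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 hypothetical pundits each make 10 independent yes/no calls,
# entirely at random -- zero skill by construction
n_pundits, n_calls = 1000, 10
correct = rng.random((n_pundits, n_calls)) < 0.5

records = correct.sum(axis=1)
print(f"guessers right on 8+ of 10 calls: {(records >= 8).sum()}")   # roughly 55
print(f"best record among the guessers: {records.max()}/{n_calls}")  # usually 9 or 10
```

If the press only ever interviews the pundits at the top of that list, the survivors look prescient -- which is exactly the selection bias described above.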

  3. I can't think of any specific predictions where they were wrong, can you? If you can't come up with a list of incorrect forecasts, then all you have done is introduce a red herring into this conversation. The Nobel Prize isn't given out to people who are mostly incorrect. To educate yourself a little, start with Krugman's book "The Return of Depression Economics", published in 2000. Then you can start reading more detailed analysis of his current predictions on his daily blog.
    By the way, I don't think that this current crisis is over; you might want to take a peek at http://www.occ.gov/ftp/release/2009-161a.pdf
    Of course, I do agree with your point about model selection. That's where model training becomes an invaluable tool, and SAS EM is excellent at that.

  4. I think you've highlighted one of the biggest problems in forecasting generally - trying to create a definitive list of all predictions made in order to make an objective judgement as to whether the forecasts, in aggregate, are more right than they are wrong. Regarding Krugman, Stiglitz, Schiff, or any of the other analysts who correctly picked the downturn, the only way to really assess their ability to create accurate predictions would be to go through every single statement they've ever made and tally up the number of accurate vs. inaccurate predictions.
    That's obviously a non-trivial task, especially given their extensive publishing and interview history. It becomes even more complicated when you look at economists like Keen in Australia, who's been predicting a correction for quite a while; eventually, he may well be right, but if his forecasts take five years to eventuate rather than two months, were they still 'correct'? If I had the time, I'd love to look at 'accuracy' and 'time to accurate prediction' for a number of major macroeconomists - it'd make for a great research paper!
    As time-consuming as that relatively simple process would be, I think it's often worse in many organisations - with planning cycles generally locked to rolling monthly windows or annual budgeting cycles, it's often near impossible to review what forecasts were made as recently as three years ago! Methodologies change and people move on, typically making it extremely difficult to make an objective assessment of how significant 'forecast error' is. My personal experience is that often the hardest part of a forecasting project is simply understanding how much error there is in the first place!
    As an aside, I'd be curious to know if Krugman and Stiglitz think of themselves as forecasters or analysts - my hunch is that they're more interested in understanding dynamics than they are about creating specific predictions. But, that's only a hunch; only they'd know for sure.
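
One way to make the "time to accurate prediction" idea concrete is to score a call as correct only if it comes true within the horizon the forecaster committed to. A minimal sketch in Python (hypothetical dates and a made-up scoring rule; nothing here is an established methodology from the thread):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Call:
    made: date                # when the prediction was issued
    horizon_days: int         # the timeframe the forecaster committed to
    realized: Optional[date]  # when the event actually happened (None = never)

def hit_rate(calls: list[Call]) -> float:
    """Count a call as correct only if the event occurred within the
    stated horizon -- 'eventually right' scores as wrong."""
    hits = sum(
        1 for c in calls
        if c.realized is not None
        and (c.realized - c.made).days <= c.horizon_days
    )
    return hits / len(calls)

# Hypothetical track record: one timely hit, one call that was
# 'right' but years late, one that never eventuated
history = [
    Call(date(2007, 6, 1), 365, date(2008, 3, 1)),
    Call(date(2004, 1, 1), 730, date(2008, 9, 15)),
    Call(date(2006, 1, 1), 365, None),
]
print(f"hit rate with deadlines enforced: {hit_rate(history):.0%}")  # 33%
```

Under a scheme like this, a five-years-early call on a two-year horizon counts against the forecaster rather than for them.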

  5. Unless the program has a giant neural network that is able to look at a huge number of influencing factors and effectively weigh the individual factors differently for each analysis, I don't think it's going to be very effective.

  6. Matt Johnson: "One sure warning sign that you aren't going to be able to forecast something worth a darn is when it takes 447 pages of equations to model it." I couldn't agree more.
