High on complexity


Paul Goodwin's Hot New Research column is a must-read in each issue of Foresight. The current column, "High on Complexity, Low on Evidence: Are Advanced Forecasting Methods Always as Good as They Seem?" ends with this sage advice:

If the name of a method contains more words than the number of observations that were used to test it, then it's wise to put any plans to adopt the method on hold.

In professional forecasting circles, it has long been recognized that fancier methods are no guarantee of better forecasts (even though fancier methods can often provide a better fit to history!).  As Goodwin notes, "...an improved fit to past data may be associated with poorer forecasts because the method is falsely seeing systematic patterns in the random movements in past data, and assuming that these will continue into the future."
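To make that concrete, here is a minimal sketch (my illustration, not Goodwin's) using NumPy: a high-order polynomial trend fitted to a pure random walk will hug the historical data far more closely than a naive forecast, yet it can extrapolate badly into the holdout period. The degree-8 polynomial and the series length are arbitrary choices for illustration.

```python
# Illustrative sketch: a flexible model fits random noise in history
# better than a simple benchmark, yet forecasts the holdout worse.
import numpy as np

rng = np.random.default_rng(42)
y = 100 + np.cumsum(rng.normal(0, 5, size=36))   # random walk: no real pattern to find
train, test = y[:24], y[24:]
t_train, t_test = np.arange(24), np.arange(24, 36)

# "Advanced" method: high-order polynomial trend fitted to history
coefs = np.polyfit(t_train, train, deg=8)
fit_in = np.polyval(coefs, t_train)
fc_poly = np.polyval(coefs, t_test)

# Simple benchmark: naive forecast (last observed value carried forward)
fc_naive = np.full_like(test, train[-1])

rmse = lambda actual, pred: np.sqrt(np.mean((actual - pred) ** 2))
print("In-sample RMSE, polynomial:", round(rmse(train, fit_in), 2))
print("Holdout RMSE,  polynomial:", round(rmse(test, fc_poly), 2))
print("Holdout RMSE,  naive:     ", round(rmse(test, fc_naive), 2))
```

The in-sample fit looks impressive; the holdout numbers tell the real story.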

The main point of Goodwin's column is that forecasting researchers -- those publishing articles touting advanced new methods -- should provide reliable evidence that the methods they advocate will actually work. The accuracy of the new method should be compared to the accuracy of appropriate benchmarks (typically simpler methods), to see if the extra complexity of the new method is justified.

Goodwin cites Len Tashman's 2000 article ("Out-of-sample tests of forecast accuracy: An analysis and review," International Journal of Forecasting, 16, 437-450), arguing that forecasting methods need to be tested on a sufficient number of out-of-sample observations. He points to several recently published examples where "grandiose claims of forecasting performance rest on scant evidence" and "big conclusions" are drawn from "small amounts of data."
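A rolling-origin (time series cross-validation) evaluation is one way to generate enough out-of-sample observations, and to put the new method side by side with a simpler benchmark. The sketch below is mine, not from the column; `candidate_forecast` is just a placeholder (here a drift forecast) standing in for whatever advanced method is being tested, and the data are simulated.

```python
# Rough sketch of a rolling-origin out-of-sample evaluation: refit at each
# successive origin so the method is judged on many holdout observations,
# not one lucky train/test split.
import numpy as np

def candidate_forecast(history, horizon):
    # placeholder for the "advanced" method under test: a simple drift forecast
    drift = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + drift * np.arange(1, horizon + 1)

def naive_forecast(history, horizon):
    # benchmark: last observed value carried forward
    return np.full(horizon, history[-1])

def rolling_origin_mae(y, forecast_fn, min_train=24, horizon=1):
    errors = []
    for origin in range(min_train, len(y) - horizon + 1):
        fc = forecast_fn(y[:origin], horizon)
        errors.extend(np.abs(y[origin:origin + horizon] - fc))
    return np.mean(errors), len(errors)

rng = np.random.default_rng(0)
y = 100 + np.cumsum(rng.normal(0, 5, size=60))

mae_new, n = rolling_origin_mae(y, candidate_forecast)
mae_naive, _ = rolling_origin_mae(y, naive_forecast)
print(f"{n} out-of-sample forecasts: candidate MAE={mae_new:.2f}, naive MAE={mae_naive:.2f}")
```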

New forecasting ideas are great -- the bigger and crazier the better. But any new forecasting method should be evaluated (just as we would evaluate a new drug or therapy) starting with a null hypothesis:

H0: The new method has no effect on forecasting performance

Without sufficient evidence to reject the null hypothesis, implementing the method might just be wasting our time.
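One simple way to formalize that test (an illustrative sketch, not a prescription from Goodwin's column) is a paired Wilcoxon signed-rank test on the absolute out-of-sample errors of the new method versus the benchmark, over the same holdout periods. The error series below are simulated stand-ins; in practice they would come from an evaluation like the rolling-origin one above.

```python
# Paired comparison of absolute out-of-sample errors: new method vs. benchmark.
import numpy as np
from scipy.stats import wilcoxon

# In practice these come from the same holdout periods; simulated here.
rng = np.random.default_rng(1)
abs_err_benchmark = np.abs(rng.normal(0, 5, size=40))
abs_err_new = np.abs(rng.normal(0, 5, size=40))

stat, p_value = wilcoxon(abs_err_new, abs_err_benchmark)
print(f"p-value = {p_value:.3f}")
if p_value < 0.05 and abs_err_new.mean() < abs_err_benchmark.mean():
    print("Evidence the new method improves accuracy; worth a closer look.")
else:
    print("No convincing evidence of improvement; stick with the simpler benchmark.")
```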


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
