Please enjoy a much-needed break from FVA Q&A with editor Len Tashman's preview of the Summer 2013 issue of Foresight:
Enlightenment has been our guiding principle through this, our 30th issue of Foresight. Since the journal’s inception in 2005, our mission has been to help the forecasting profession come to grips with the challenges of prediction: foretelling events that have yet to occur, extrapolating historical patterns to obtain baseline forecasts, and making effective use of technology – data, tools, systems, business intelligence – in aiding human judgment. The Foresight logo symbolizes the journal’s – and our profession’s – attempt to “light the path to the future.”
One of the most basic of our forecasting challenges is a question that is poorly understood, difficult to resolve, and, as a consequence, often ignored. This is the issue of forecastability, or the potential forecasting accuracy of an item, product, or revenue flow. What is the best accuracy we can hope to achieve? If we knew this, we would have a benchmark for judging how effective our current efforts are and how much room remains for improvement.
In the past four years, Foresight has published a half-dozen articles on this subject, contributions that have broken new ground in the search for useful benchmarks of forecastability. New metrics have been proposed (and old ones criticized), new analytical approaches have been offered, and many new perspectives have emerged. But still the search continues for what we might call an “industry standard”: a protocol for assessing forecastability.
In this pursuit, Steve Morlidge has prepared our feature article: How Good Is a "Good" Forecast? Forecast Errors and Their Avoidability. Steve reviews the basic forecastability concepts, summarizes the innovations from our earlier Foresight articles, and proposes a simple, logical, and supportable metric to serve as the forecastability benchmark. It is valuable progress and a promising basis for further work on the subject.
A second ponderable: To what extent is the resulting accuracy of a forecast a matter of skill rather than luck? A sensible and appropriate forecasting method may lead to a poor forecast as a result of an unexpected environmental change; likewise, a poorly conceived method may luck out, at least on occasion. Distinguishing skill from luck is of fundamental importance, and is the subject of the provocative new book by Michael J. Mauboussin, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. Our equally provocative reviewers Roy Batchelor and Sean Schubert distill the author’s arguments from the respective viewpoints of a financial-forecasting scholar and business-operations practitioner.
The topic of this issue’s Forecast Methods Tutorial is ARIMA: The Models of Box and Jenkins. These tutorials are designed to provide nontechnical overviews of important methodologies, enabling forecasters to make more informed use of their software solutions.
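For readers curious about what sits behind the ARIMA label, here is a minimal sketch (not taken from the tutorial) of the autoregressive building block of a Box-Jenkins model: it simulates an AR(1) series and recovers the coefficient by least squares. All numbers are illustrative assumptions, and in practice a forecaster would use a software package such as the statsmodels library rather than hand-rolled code.

```python
import random

random.seed(42)

# Simulate an AR(1) process: y_t = 0.7 * y_{t-1} + e_t
# (the "AR" piece of ARIMA; differencing and MA terms are omitted here).
n = 500
y = [0.0]
for _ in range(n - 1):
    y.append(0.7 * y[-1] + random.gauss(0, 1))

# Estimate the coefficient phi by least squares:
# regress y_t on y_{t-1} (no intercept, since the mean is zero).
num = sum(y[t] * y[t - 1] for t in range(1, n))
den = sum(y[t - 1] ** 2 for t in range(1, n))
phi_hat = num / den

# A one-step-ahead baseline forecast is then phi_hat times the last value.
forecast = phi_hat * y[-1]
```

With 500 observations, `phi_hat` lands close to the true 0.7, which is the sense in which a fitted ARIMA model "extrapolates historical patterns to obtain baseline forecasts."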
Paul Goodwin’s Hot New Research column addresses the importance of weather-related variables in business-forecasting models. Come Rain or Shine: Better Forecasts for All Seasons summarizes three recent studies about how weather forecasts should be used and the extent to which they can improve forecast accuracy.
Our Forecasting Intelligence section seeks to bring new sources of useful data to the attention of business forecasters. In this issue, Torsten Schmidt and Simeon Vosen of the economic research institute RWI Essen provide an introduction to the use of product-search data and show that we can significantly improve our Forecasting of Consumer Purchases Using Google Trends.
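The RWI authors' actual models are econometric and use real Google Trends series; purely as a hypothetical sketch with synthetic, made-up data, the core idea — augmenting a purchase forecast with a lagged search-interest index — can be illustrated like this:

```python
import random

random.seed(0)

# Synthetic stand-in for a search-interest index (as Google Trends might
# report) that leads consumer purchases by one period. All numbers invented.
n = 60
trend = [50 + 10 * random.gauss(0, 1) for _ in range(n)]
sales = [100 + 0.8 * trend[t - 1] + random.gauss(0, 2) for t in range(1, n)]

# Ordinary least squares for: sales_t = a + b * trend_{t-1}
x = trend[:-1]          # lagged search index
y = sales
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Next-period forecast uses the most recent observed search interest.
next_sales = a + b * trend[-1]
```

Because the search index is observed before the purchases it foreshadows, the fitted relationship can be applied to the latest index value to sharpen the next period's forecast — the mechanism the article exploits with real product-search data.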
The Summer 2013 issue concludes with Jim Hoover’s book review of Supply Chain Forecasting Software by Shaun Snapp. The book, says Jim, covers areas not typically seen in either the forecasting or the supply-chain literature – namely, how the major ERP forecasting tools work, and how to make them work better.
In the Fall 2012 issue of Foresight, Stephan Kolassa reviewed the promising new online textbook Forecasting: Principles and Practice by Rob J. Hyndman & George Athanasopoulos. At the time, the book was only partially completed. It is now finished, and includes additional sections on judgmental forecasting, neural networks, and hierarchical forecasting. Rob and George have also added exercises and suggested further reading at the end of each chapter. Head over to otexts.com/fpp to check it out.
While the book is finally complete, the authors promise that more is to come – including a print version later this year, an online forum for students using the book, and the ability to add personalized notes and highlights. Additionally, Rob and George encourage instructors who adopt the text in their courses to share materials on the website, including slides, exercises, etc.