Today I welcome guest blogger Len Tashman, Editor of Foresight: The International Journal of Applied Forecasting. I’ve been a big fan of Foresight since its inception in 2005, and the Spring 2009 issue contains a special feature on a topic close to my heart -- assessing forecastability. Here is Len’s preview:
Forecastability is a concept of major concern to forecasters and has been discussed at many events for forecasting practitioners. Unfortunately, it receives little attention from forecasting researchers. Assessing the forecastability of a time series can give us a basis for judging how successful we’ve been in modeling the historical data and how much improvement we can still hope to attain.
The forecastability of a time series is related to the stability of the data (its regularity or, conversely, its volatility): normally, the less stable a series is, the more difficult it will be to achieve a desired degree of forecast accuracy.
Peter Catt leads off our feature section with his paper, Forecastability: Insights from Physics, Graphical Decomposition, and Information Theory. The data we observe derive from an underlying time series process (the data generating process), and Peter defines and illustrates four possible processes at play, ranging from completely forecastable to essentially unforecastable. His six sample time series illustrate this range of forecastability.
Peter then examines the utility of the coefficient of variation in distinguishing the relative forecastability of these series and finds the metric to be unreliable. Drawing on information theory, he proposes an alternative measure of forecastability, called approximate entropy, that more reliably reveals the relative forecastability of different series.
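To make the contrast between these two metrics concrete, here is a small Python sketch (mine, not Peter's) of the standard ApEn(m, r) calculation alongside the coefficient of variation, applied to two made-up series: a smooth seasonal pattern and pure noise, constructed so their coefficients of variation come out roughly similar even though one is far more regular than the other.

```python
import numpy as np

def coefficient_of_variation(x):
    """Ratio of standard deviation to mean -- the metric Catt finds unreliable."""
    x = np.asarray(x, dtype=float)
    return np.std(x, ddof=1) / np.mean(x)

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series.

    Lower values indicate a more regular (more forecastable) series;
    higher values indicate a more irregular one.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x, ddof=1)  # common rule-of-thumb tolerance

    def phi(m):
        # Embed the series: rows are overlapping windows of length m
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev (max-abs) distance between every pair of windows
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of each window (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Two hypothetical monthly series of equal length
rng = np.random.default_rng(42)
t = np.arange(120)
seasonal = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 120)
noise = 100 + rng.normal(0, 7, 120)

for name, series in [("seasonal", seasonal), ("noise", noise)]:
    print(name,
          "CV =", round(coefficient_of_variation(series), 3),
          "ApEn =", round(approximate_entropy(series), 3))
```

On these two hypothetical series the coefficient of variation barely separates them, while approximate entropy comes out much lower for the regular seasonal pattern than for the noise, which is the sort of distinction Peter is after.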
In his article, Toward a More Precise Definition of Forecastability, John Boylan distinguishes forecastability from stability and attempts to tie forecastability more closely to forecast accuracy metrics such as the MAPE. John argues that forecastability should be measured by a band or interval whose lower bound is the lowest error we can hope to achieve and whose upper bound is the maximum error. With such a band, we could know how far we've come (reducing error from the upper bound) and how far we can still hope to go (reducing error toward the lower bound).
The main difficulty he observes lies in calculating a lower bound – how can we know the potential for forecasting accuracy? In general, we can’t pin this down, but we can frequently make useful approximations of the lower bound of forecast error by relating the time series to be forecast to its position in the product hierarchy, by combining forecasts, and by identifying more forecastable series.
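To give a feel for how such a band might be constructed in practice, here is a rough sketch (my own simplification, not John's procedure): take the error of a very crude benchmark as a stand-in for the upper bound and the error of a simple combination of forecasts as an optimistic proxy for the lower bound, then see where your current model falls between them. The series and the benchmark methods below are hypothetical.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly series: two years of history plus a 12-month holdout
rng = np.random.default_rng(0)
t = np.arange(36)
y = 200 + 2 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 8, 36)
train, test = y[:24], y[24:]

# Simple benchmark forecasts for the 12-month holdout
naive = np.repeat(train[-1], 12)              # repeat the last observation
seasonal_naive = train[-12:]                  # repeat the same month from last year
slope = (train[-1] - train[0]) / (len(train) - 1)
drift = train[-1] + slope * np.arange(1, 13)  # last value plus average historical trend
combined = (seasonal_naive + drift) / 2       # simple average of two methods

upper = mape(test, naive)             # crude benchmark error: stand-in for the upper bound
lower = mape(test, combined)          # combination error: optimistic proxy for the lower bound
current = mape(test, seasonal_naive)  # the model currently in use, located within the band

print(f"band: [{lower:.1f}%, {upper:.1f}%]  current model: {current:.1f}%")
```

This is only an approximation of the idea: the true lower bound is unknowable, which is exactly the difficulty John raises.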
Concluding this section is How to Assess Forecastability, Stephan Kolassa’s commentary on the Catt and Boylan papers. Stephan contrasts Catt’s preferred metric, approximate entropy, with Boylan’s lowest achievable forecast error (lower bound), seeking a practical synthesis of the two views. He also expands upon the meaning of the entropy metric and discusses key issues in the entropy calculation.
These articles are hardly the final word on the subject. They are a beginning, one designed to give the reader an appreciation of the role of forecastability assessment, the pros and cons of different metrics that have been proposed, and the practical challenges of establishing bounds for expected forecast errors. Foresight encourages you to provide feedback and alternative perspectives on the ideas offered here.
I’ve invited Len to provide a preview of each new issue of Foresight, so we’ll be hearing from him quarterly. In my next posting I’ll take Len up on his invitation to provide feedback. You can, too, by writing him at lentashman@forecasters.org.