Of course, forecasting the stock market is not perfectly analogous to forecasting demand for a product. The asking price for a stock is largely "anchored" by the price of its most recent trades. While market values may appear to randomly drift up and down, or in a general direction, we generally don't see seasonality or other consistent behavior like we do with the demand for many products.
While we can never hope to generate perfect forecasts, the task remains to forecast as well as can reasonably be expected (given the nature of the demand), and to do this as efficiently as possible. Ways to do this were examined in another Analytics2011 presentation.
Michael Tramnitzke of Beiersdorf AG, on Using Forecasting Resources Most Effectively
Michael discussed the challenges of forecasting at a multinational fast-moving consumer goods company (makers of products like Nivea skin care). Upgrading the analytical talent of the forecasters was difficult. Even with a structured process for training, it was hard for the forecasters to find time to study and prepare their assignments, and with frequent job turnover there were always new people to train.
They found that many demand patterns were not amenable to statistical forecasting, there was too much overriding and overfitting of models, and that sometimes it was difficult to explain (i.e. "sell") the forecast to management. (This is especially true when the most appropriate forecasting model is a flat line, and it looks like the forecaster is just being lazy by choosing it.)
In order to most effectively use its forecasting resources, Michael showed how Beiersdorf classified its items according to importance (using the commonly applied ABC analysis) and forecastability. Forecastability was rated X, Y, or Z (with X being easiest to forecast) and was based on an item's coefficient of variation (how volatile its demand pattern is) and how well it could be forecast with a seasonal random walk model.
Despite its well-recognized weaknesses, I have long advocated coefficient of variation (CV) as the single best indicator of an item's forecastability. (Just look at Rob Miller's comet chart for an illustration of the usual relationship.) However, CV can fail as a good indicator of forecastability when a product is highly seasonal, with big swings up and down during the year, but those swings are consistent and predictable. (Such products have high CV but low forecast error.) By adding the performance of a seasonal random walk model (e.g., using the same period from a year ago as your forecast), Michael improved the criteria for forecastability classification.
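To make this concrete, here is a small sketch (my own illustration, not Beiersdorf's actual implementation) of how CV and a seasonal random walk error could be computed for a demand history. The function name and the toy data are hypothetical; the point is that a consistently seasonal series can show a high CV yet a very low seasonal-naive error.

```python
import pandas as pd

def forecastability_stats(demand: pd.Series, season_length: int = 12):
    """Compute two simple forecastability indicators for a demand history.

    demand        : periodic demand values, ordered oldest to newest
    season_length : periods per seasonal cycle (12 for monthly data)
    """
    # Coefficient of variation: volatility of demand relative to its mean
    cv = demand.std() / demand.mean()

    # Seasonal random walk ("seasonal naive"): forecast = same period a year ago
    forecast = demand.shift(season_length)
    mask = forecast.notna()
    actual = demand[mask]
    forecast = forecast[mask]

    # Mean absolute percentage error of the seasonal naive forecast
    mape = (abs(actual - forecast) / actual).mean()

    return cv, mape

# A highly seasonal but consistent pattern: high CV, yet zero seasonal-naive error
seasonal = pd.Series([100, 120, 300, 500, 480, 150] * 4, dtype=float)
print(forecastability_stats(seasonal, season_length=6))
```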
Segmentation turned out to be the key to efficient and value adding use of forecasting resources. For the X (easy to forecast) items, these could be forecast adequately well by an exponential smoothing model. For volatile A and B (i.e. more important) items, manual overrides could add value to the system generated forecast. For the less important C items, it was probably not worth extra effort trying to improve their forecasts.
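Carrying the idea a step further, such a segmentation might be encoded as a simple lookup from the ABC / XYZ classes to a forecasting approach. The rules and wording below are purely a hypothetical sketch of that kind of business rule, not Beiersdorf's actual logic.

```python
# Hypothetical rules mapping XYZ (forecastability) classes to a forecasting
# approach; the choices here are illustrative only.
APPROACH = {
    "X": "automatic statistical forecast (e.g., exponential smoothing)",
    "Y": "statistical forecast; review and override A/B items where it adds value",
    "Z": "simple model; manual overrides on A/B items when justified",
}

def recommended_approach(abc_class: str, xyz_class: str) -> str:
    """Return a suggested forecasting approach for an item."""
    if abc_class == "C":
        # Low-importance items: extra manual effort is probably not worthwhile
        return "automatic forecast, no manual overrides"
    return APPROACH[xyz_class]

print(recommended_approach("A", "Z"))  # important but volatile item
print(recommended_approach("C", "Y"))  # less important item
```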
Another key finding was that reducing the complexity of model choices could improve accuracy. Business rules (such as the ABC / XYZ classifications) can guide where to apply what method, and advanced analytics can be provided to support the human experts.
Analytics2012
Mark your calendar for the next two events in the Analytics conference series:
- Analytics 2012 in Cologne, Germany (June 14-15, 2012)
- Analytics 2012 in Las Vegas, NV USA (October 8-9, 2012)