Q&A with Steve Morlidge of CatchBull (Part 1)


In a pair of articles published in Foresight, and in his SAS/Foresight webinar "Avoidability of Forecast Error" last November, Steve Morlidge of CatchBull laid out a compelling new approach to the question of "forecastability."

It is generally agreed that the naive model (i.e., the random walk or "no change" model) provides the "worst case" for how your forecasting process should perform. With the naive model, your last observed value becomes your forecast for the future. (So if you sold 100 units last week, your forecast for this week is 100. If you sell 150 this week, your forecast for next week becomes 150, and so on.)
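To make the mechanics concrete, here is a minimal sketch of the naive forecast in Python (the weekly sales figures are hypothetical):

```python
# Naive ("no change") forecast: each period's forecast is simply the
# previous period's actual value.
sales = [100, 150, 130, 160]  # hypothetical weekly actuals

# The forecast for week t is the actual from week t-1, so scoring
# starts with the second week.
forecasts = sales[:-1]
actuals = sales[1:]

for week, (f, a) in enumerate(zip(forecasts, actuals), start=2):
    print(f"Week {week}: forecast={f}, actual={a}, abs error={abs(a - f)}")
```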

The naive model generates forecasts with essentially no effort and no cost. So if your forecasting process performs worse than the naive model, something is terribly wrong. You might need to stop whatever it is you are doing and just use the naive model!

While the naive model provides what should be the "worst case" forecasting performance, a more difficult question is: what is the best case? What is the best forecast accuracy we can reasonably expect for a given demand pattern? In other words, what forecast error is avoidable? I spelled out Steve's argument in a four-part blog series last summer (Part 1, Part 2, Part 3, Part 4), and you can watch his webinar on demand. He has also published a new article in the Spring 2014 issue of Foresight.

In response to several questions we received about his material, Steve has graciously provided written answers which we'll share over a new series of posts.

Q&A with Steve Morlidge of CatchBull

Q: How does this naive forecast error work if your historic data has constant seasonality?

A: In theory, the greater the change in the signal from period to period, the lower the RAE (Relative Absolute Error) it is possible to achieve. But in practice, the more changeable the signal, the more difficult it usually is to forecast. For this reason we find it is difficult to beat an RAE of 0.5, and very difficult to consistently beat 0.7.
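For readers who want to compute this themselves: RAE is the forecast's absolute error divided by the naive model's absolute error over the same periods, so an RAE below 1.0 means you beat the naive model. A minimal sketch in Python, with hypothetical data:

```python
def rae(actuals, forecasts):
    """Relative Absolute Error: total absolute error of the forecast
    divided by that of the naive (same-as-last-period) model over the
    same periods. RAE < 1.0 means the forecast beat the naive model."""
    # The naive forecast for period t is the actual at t-1, so errors
    # are scored from the second period onward.
    model_err = sum(abs(a - f) for a, f in zip(actuals[1:], forecasts[1:]))
    naive_err = sum(abs(a - p) for a, p in zip(actuals[1:], actuals[:-1]))
    return model_err / naive_err

actuals   = [100, 150, 130, 160, 140]  # hypothetical demand history
forecasts = [110, 130, 145, 140, 150]  # hypothetical model forecasts
print(f"RAE = {rae(actuals, forecasts):.2f}")  # 0.54 here: better than naive
```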

The one exception to this general rule is seasonality, which is an example of a change in the signal that is often relatively easy to forecast. For this reason, businesses that are predictably seasonal in nature often have an average RAE marginally better than those of other businesses. Examples are businesses that sell more around Christmas and other public holidays. A business that sells ice cream, for instance, is clearly seasonal, but its seasonality is not predictable, so we wouldn't expect it to achieve a better score than the norm.

Even though the average RAE for predictably seasonal businesses is sometimes better than the norm, they usually still have a very high proportion of their portfolio with RAE in excess of 1.0.

As a result, I believe the RAE metric is valid and useful even for seasonal businesses, but for those products that are predictably seasonal your RAE targets should perhaps be slightly more stretching, by around 0.2 RAE points.

Q: What is a good test in Excel to determine if a data series is a random walk?

A: If a data series approximates a random walk it is impossible to forecast in the conventional sense; the naïve (same as last period) forecast is the optimal forecast. It is likely therefore that many forecasters are wasting a lot of time and energy trying to forecast the unforecastable and destroying value in the process.

It is very difficult to spot the existence of a random walk, however, because it is difficult to distinguish signal from noise, and very often a random walk can look like a trend. For instance, stock market price movements are very close to a random walk, but there is an industry of chartists who believe they can detect patterns in the data and make predictions based on them.

Randomness is a difficult concept from a mathematical point of view; it is simply the absence of pattern. It is impossible to prove that a data sequence is random. You can only state that you cannot find a pattern, and there are potentially an infinite number of patterns.

From a practical point of view, the best thing to do is to compare the naïve forecast error (from the ‘same as last period’ or ‘naïve 1’ method) to the errors from a handful of simple forecast processes: simple smoothing with and without a trend, and perhaps a naïve forecast based on prior-year actuals (‘naïve 2’) as a simple seasonal forecasting method. If all of these fail to beat the naïve forecast, there is a reasonable chance the series is ‘unforecastable’ in a practical sense, and the best strategy might be to use the naïve forecast, particularly if the item is a small one. A sketch of this comparison appears below.
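As a rough sketch of that check (written in Python rather than Excel; the smoothing constants, seasonal period, and monthly series below are illustrative assumptions, not part of Steve's answer), each simple method is scored on the same holdout periods and compared against naive 1:

```python
def ses_forecast(y, t, alpha=0.3):
    # Simple exponential smoothing fitted on y[0..t-1]; forecast for period t.
    level = y[0]
    for actual in y[1:t]:
        level = alpha * actual + (1 - alpha) * level
    return level

def holt_forecast(y, t, alpha=0.3, beta=0.1):
    # Exponential smoothing with a linear trend, fitted on y[0..t-1].
    level, trend = y[0], y[1] - y[0]
    for actual in y[1:t]:
        new_level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return level + trend

SEASON = 12  # assumed monthly data with an annual cycle

methods = {
    "naive 1 (last period)": lambda y, t: y[t - 1],
    "smoothing, no trend":   ses_forecast,
    "smoothing with trend":  holt_forecast,
    "naive 2 (year ago)":    lambda y, t: y[t - SEASON],
}

# Illustrative two years of monthly demand.
history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
           115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140]

# Score every method over the same holdout periods so the comparison is fair.
test = range(SEASON, len(history))
mae = {name: sum(abs(history[t] - f(history, t)) for t in test) / len(test)
       for name, f in methods.items()}

naive_mae = mae["naive 1 (last period)"]
for name, err in mae.items():
    flag = "beats naive" if err < naive_mae else "no better than naive"
    print(f"{name}: MAE = {err:.1f}, RAE = {err / naive_mae:.2f} ({flag})")
```

If every method's RAE hovers at or above 1.0 in a check like this, that is the practical signal described above: the series may be unforecastable, and the naive forecast is the sensible default.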


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
