Q&A with Steve Morlidge of CatchBull (Part 2)


Q: Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is) or do you think this will confuse stakeholders?

A: This depends on what is meant by stakeholders, and on what is meant by forecast accuracy.

If stakeholders means those people who contribute to the forecast process by providing the market intelligence that drives the judgemental adjustments made to the statistical forecast, the answer is a resounding ‘yes’…at least in principle. Many forecasts are plagued with bias, and a common source of bias infection is overly pessimistic or (more commonly) optimistic input from those supplying ‘market intelligence’.

Also, those responsible for making the decision to invest in forecasting processes and software need to know what kind of return it has generated.

But all too often impenetrable and meaningless statistics are foisted on stakeholders, using measures that are difficult for laymen to interpret and that provide no indication of whether the result is good or bad.

This is why I strongly recommend using a measure such as RAE which, by clearly identifying whether and where a forecast process has added value, is easy to understand and meaningful from a business perspective.

Q: When you say RAE needs to be calculated at the lowest level do you mean by item, or even lower such as by item shipped by plant X to customer Y?

A: Forecasting demand for the Supply Chain, and replenishing stock based on this forecast, is only economically worthwhile if it is possible to improve on the simple strategy of holding a defined buffer (safety) stock and replenishing it to make good any withdrawals in the period.

What implications does this have for error measurement?

First, this simple replenishment strategy is arithmetically equivalent to using a naïve forecast (assuming no stock-outs), and the level of safety stock needed to meet a given service level is determined by the level of forecast error (all other things being equal). So if a forecast has an RAE below 1.0, the business can hold less stock.
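To make the stock implication concrete, here is a minimal sketch using the standard textbook safety-stock approximation. The function name, the 95% service level and the numbers are purely illustrative assumptions on my part, not anything specific to Steve's method:

```python
from statistics import NormalDist

def safety_stock(error_std, lead_time_periods, service_level=0.95):
    """Textbook approximation: safety stock = z * sigma(error) * sqrt(lead time).
    The stock needed for a given service level scales with forecast error."""
    z = NormalDist().inv_cdf(service_level)
    return z * error_std * lead_time_periods ** 0.5

# If a forecast's errors are roughly 20% smaller than the naive forecast's
# (broadly what an RAE of 0.8 suggests), less safety stock is needed.
print(safety_stock(error_std=100, lead_time_periods=4))  # naive-level error
print(safety_stock(error_std=80, lead_time_periods=4))   # better forecast
```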

The second is the level of granularity at which error should be measured. Since the goal is to have the right amount of stock at the right place at the right time, error should be measured at the level of individual stock items by location, in buckets which (as far as possible) match the frequency at which stock is replenished. Measuring error across all locations will understate effective forecast error, since having the right amount of stock in the wrong place is costly. And while it might be helpful to identify the source of error if different customers are supplied from the same stock, measuring error at a customer level will overstate effective error.
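As an illustration of the level of granularity being described, the sketch below computes RAE separately for each item/location combination in weekly buckets. The column names and the tiny data set are my own assumptions, not a prescribed layout:

```python
import pandas as pd

# Hypothetical demand history: one row per item, location and weekly bucket
df = pd.DataFrame({
    "item":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "location": ["DC1"] * 4 + ["DC2"] * 4,
    "week":     [1, 2, 3, 4] * 2,
    "actual":   [100, 120, 90, 110, 40, 55, 35, 50],
    "forecast": [105, 115, 95, 100, 42, 50, 40, 48],
})

def rae_for_group(g):
    g = g.sort_values("week")
    naive_error = (g["actual"] - g["actual"].shift(1)).abs()   # 'same as last period'
    forecast_error = (g["actual"] - g["forecast"]).abs()
    # Skip the first bucket, which has no naive forecast to compare against
    return forecast_error.iloc[1:].mean() / naive_error.iloc[1:].mean()

# RAE per item/location -- not pooled across all locations
print(df.groupby(["item", "location"])[["week", "actual", "forecast"]].apply(rae_for_group))
```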

Q: What are your thoughts on using another model for benchmarking forecast error besides the naive model?

A: Relative Absolute Error (RAE) is a measure which compares the average absolute forecast error with that from a simple ‘same as last period’ naïve forecast. This approach has the advantage of simplicity and ease of interpretation. It is easy to calculate and, since the naïve forecast is the crudest forecasting method conceivable, a failure to beat it is very easy to understand – it is baaad!
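For readers who want that arithmetic spelled out, here is a minimal sketch of the calculation for a single series; the function name and the sample numbers are mine, purely for illustration:

```python
def rae(actuals, forecasts):
    """Relative Absolute Error: mean absolute forecast error divided by the
    mean absolute error of a 'same as last period' naive forecast.
    RAE < 1.0 means the forecast beat the naive benchmark."""
    # Start from the second period, since the first has no naive forecast;
    # the naive forecast for period t is simply the actual from period t-1.
    forecast_errors = [abs(a - f) for a, f in zip(actuals[1:], forecasts[1:])]
    naive_errors = [abs(a - prev) for a, prev in zip(actuals[1:], actuals[:-1])]
    return (sum(forecast_errors) / len(forecast_errors)) / (sum(naive_errors) / len(naive_errors))

# Example: an RAE well below 1.0 -- the forecast added value over the naive benchmark
print(rae([100, 110, 105, 120, 115], [102, 108, 107, 118, 117]))
```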

But the naïve forecast is more than a mere benchmark.

The ultimate economic justification for forecasting is that it is more efficient than a simple replenishment strategy whereby stock is maintained at a constant level by making good the sales made in the prior period. A naïve forecast is mathematically equivalent to this strategy, so the degree to which a forecast improves on it is a measure of how much value the forecast has added. RAE, where the naïve forecast provides the denominator in the equation, is therefore economically meaningful in a way that would not be possible if another method were chosen.

Secondly, the naïve forecast error reflects the degree of period-to-period volatility. This means that it is a good proxy measure for the forecastability of the data set and, given certain assumptions, it is possible to make theoretical inferences about the minimum level of forecast error. As a result, a specific RAE provides an objective measure of how good a forecast really is in a way that is not possible if another forecast method were used to provide the denominator in the equation. In that case the result would say as much about the performance of the benchmark method as it does about the performance of the actual method…and it would be impossible to disentangle the impact of one from another.


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
