Changing the paradigm for business forecasting (Part 6 of 12)


Why the Attraction for the Offensive Paradigm?

In addition to the reasons provided by Green and Armstrong, I'd like to add one more reason for the lure of complexity:

  • You can always add complexity to a model to better fit the history.

In fact, you can always create a model that fits the time series history perfectly. But exceptional fit to history is no reason to believe a model is appropriate for forecasting the future.

[Figure: Four Alternative Models]

Closely fitting a model to history is one of the dirty tricks of selling forecasting services or software -- getting the client to think that a close fit to history (which is easy to do) is proof of a good forecasting model. But it isn’t. While fit to history is a relevant consideration, it shouldn’t be the sole consideration in model selection. Consider this example:

There are four historical data points, with sales of 5, 6, 4, and 7 units. To forecast future sales, we build four models that progressively improve the fit to history, including a perfect fit to history.

Which model should we select to generate forecasts? I'd argue that the two best fitting models are the least appropriate -- the forecasts they generate are extremely optimistic. In the absence of any other information, only the two worst fitting models look reasonable.
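The point can be made concrete with a short sketch (Python and polynomial models are my illustration here, not the post's): fitting polynomials of increasing degree to the four data points, the cubic fits history perfectly yet extrapolates to a wildly optimistic next-period forecast, while the "worst fitting" simple average stays near the data.

```python
import numpy as np

history = np.array([5.0, 6.0, 4.0, 7.0])  # the four historical data points
t = np.arange(len(history))               # periods 0..3

forecasts = {}  # next-period forecast by polynomial degree
fit_sse = {}    # sum of squared errors against history by degree
for degree in range(4):  # progressively more flexible models
    coeffs = np.polyfit(t, history, degree)
    fit_sse[degree] = float(np.sum((history - np.polyval(coeffs, t)) ** 2))
    forecasts[degree] = float(np.polyval(coeffs, len(history)))

for d in range(4):
    print(f"degree {d}: SSE to history = {fit_sse[d]:5.2f}, "
          f"next-period forecast = {forecasts[d]:5.2f}")
```

The cubic drives the fit error to zero but forecasts 23 units for the next period; the flat average (degree 0) fits history worst yet forecasts a far more plausible 5.5.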

The New Defensive Paradigm for Business Forecasting

Hopefully it is not going to take 100 years to make the shift, but I want to propose a new “Defensive” paradigm for business forecasting.

I’m talking about “defensive” in the sense of “playing defense” in sports – where you are trying to prevent bad things from happening, like your opponent scoring. This isn't the psychological / emotional sense of the word – although we sometimes have to get defensive and emotional in justifying our forecasts.

One of the linchpins of the new Defensive paradigm is that there is much less interest in forecast accuracy in itself. It is recognized that the accuracy you achieve is limited by the nature of the behavior you are forecasting, its “forecastability.” So instead of focusing on the level of accuracy itself, you focus on whether you achieve a level of accuracy that is “reasonable to expect” given the nature of what you are forecasting.

Under the Defensive paradigm a statement such as “I achieved a MAPE of 20%” is not very interesting or useful.

Under the new paradigm, the forecaster is more concerned about their performance relative to simpler and cheaper alternative forecasting methods, and to benchmarks like a naïve model.

Role of the Naive Model

The random walk or “no-change” model is generally accepted as the ultimate point of comparison. The no-change model uses your latest observation as the forecast for all future values: if you sold 100 units last month, your forecast for this month – and every month thereafter – is 100.

The no-change model is generally accepted as the upper bound on the forecast error you should be achieving.

It is the “do nothing” forecast. It can be computed with virtually no effort or cost – it is essentially a free forecasting method. As such it provides the worst case – the accuracy you can achieve by doing nothing (and just using the latest observation).

The question is: If you are spending time and money with a forecasting process that performs worse than the naïve model…why bother?
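Computing the no-change benchmark takes only a few lines. Here is a minimal sketch (the sales figures are illustrative, not from the post) that scores the naive model with MAPE – the error bar any paid forecasting process should beat:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, expressed in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_forecasts(series):
    """No-change model: the forecast for each period is the prior actual."""
    return series[:-1]

sales = [100, 110, 105, 120, 115, 125]  # hypothetical monthly sales
actuals = sales[1:]                     # periods we can actually score
naive = naive_forecasts(sales)

naive_mape = mape(actuals, naive)
print(f"naive (no-change) MAPE: {naive_mape:.1f}%")
```

If your forecasting process yields a higher MAPE than this free benchmark on the same data, it is subtracting value rather than adding it.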

The Objective

While the Offensive paradigm is about trying to do more, the Defensive paradigm is about trying to do less. It sees the objective of forecasting as generating forecasts that are as accurate as can reasonably be expected (given the nature of what you’re trying to forecast) – and doing this as efficiently as possible.

The Defensive paradigm acknowledges the limits of what forecasting can deliver – and recognizes the foolishness of unreasonable accuracy expectations. For example,

Suppose you work for some strange company in the business of flipping FAIR coins. Your job is to forecast Heads or Tails for each daily flip, and over a long career you’ve forecasted correctly just about 50% of the time. You get a new boss who insists you increase your forecast accuracy to 60%.

So what do you do?

You get fired – because other than a lucky streak now and then, a long term average of 50% is the best, in fact the ONLY level of accuracy you can achieve given the nature of what you are trying to forecast. Any effort to try to improve your forecast is just a waste of time.

[See all 12 posts in the business forecasting paradigms series.]


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

