Sports provide us with many familiar clichés about playing defense, such as:
- Defense wins championships.
- The best defense is a good offense.
Or my favorite:
- The best defense is the one that ranks first statistically in overall defensive performance, after controlling for the quality of the offenses it has faced.
Perhaps not the sort of thing you hear from noted scholars of the game like Charles Barkley, Dickie V, or the multiply-concussed crew of Fox NFL announcers. But it captures the essential fact that performance evaluation, when done in isolation, may lead to improper conclusions. (A team that plays a weak schedule should have better defensive statistics than one that plays only against championship caliber teams.)
Likewise, when we evaluate forecasting performance, we can't simply look at the MAPE (or whatever traditional metric is being used). We have to consider the difficulty of the forecasting task, and judge performance relative to that difficulty.
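As a reference point, MAPE is typically computed as the average absolute error expressed as a percentage of the actual value. A minimal sketch, using made-up numbers (not from any real study), shows why the metric alone tells us little: the same MAPE can come from an easy task forecast poorly or a hard task forecast well.

```python
# Minimal MAPE sketch -- illustrative numbers only.
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

# Two forecasts, each off by 10% of the actual value:
print(mape([100, 200], [110, 180]))  # 10.0
```

A 10% MAPE might be excellent for volatile demand and terrible for a stable series, which is exactly why performance has to be judged against the difficulty of the task.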
It is possible to characterize forecasting efforts as either offensive or defensive.
Offensive efforts are the things we do to extract every last bit of accuracy we can hope to achieve. This includes gathering more data, building more sophisticated models, and incorporating more human inputs into the process.
Doing these things will certainly add cost to the forecasting process. The hope is that they will make the forecast more accurate and less biased. (Be aware, however, that by a curious quirk of nature, added complexity can work against accuracy, as the forthcoming Green & Armstrong article "Simple versus complex forecasting: The evidence" discusses.)
Heroic efforts may be justified for important, high-value forecasts that have a significant impact on overall company success. But for most things we forecast, it is sufficient to come up with a number that is "good enough" to make a planning decision. An extra percentage point or two of forecast accuracy -- even if it could be achieved -- just isn't worth the effort.
A defensive forecaster is concerned not so much with how good a forecast can be, but with limiting how bad it can be.
Defensive forecasters recognize that most organizations fail to achieve the best possible forecasts. Many organizations actually forecast worse than if they did nothing at all and simply used the latest observation as the forecast (the naïve model). As Steve Morlidge reported in Foresight, 52% of the forecasts in his study sample failed to improve upon the naïve model. So more than half the time, these organizations were spending resources just to make the forecast worse.
The defensive forecaster can use FVA analysis to identify those forecast process steps that are failing to improve the forecast. The primary objective is to weed out wasted efforts, to stop making the forecast worse, and to forecast at least as well as the naïve model.
Once the organization is forecasting at least as well as the naïve model, then it is time to hand matters back over to the offensive forecasters -- to extract every last percent of accuracy that is possible.