Forecast Value Added Q&A (Part 3)

With this Q&A Part 3, we are about halfway through the questions submitted during the FVA webinar. We did over 15 minutes of live Q&A at the end of the webinar and covered many of the submitted questions at that time; however, I always prefer to issue complete written responses to all questions. My reasoning is onefold:

Unlike this generation's great orators, such as Miss Utah 2013, Miss Teen South Carolina 2007, or Texas Gov. Rick Perry*, I tend to be somewhat rambling, incoherent, and incomplete in my live responses.

Therefore, let the written responses continue...

Q: Will the webinar be available for replay?

Q: Is it possible to get the webex recording?

Here is the 28-minute webinar recording: "Forecast Value Added: A Reality Check on Forecasting Practices."

Q: Is the MAPE presented here [in FVA reports] using a holdout period? Or calculated by back-fitting the original series?

Q: It seems like this approach is to minimize the value of MAPE (the holdout period). What if the future may not necessarily be consistent with respect to the holdout?

FVA analysis compares the forecasting performance of a naive model to the performance of your various process steps (computer-generated "statistical forecast", analyst forecast, consensus or collaborative forecast, executive-approved forecast, etc.). As such, it compares the "actual" value in a period (e.g. actual sales, actual revenue, actual number of calls received, actual amount of insurance claims, or whatever else it is you are forecasting) to what the naive model and the various process steps had forecast for that period. FVA analysis is conducted after the "actuals" are known, so concepts like holdout periods and back-fitting are not applicable to FVA analysis.
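To make the mechanics concrete, here is a minimal sketch of an FVA calculation in Python, assuming MAPE as the comparison metric and a lag-1 (random walk) naive forecast. The data, column names, and process steps are hypothetical stand-ins for whatever your own process records.

```python
import pandas as pd

# Hypothetical history: actuals plus the forecasts recorded at each process step
data = pd.DataFrame({
    "actual":      [100, 120,  90, 110, 105],
    "naive":       [ 95, 100, 120,  90, 110],   # random walk: last period's actual
    "statistical": [102, 115,  95, 108, 100],
    "consensus":   [105, 118,  93, 112, 103],
})

def mape(actual, forecast):
    """Mean absolute percent error, in percent."""
    return (abs(actual - forecast) / actual).mean() * 100

naive_mape = mape(data["actual"], data["naive"])
for step in ["statistical", "consensus"]:
    step_mape = mape(data["actual"], data[step])
    # FVA = naive-model error minus process-step error: positive means the
    # step improved on the naive model, negative means it made things worse
    print(f"{step}: MAPE {step_mape:.1f}%, FVA {naive_mape - step_mape:+.1f} pts")
```

Because everything is computed against known actuals, there is nothing to hold out: the naive model and each process step are scored on the same realized periods.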

Q: Can we use FVA if the data series available is short?

Yes, there is no minimum number of periods of data required for calculating the FVA metric. For example, for a new product you can calculate the FVA metric as soon as you have one period of actuals. However, I would warn against drawing any conclusions or taking any process-tuning actions with such limited data. Over a single period, or just a few periods, the observed FVA may simply be due to chance, so you can't draw definitive conclusions about whether a process step is adding or taking away value. You need sufficient evidence that the observed FVA is a "real" difference, not just due to chance.
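To see why limited data is dangerous, here is a purely illustrative simulation (not from the webinar): two "forecasts" with identical skill are scored over histories of different lengths, so the true FVA is zero and any observed FVA is pure chance. The noise level of roughly 10 percentage points of error is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Both error streams are drawn from the same distribution, so true FVA = 0
for n_periods in (1, 4, 12, 52):
    fva_runs = []
    for _ in range(1000):
        naive_ape = np.abs(rng.normal(0, 10, n_periods))  # absolute % errors
        step_ape = np.abs(rng.normal(0, 10, n_periods))
        fva_runs.append(naive_ape.mean() - step_ape.mean())
    low, high = np.percentile(fva_runs, [5, 95])
    print(f"{n_periods:>2} periods: 90% of chance-only FVA falls in "
          f"[{low:+.1f}, {high:+.1f}] pts")
```

With a single period, chance alone routinely produces FVA swings of many points in either direction; the range shrinks as more periods accumulate.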

Q: What time frame is enough to base definitive conclusions about accuracy across different series?

Short answer: It depends on the magnitude and consistency of the observed differences.

It is helpful to think back to science class and the notion of the null hypothesis. If we take a scientific approach to evaluating the effectiveness of our forecasting process, we should begin with the null hypothesis that our forecasting process has no effect. That is, we assume that all our efforts are resulting in a forecast that is not discernibly better (or worse) than just using a naive model.

By collecting data -- the forecasts created by various steps in our process (statistical model, analyst override, etc.) along with the "actuals" we were trying to forecast -- we can determine whether there is sufficient evidence to reject the null hypothesis. Further exploration of this topic, and the relevant statistical tests that can be applied, will be the subject of a future blog post.
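While the details are deferred to that future post, here is a hedged sketch of one test that could be applied: a Wilcoxon signed-rank test paired on per-period absolute percent errors. The choice of test and the error values below are my illustrative assumptions, not a prescription from the webinar.

```python
import numpy as np
from scipy.stats import wilcoxon

# Per-period absolute percent errors for the naive model and one process step
# (hypothetical values; in practice, computed from your recorded forecasts)
naive_ape = np.array([12.0, 8.5, 15.2, 9.8, 11.3, 14.1, 7.9, 10.5])
step_ape = np.array([10.1, 9.0, 12.8, 8.2, 10.0, 12.5, 8.4, 9.1])

# Null hypothesis: the process step's errors are no different from the naive model's
stat, p_value = wilcoxon(naive_ape, step_ape)
fva = naive_ape.mean() - step_ape.mean()
print(f"Observed FVA: {fva:+.1f} pts, p-value: {p_value:.3f}")
```

A small p-value is evidence to reject the null hypothesis that the step adds no value; a large one means the observed FVA could plausibly be chance.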

Q: What other statistic (other than MAPE) is useful to compare across models for FVA analysis?

Q: You mentioned MAPE. Does WMAPE work the same in FVA?

There are dozens of forecasting performance metrics available (SAS Forecast Server has 47 built in), so it is really up to your personal preference which one to use for your FVA analysis. Most commonly used are MAPE or one of its various flavors (like Weighted MAPE or Symmetric MAPE), or some form of Forecast Accuracy (which I've seen computed in several different ways). Bias (whether your forecast is chronically too high or too low) is another good one to use.
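For concreteness, here is a sketch of how WMAPE and bias could sit alongside MAPE in the same analysis. Definitions of these metrics vary from one organization to the next, so treat the formulas below as one common set of conventions rather than the only correct ones.

```python
import numpy as np

def mape(actual, forecast):
    # Simple average of per-period absolute percent errors
    return np.mean(np.abs(actual - forecast) / actual) * 100

def wmape(actual, forecast):
    # Volume-weighted: total absolute error over total actuals, so
    # high-volume periods (or items) count for more
    return np.abs(actual - forecast).sum() / actual.sum() * 100

def bias(actual, forecast):
    # Signed percent error: positive means chronic over-forecasting
    return (forecast - actual).sum() / actual.sum() * 100

actual = np.array([100.0, 120.0, 90.0, 110.0])
forecast = np.array([110.0, 125.0, 95.0, 118.0])
print(f"MAPE {mape(actual, forecast):.1f}%  "
      f"WMAPE {wmape(actual, forecast):.1f}%  "
      f"bias {bias(actual, forecast):+.1f}%")
```

Whichever metric you pick, FVA works the same way: compute the metric for the naive model and for each process step, and take the difference. So yes, WMAPE works the same in FVA as MAPE does.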

-----------------------

*A quick search of YouTube will yield plenty of examples of Gov. Perry's Cicero-caliber oratory.

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
