Forecast Value Added Q&A (Part 4)


Q: What is a legitimate goal to expect from your FVA...5%, 10%?

Q: How do we set a target FVA that forecasters can drive towards?

The appropriate goal is to do no worse than a naive model, that is, FVA ≥ 0. Sometimes, especially over short periods of time, you may observe FVA < 0 even though your methods are good, and over longer periods the FVA will turn out positive. So don't start handing out pink slips to your forecasters just because you see lousy FVA after a few data points.
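To make the arithmetic concrete, here is a minimal sketch of an FVA calculation in Python, using MAPE as the error metric. All the numbers are made up, and your process may well use a different metric; the point is simply that FVA is the naive model's error minus your process's error.

```python
# Minimal FVA sketch (all numbers hypothetical).
# FVA = (error of the naive model) - (error of the forecasting process),
# here measured as the difference in MAPE, in percentage points.

actuals = [100, 120, 90, 110, 130, 105]

# Random walk: the forecast for each period is the previous period's actual.
naive_fc = actuals[:-1]                  # forecasts for periods 2..6
process_fc = [115, 95, 105, 124, 110]    # what the forecasting process produced
held_out = actuals[1:]                   # the actuals those forecasts are judged against

def mape(forecasts, observed):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(f - a) / a for f, a in zip(forecasts, observed)) / len(observed)

fva = mape(naive_fc, held_out) - mape(process_fc, held_out)
print(f"FVA: {fva:+.1f} percentage points")  # positive: the process beat the naive model
```

With these illustrative numbers the FVA comes out strongly positive; over a short window like this, though, a few unlucky periods could just as easily push it negative.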

Also be aware that sometimes the random walk IS the appropriate forecast model, and that more sophisticated statistical models will not do any better.

I'm reluctant to suggest more specific numerical values of FVA. The fundamental problem with performance objectives is that if they are set too high (i.e. unachievable), people just give up or find a way to cheat. And if they are set too low (i.e. easily attained), they can encourage everyone to be lazy. (Beating a naive model sounds like it should be an easy goal, but is often more difficult than it appears!)

Determining the best forecast you can expect to achieve (and therefore the FVA you can achieve) is a difficult problem. Forecastability has been the subject of considerable discussion in the Foresight journal in recent years. (See, for example, the Spring 2009 special issue on assessing forecastability, Sean Schubert's article on forecastability DNA in the Summer 2012 issue, or the forthcoming article on the avoidability of forecast error by Steve Morlidge in the Summer 2013 issue. I'll be doing a blog post on Steve's article very soon.)

Q: You spoke recently about the rolling of demand and the bias it causes. The reason businesses do this is to preserve the quarterly revenue goal. How would you balance this goal of revenue preservation vs. not biasing the forecast?

As long as it works, I have no objection to employing the "hold and roll" to apply early-in-the-quarter misses to the forecasts in the remaining periods of the quarter. So if your organization is able to put sales and marketing programs in place to address the early-in-the-quarter misses (e.g. reduce prices or increase advertising if the early periods are low, or perhaps increase prices to stifle demand if the early periods are high), then that is great.
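To make the mechanics concrete, here is a minimal sketch of the "hold and roll" with made-up numbers (the monthly forecasts and the month 1 actual are purely illustrative):

```python
# Minimal "hold and roll" sketch (illustrative numbers only).
# The quarterly total is held fixed, and an early-month miss is rolled
# into the forecasts for the remaining months of the quarter.

quarter_forecast = [100, 100, 100]   # monthly forecasts; quarterly goal = 300
month1_actual = 80                   # month 1 comes in 20 units short

miss = quarter_forecast[0] - month1_actual             # 20 units
remaining = quarter_forecast[1:]                       # months 2 and 3
rolled = [f + miss / len(remaining) for f in remaining]

print(rolled)  # [110.0, 110.0] -- the quarterly total is preserved at 300
```

The question to keep asking is whether demand in months 2 and 3 actually comes in higher, or whether the roll just builds a systematic over-forecasting bias.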

I simply want to encourage you to measure how well the "hold and roll" works for you. Just rolling a miss into future periods (to maintain the quarterly goal) and expecting the increased demand to happen may not be a great idea. However, if you use the early miss to change your demand generation practices (e.g. increase advertising), then it may work fine for you.

Just be aware of the dangers of chronically over- (or under-) forecasting, such as excess inventory (or unfilled orders).

Q: Could an example of a naive forecast be the volume of the month times the weight of each week in the same month last year?

This sounds like a variation of the seasonal random walk. However, instead of using the "same week last year" as your forecast for this year, you are generating a forecast by month for this year and then using last year's weekly splits for each month to break it into weeks. While this isn't one of the "traditional" naive models, I would characterize it as a very simple model, and many organizations use very simple models for comparison. I would still suggest using the random walk as the ultimate baseline, to see how much better (or worse) your simple model is doing.
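Here is a minimal sketch of the model described in the question; last year's weekly actuals and this year's monthly forecast are made up for illustration:

```python
# Minimal sketch: split a monthly forecast into weeks using last year's
# weekly proportions for the same month (all data hypothetical).

last_year_weeks = [30, 20, 25, 25]   # last year's weekly actuals for this month
month_forecast = 120                 # this year's forecast for the same month

weights = [w / sum(last_year_weeks) for w in last_year_weeks]
weekly_forecast = [month_forecast * w for w in weights]

print(weekly_forecast)  # [36.0, 24.0, 30.0, 30.0]
# A seasonal random walk, by contrast, would simply reuse [30, 20, 25, 25].
```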

Q: How would you approach a mix of long lead time and short lead time products? Would you choose to separate them somehow when measuring FVA?

In designing your forecasting process, it can make very good sense to segment long lead time vs. short lead time products. You may want to use different models and an entirely different process for each segment. (In your software, you could separate them into different branches of your forecasting hierarchy.)

This could also make a difference in evaluating forecasting performance. We usually measure performance of the forecast that was "locked" at the lead time. So if you have products manufactured overseas with a four month lead time, it would make sense to measure performance of the forecast that was made four months prior. For products with a lead time of one week, you would typically measure performance of the forecast as of one week prior.

FVA calculations should be consistent with this. In particular, the naive forecast should be based on the lead time (so the random walk would use the "actual" value four months prior for the long lead time products, and one week prior for the short lead time products, in this example).
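A minimal sketch of a lead-time-based random walk, with hypothetical data (the series and lead times are made up for illustration):

```python
# Minimal sketch: random walk at lead time (all data hypothetical).
# The naive forecast for period t is the actual observed one lead time earlier.

def naive_at_lead_time(actuals, lead_time):
    """Return naive forecasts for periods lead_time..end of the series."""
    return [actuals[t - lead_time] for t in range(lead_time, len(actuals))]

weekly_actuals = [50, 52, 48, 55, 53, 60]
print(naive_at_lead_time(weekly_actuals, lead_time=1))  # short lead time: one week prior
# For the four-month lead time product (measured monthly), lead_time=4 would
# use the actual from four months earlier as the naive forecast.
```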


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
