Forecast Value Added Q&A (Part 2)


Q: Could you send me the presentation? With audio if possible.

If you'd like a PDF of the slides, email me directly: mike.gilliland@sas.com

For the audio, the webinar recording is available for free on-demand review: FVA: A Reality Check on Forecasting Practices

Q: Can we get the Newell Rubbermaid case study referred to here?

A brief version of what Newell Rubbermaid reported is in the webinar slides. A full version has been presented by Sean Schubert and/or Ryan Rickard at various conferences including IBF (May 2011), INFORMS (November 2011), and IBF (October 2012).

Sean and Ryan were also interviewed in SupplyChainBrain. You can view their video "Implementing Forecast Value-Added Analysis" on SupplyChainBrain.com (requires registration).

Q: To build a model on monthly data to capture seasonality over the year, how many months of data do you recommend as a minimum?

The usual answer is three full years (so you have 3 data points for every month, with which to compute the seasonality indices). But often we don't have the luxury of three full years of data (because the product is new, or our systems don't save that much history at the granularity of detail we need).

If you don't have three full years for an item, but have reason to believe it will generally follow the pattern of another product or group, you could utilize the seasonal indices of the other product or group. In forecasting, we should have no expectation of ever being perfect, so we shouldn't obsess about not having the perfect data. Instead, we want to find practical ways to generate a forecast that is reasonable and "in the ballpark" -- hopefully good enough to make good decisions.
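
For illustration, here is a minimal sketch of both ideas in Python (assuming pandas, with monthly history held in a date-indexed Series; the function names are mine, and the simple ratio-to-overall-mean index is just one of several ways to compute seasonality):

```python
import pandas as pd

def seasonal_indices(monthly: pd.Series) -> pd.Series:
    """Average ratio-to-mean index for each calendar month.

    Expects a Series indexed by monthly dates, ideally covering three
    or more full years (3+ observations per calendar month).
    """
    ratios = monthly / monthly.mean()
    return ratios.groupby(ratios.index.month).mean()

def indices_with_fallback(item_history: pd.Series,
                          proxy_history: pd.Series,
                          min_years: int = 3) -> pd.Series:
    """Use the item's own indices if it has enough history;
    otherwise borrow the indices of a similar product or group."""
    if len(item_history) >= 12 * min_years:
        return seasonal_indices(item_history)
    return seasonal_indices(proxy_history)
```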

Q: Could you please repeat what happens when we pile the volume not made in the previous month onto later months, to maintain the quarterly or fiscal year volume goal?

Suppose you forecast 10,000 units for the quarter, in monthly buckets of 3,000, 3,000, and 4,000. If the first month comes in at 2,000, what do you do next?

One argument could be that since we missed the first month badly, demand might not be as high as we originally expected, and we should therefore reduce the forecast for future months. This is not an unreasonable approach.

Another argument could be that our forecasts are imperfect, there is a lot of volatility in demand, and that selling 2,000 when we forecast 3,000 is not all that surprising and could just be due to chance. So, lacking any additional information about changes in marketplace demand, we simply leave the future forecasts unchanged. This is not an unreasonable approach.

A third approach, known as the "hold and roll," is to hold the quarterly forecast at 10,000, and shift the 1,000 unit first month miss into the second and third months. So we might change their forecasts to 3,500 and 4,500 to maintain 10,000 for the quarter. While this is not necessarily an unreasonable approach, it can prove to be unreasonable if you do a simple performance tracking exercise.
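
To make the arithmetic concrete, here is a minimal sketch of the hold and roll (splitting the miss evenly across the remaining months, as in the example above; spreading it in proportion to the remaining forecasts is another common variant):

```python
def hold_and_roll(monthly_forecast, actuals_so_far):
    """Hold the period total fixed and spread the cumulative miss
    evenly over the remaining months."""
    n_done = len(actuals_so_far)
    miss = sum(monthly_forecast[:n_done]) - sum(actuals_so_far)  # positive = sold less than planned
    remaining = monthly_forecast[n_done:]
    return [f + miss / len(remaining) for f in remaining]

# Example from the text: plan 3,000 / 3,000 / 4,000 and the first month comes in at 2,000.
print(hold_and_roll([3000, 3000, 4000], [2000]))  # -> [3500.0, 4500.0]
# The quarterly total is still 10,000 -- the 1,000-unit miss has been pushed into months 2 and 3.
```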

It is a common (and perfectly reasonable) assumption that our forecasts should improve as we get closer to the period being forecast. Today in June I could create a forecast for December. In July, based on the new information of June's actual, I could revise my December forecast. In August, based on the new information of July's actuals, I could again revise my December forecast. And so on.

The idea is that by the beginning of December, when I have my actuals through November (as well as all the other relevant information that has been accumulated over the last several months), I should be able to produce a December forecast that is more accurate than the December forecast I started with back in June. But what if it isn't? What if my December forecast accuracy is found to have gotten worse the closer I got to December?

In organizations where the forecast does tend to get worse the closer you get to the period being forecast, the "hold and roll" approach may be the culprit. The hold and roll forces you to do something counter-intuitive: increase future forecasts after you fall short, or reduce them after you exceed the plan.
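
One simple way to run that performance tracking exercise is to archive the forecast made at each lead time and compare error by lead time. A minimal sketch, with illustrative column names and numbers (any forecast archive recording target period, lead time, forecast, and actual would do):

```python
import pandas as pd

# Hypothetical forecast archive: one row per (target month, lead time) snapshot.
archive = pd.DataFrame({
    "target_month":  ["2023-12"] * 3 + ["2024-01"] * 3,
    "months_before": [3, 2, 1, 3, 2, 1],          # lead time when the forecast was made
    "forecast":      [4000, 4200, 4600, 3000, 3100, 2700],
    "actual":        [4100, 4100, 4100, 2900, 2900, 2900],
})

# Absolute percentage error of each snapshot, then average by lead time.
archive["ape"] = (archive["forecast"] - archive["actual"]).abs() / archive["actual"]
mape_by_lead = archive.groupby("months_before")["ape"].mean().sort_index(ascending=False)
print(mape_by_lead)
# Error should shrink as months_before decreases. In this illustrative data it grows
# instead -- the red flag described above, and a cue to look for hold-and-roll behavior.
```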

 


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
