Practical First Steps #1 - The Accuracy vs. Volatility Scatterplot

Last week I was a guest of Gaurav Verma on the SAS Applying Business Analytics Web Series, and presented “What Management Must Know About Forecasting.” One of the most important things you can bring to management’s attention is the benefit of making your demand forecastable.

In forecasting we tend to treat demand patterns as given, as if there were nothing we could do about them. This “passive” attitude makes our performance contingent on the forecastability of the demand patterns. If the patterns are easy to forecast then we should do fine, but if they are not, we risk failing to meet our forecast accuracy objectives.

In my July 1 posting on The BFD, I used this scatterplot to illustrate the usual relationship between demand volatility (along the horizontal axis), and forecast accuracy (the vertical axis). I argued that while imperfect, the volatility of a demand pattern (as measured by its coefficient of variation) is a good indicator of how accurately we can expect to forecast that demand. As a practical first step to improving your forecasting performance, it is worthwhile to construct this scatterplot with your own data…and here’s how:

1) Determine at what level you want to do the analysis. This is typically the level at which you focus your forecasting and planning efforts. For example, a manufacturer might forecast the number of units demanded for each Item, or at each Item / Distribution Center combination. A retailer might forecast unit sales by Item / Store. An insurance company might forecast claims by Region or Postal Code. (In the above scatterplot, this manufacturer sold roughly 500 Items through 10 DCs, and forecasted at the Item / DC level. There are, therefore, roughly 5000 (= 500 x 10) points in the plot.) Suppose for this example that we are doing the analysis by Item.

2) You will need to gather the last year of data for each Item, in whatever time bucket you use (typically week or month). (I prefer weekly data when it is available, but many organizations plan in monthly buckets, so we'll use monthly for this example.) For each Item you will need to capture the monthly Sales, and whatever the Forecast was for that month. So you will create a dataset with four variables: Item, Month, Sales, and Forecast. The four columns on the left show what this dataset might look like with the 12 monthly observations for Item XXX.
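
If you work with your data in something like Python and pandas, assembling this dataset might look like the sketch below. The file name and column names (Item, Month, Sales, Forecast) are illustrative assumptions, not something prescribed in this post; substitute whatever your own systems use.

```python
import pandas as pd

# Assumed input: one row per Item per month, holding the actual Sales for
# that month and the Forecast that was in place for it. The file name and
# column names are illustrative.
df = pd.read_csv("demand_history.csv")  # columns: Item, Month, Sales, Forecast

# Keep only the most recent 12 monthly buckets.
df["Month"] = pd.to_datetime(df["Month"])
latest = df["Month"].max()
df = df[df["Month"] > latest - pd.DateOffset(months=12)]

# The 12 monthly observations for a single Item, e.g. Item "XXX":
print(df[df["Item"] == "XXX"].sort_values("Month"))
```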

3) You next compute the volatility of sales over the past year. Volatility is measured by the coefficient of variation (CV) of the monthly sales, which is defined as the standard deviation divided by the mean. For Item XXX, mean monthly sales over the past year were 102.5 units, with a standard deviation of 26.8. Therefore, CV = 26.8 / 102.5 ≈ 26.1%.
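
Continuing the pandas sketch above, the per-Item CV is just the standard deviation of monthly Sales divided by the mean:

```python
# Volatility per Item: coefficient of variation of monthly Sales,
# i.e. standard deviation / mean, expressed as a percentage.
# (pandas uses the sample standard deviation by default.)
cv = (
    df.groupby("Item")["Sales"]
      .agg(lambda s: s.std() / s.mean())
      .mul(100)
      .rename("CV")
)
# An Item with mean monthly sales of 102.5 and standard deviation 26.8
# comes out at roughly 26%.
```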

4) You finally have to compute Forecast Accuracy for each Item. This requires two new computations, shown in the two rightmost columns: the Absolute Error in the forecast each month (i.e. |Forecast – Sales|) and the Maximum of Forecast or Sales in each month. Forecast Accuracy is then computed as 1 – (Sum of Absolute Errors) / (Sum of Maximums) = 1 – (150 / 1320) = 88.6%.
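
In the same illustrative sketch, the accuracy calculation sums the absolute errors and the month-by-month maximums across the year for each Item:

```python
import numpy as np

# Forecast Accuracy per Item:
#   1 - (sum of |Forecast - Sales|) / (sum of max(Forecast, Sales))
df["AbsError"] = (df["Forecast"] - df["Sales"]).abs()
df["MaxFS"] = np.maximum(df["Forecast"], df["Sales"])

sums = df.groupby("Item")[["AbsError", "MaxFS"]].sum()
accuracy = (1 - sums["AbsError"] / sums["MaxFS"]).mul(100).rename("Accuracy")
# For the Item XXX example: 1 - 150/1320 = 88.6%.
```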

5) Create the scatterplot by plotting the Forecast Accuracy and Volatility coordinates for each Item. The Forecast Accuracy metric will always fall between 0 and 100%, so the vertical axis can be scaled 0 – 100. The Volatility, however, can grow arbitrarily large. For clarity of presentation, you may want to ignore really extreme values of CV (unless there are a lot of them), run the horizontal axis only from 0 to 150% or 200%, and footnote how many Items are not shown. The point is to illustrate the overall volatility of your sales patterns, and to show the relationship with your ability to forecast.
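
Here is a hedged sketch of the plot itself, using matplotlib and the cv and accuracy series computed above. The 200% cap on the horizontal axis is just one reasonable choice, not a rule.

```python
import pandas as pd
import matplotlib.pyplot as plt

# One point per Item: Volatility (x) vs. Forecast Accuracy (y).
plot_df = pd.concat([cv, accuracy], axis=1)

# Cap the horizontal axis for readability and footnote what is cut off.
x_max = 200
hidden = int((plot_df["CV"] > x_max).sum())

plt.scatter(plot_df["CV"], plot_df["Accuracy"], s=10, alpha=0.5)
plt.xlim(0, x_max)
plt.ylim(0, 100)
plt.xlabel("Volatility (coefficient of variation, %)")
plt.ylabel("Forecast Accuracy (%)")
plt.title(f"Accuracy vs. Volatility ({hidden} Items with CV > {x_max}% not shown)")
plt.show()
```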

As long as you have the past year of sales data, along with the last year of forecasts, it is very easy to create this scatterplot. It is a simple way to visualize the extent of volatility issues with your sales, and the likely impact on your forecasting performance.

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

10 Comments

  1. Pingback: The “avoidability” of forecast error (Part 2) - The Business Forecasting Deal

  2. Pingback: The "avoidability" of forecast error (Part 2) supplychain.com

  3. Pingback: Forecast Value Added (FVA) – Series 1 Interview | Institute of Business Forecasting & Planning - IBF Blog

  4. Pingback: FVA Interview with Jonathon Karelse - The Business Forecasting Deal

  5. Pingback: FVA Interview with Jonathon Karelse - supplychain.com

  6. Hi, I want to create the comet chart in Excel, but I need several points with CV and MAPE. I would like to know if it is possible to get the data.

    • Mike Gilliland

      Hi Pablo, that comet chart in the blog is a photo of a hard copy comet chart that was produced several years ago and I no longer have access to the raw data. But it should be easy enough for you to create a chart with any time series data where you have forecasts and actuals (and therefore, can compute the CV and MAPE (or whatever forecast performance metric you use)). Note that I used "Forecast Accuracy" so the comet chart angles from upper left to lower right. If you use some measure of "forecast error" (such as MAPE), the chart will angle from lower left to upper right. --Mike

  7. Pingback: Changing the Paradigm for Business Forecasting (Part 10 of 12) - The Business Forecasting Deal

  8. Pingback: Forecasting research project ideas - The Business Forecasting Deal

  9. Pingback: A naive forecast is not necessarily bad - The Business Forecasting Deal
