The Coefficient of Variation for assessing forecastability


The Spring 2009 Foresight feature on assessing forecastability is a must-read for anyone who gets yelled at for having lousy forecasts. (It should also be read by those who do the yelling, but you’d have to be living in Neverland to believe that will ever happen.) As I promised in yesterday's guest post by Len Tashman, Editor of Foresight, here are a few comments on this topic.

Why is it that some things can be forecast with relatively high accuracy (e.g. the time of sunrise every morning for years into the future), while other things cannot be forecast with much accuracy at all, no matter how sophisticated our approach (e.g. calling heads or tails in the tossing of a fair coin)? Begin by thinking of behavior as having a structured, or rule-guided, or deterministic component, along with a random component. To the extent that we can understand and model the deterministic component, then (assuming we have modeled it correctly and the rule guiding the behavior doesn’t change over time) the accuracy of our forecasts is limited only by the degree of randomness.

Coin tossing gives a perfect illustration of this. With a fair coin, the behavior is completely random. Over the long term, our forecast (Heads or Tails) will be correct 50% of the time and there is nothing we can do to improve on it. Our accuracy is limited by the nature of the behavior – that it is entirely random.

While suffering from many imperfections (as Peter Catt rightly points out in his article), the Coefficient of Variation (CV) is still a pretty good quick-and-dirty indicator of forecastability in typical business forecasting situations. Compute CV based on sales for each entity you are forecasting over some time frame, such as the past year. Thus, if an item sells an average of 100 units per week with a standard deviation of 50, then CV = standard deviation / mean = 50 / 100 = 0.5 (or 50%).
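The calculation is simple enough to sketch in a few lines. This is a minimal illustration, not from the original post; the sales figures are hypothetical, chosen to roughly match the mean-100 / standard-deviation-50 example above.

```python
import numpy as np

def coefficient_of_variation(sales):
    """CV = standard deviation / mean of the sales history."""
    sales = np.asarray(sales, dtype=float)
    return sales.std(ddof=1) / sales.mean()

# Hypothetical weekly sales history for one item
weekly_sales = [100, 150, 50, 120, 80, 160, 40, 100]
print(f"CV = {coefficient_of_variation(weekly_sales):.2f}")
```

In practice you would run this per item (or per item/location combination) over a consistent history window, such as the trailing 52 weeks, so the CVs are comparable across the portfolio.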

It is useful to create a scatterplot relating CV to the forecast accuracy you achieve. In this scatterplot of data from a consumer goods manufacturer, there are roughly 5,000 points representing 500 items sold through 10 distribution centers (DCs). Forecast accuracy (0 to 100%) is along the vertical axis; CV (0 to 160%, truncated) is along the horizontal axis. As you would expect, with lower sales volatility (CV near 0), the forecast was generally much more accurate than for item/DC combinations with high volatility.

The line through this scatterplot is NOT a best-fit regression line. It can be called the “Forecast Value Added (FVA) Line” and shows the approximate accuracy you would have achieved using a simple moving average as your forecast model at each value of CV. The way to interpret the diagram is that for item/DC combinations falling above the FVA Line, this organization’s forecasting process was “adding value” by producing forecasts more accurate than a moving average would have achieved. Overall, this organization's forecasting process added 4 percentage points of value, achieving 68% accuracy versus 64% for the moving average. The plot also identifies plenty of instances where the process made the forecast worse (those points falling below the line), and these would merit further investigation.
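The moving-average benchmark behind the FVA comparison can be sketched as follows. This is a simplified illustration, not the post's actual methodology: the accuracy metric here is a MAPE-based one (100% minus mean absolute percentage error, floored at zero), which the post does not specify, and the data and window length are assumptions.

```python
import numpy as np

def moving_average_forecast(history, window=3):
    """One-step-ahead benchmark: forecast = mean of the prior `window` actuals."""
    return [float(np.mean(history[t - window:t])) for t in range(window, len(history))]

def accuracy(actuals, forecasts):
    """Accuracy as 100% minus mean absolute percentage error, floored at 0."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    ape = np.abs(actuals - forecasts) / actuals
    return max(0.0, 1.0 - ape.mean()) * 100

# Hypothetical actuals and the forecasts the organization's process produced
actuals = [100, 150, 50, 120, 80, 160, 40, 100]
process_forecasts = [95, 130, 70, 110, 90, 140, 55, 95]  # aligned to weeks 4..8

window = 3
benchmark = moving_average_forecast(actuals, window)       # forecasts for weeks 4..8
later_actuals = actuals[window:]
later_process = process_forecasts[window - 1:-1]           # same five weeks

fva = accuracy(later_actuals, later_process) - accuracy(later_actuals, benchmark)
print(f"FVA = {fva:.1f} percentage points")
```

A positive FVA means the process beat the naive benchmark, as in the 68%-versus-64% example above; a negative FVA (points below the line) means the process made the forecast worse than a simple moving average.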

Such a scatterplot (and use of CV) doesn’t answer the more difficult question – how accurate can we be? But I'm pretty convinced that the surest way to get better forecasts is to reduce the volatility of the behavior you are trying to forecast. While we may not have any control over the volatility of our weather, we actually do have a lot of control over the volatility of demand for our products and services. More about this another time…


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), and editor of Business Forecasting: Practical Problems and Solutions. He is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting software. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.


  1. Pingback: The “avoidability” of forecast error (Part 1) - The Business Forecasting Deal


  3. Anthony Gouveia on

    Hi Mike,

    Good article. It's been a long time since our days at Answer Think. I'm trying to introduce the CV concept in my current company to assess the forecastability of a product and ss. I'm looking at the literature for various CV forecastability thresholds. We find ourselves using the same ss calculation for all demand behavior. Products with highly intermittent demand (thus high CV) are killing us (i.e. periods of product shortages and other periods of high inventory). Any suggestions?

  4. Pingback: Forecasting research project ideas - The Business Forecasting Deal
