Forecast Value Added: A Reality Check on Forecasting Practices


If an organization is spending time and money to have a forecasting process, is it not reasonable to expect the process to make the forecast more accurate and less biased (or at least not make it any worse!)? But how would we ever know what the process is accomplishing?

To find out, register for this quarter's installment of the Foresight-SAS webinar series:

Forecast Value Added Analysis: A Reality Check on Forecasting Practices (Thursday June 20, 11:00am EDT)

This webinar is based on an article appearing in the Spring 2013 issue of Foresight: The International Journal of Applied Forecasting. The editors of Foresight have graciously made the article available for free download: Foresight FVA Article.

Join us Thursday. Meantime, here is a short preview...

---------------------------

In our jobs and in our lives we have to make decisions about the future. These decisions are based on some expectation (or "forecast") of what the future will bring. So if we expect demand for Product X to be 10,000 units per month for the rest of 2013, we'll make decisions about production or procurement of inventory, how we will distribute it, how much we'll sell it for, and many others.

To make the best decisions, it helps to have a good forecast (one that is high in accuracy and low in bias). To achieve good forecasting, organizations invest time and resources in a forecasting process.

The most basic process involves forecasting software and an analyst to monitor the software and perhaps provide manual overrides to the computer-generated forecast. In the more elaborate processes we find in larger enterprises, there may be multiple process steps and participants (from sales, marketing, finance, operations, and the executive suite) who review and provide their own adjustments to the forecast.

While it may sound great in theory, an elaborate forecasting process with many human touch points can be a case of "too many cooks in the kitchen." We may assume that every participant has something worthwhile to add, and that each forecast adjustment gets us closer to perfectly predicting the future. But how would we know?

Traditional forecasting performance metrics like accuracy, error (e.g. MAPE), or bias can tell us the magnitude of our forecasting imperfection. But they don't tell us how good we should be able to be, how efficient our process was at achieving the accuracy we achieved, or -- most important -- whether our process made the forecast any better at all.
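To make these traditional metrics concrete, here is a minimal sketch of MAPE and bias in Python. The demand numbers are illustrative assumptions, not data from the article:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error (lower is better)."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def bias(actuals, forecasts):
    """Mean signed error; positive means we over-forecast on average."""
    return sum(f - a for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical monthly demand and forecasts for Product X
actuals   = [100, 120, 90, 110]
forecasts = [110, 115, 100, 105]

print(f"MAPE: {mape(actuals, forecasts):.1f}%")   # magnitude of error
print(f"Bias: {bias(actuals, forecasts):+.1f}")   # direction of error
```

Note what these numbers cannot tell you: a MAPE of 7.5% might be excellent for a volatile product and terrible for a stable one, and neither metric reveals whether all the process steps were worth the effort.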

Forecast Value Added (FVA) is the metric adopted by many organizations to evaluate the effectiveness of their forecasting process. What is sometimes discovered, after conducting FVA analysis, is that all of our heroic efforts just made the forecast worse. The touch points for human intervention, instead of incorporating knowledge that made the forecast better, simply allowed process participants to add their own biases and personal agendas. But without doing FVA analysis, you'd never know.
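The core idea of FVA can be sketched in a few lines: compare each process step's error against a naive benchmark (here, last period's actual carried forward). The data and the choice of naive method are illustrative assumptions for this sketch, not figures from the article:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical actual demand for five periods
actuals = [100, 120, 90, 110, 105]

# Naive benchmark: last period's actual carried forward
# (105 is an assumed prior-period actual to seed the series)
naive = [105] + actuals[:-1]

# Hypothetical final forecast after all process steps and overrides
final = [102, 112, 98, 104, 108]

# FVA = naive error minus process error:
# positive means the process added value, negative means it made things worse
fva = mape(actuals, naive) - mape(actuals, final)

print(f"Naive MAPE: {mape(actuals, naive):.1f}%")
print(f"Final MAPE: {mape(actuals, final):.1f}%")
print(f"FVA: {fva:+.1f} percentage points")
```

In a fuller analysis, the same comparison would be run at each touch point (statistical forecast vs. naive, analyst override vs. statistical forecast, consensus vs. override), so every participant's contribution can be measured.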



About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is author of The Business Forecasting Deal (the book), and editor of Business Forecasting: Practical Problems and Solutions. He is a longtime business forecasting practitioner, and currently Product Marketing Manager for SAS forecasting software. Mike serves on the Board of Directors of the International Institute of Forecasters, and received the 2017 Lifetime Achievement in Business Forecasting Award from the Institute of Business Forecasting. He initiated The Business Forecasting Deal (the blog) to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
