Incorporating demand planner knowledge


Can you explain the "random error" in your forecasts?

This question was posed two weeks ago by Sam Iosevich, Managing Principal at Prognos, during his presentation at the INFORMS Conference on Business Analytics and Operations Research. Sam stated that if your planners have knowledge that helps explain the "random error" in your forecasts, then that error is not completely random, and such knowledge should be incorporated into your forecasting model.

Incorporating Demand Planner Knowledge into the Statistical Forecast

In his book Demand-Driven Forecasting (pp. 6-7), my colleague Charlie Chase asserted that "...there is no art to forecasting, but rather statistics and domain knowledge." He argued that we cannot turn a gut feeling into a number, but instead must "...access the data and conduct the analytics to validate [our] assumptions."

Charlie, like Sam, is avowing a scientific approach to forecasting: develop a hypothesis, find the data, and conduct the analysis to determine whether you can reject the hypothesis. The results are then used to adjust the statistical forecast, or better yet, "...those assumptions into the statistical baseline forecast by adding the additional data and revising the analytics."

In his presentation, Sam showed there are two types of domain knowledge:

  • Explanative -- insight into a demand anomaly in the past.
  • Predictive -- insight into the future, which may or may not have historical precedent.

For example, a flood (or other natural disaster) or competitive activity in the past can be modeled to help establish the "true" baseline demand in the historical period. This is "explanative" domain knowledge.
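As a minimal sketch of the idea (using synthetic data and illustrative numbers, not Sam's actual method), a past anomaly like a flood can be encoded as an indicator variable in a regression, so that the intercept recovers the "true" baseline demand instead of being dragged down by the disrupted period:

```python
import numpy as np

# Synthetic monthly demand: a flat baseline of 100 units plus noise,
# with a flood in month 6 that cut demand by roughly 50 units.
rng = np.random.default_rng(42)
demand = 100 + rng.normal(0, 2, size=24)
demand[6] -= 50  # the anomaly

# A naive baseline estimate ignores the anomaly and is pulled down by it.
naive_baseline = demand.mean()

# Explanative model: regress demand on an intercept plus a dummy
# variable marking the flood month, via ordinary least squares.
flood_dummy = np.zeros(24)
flood_dummy[6] = 1.0
X = np.column_stack([np.ones(24), flood_dummy])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
true_baseline, flood_effect = coef

print(f"naive baseline:         {naive_baseline:.1f}")
print(f"modeled baseline:       {true_baseline:.1f}")
print(f"estimated flood effect: {flood_effect:.1f}")
```

The dummy variable absorbs the flood's impact, so the intercept is effectively the average of the undisrupted months.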

Past (and planned future) promotional activity can be modeled to establish the baseline historical demand (and promotional lift), and be incorporated into future forecasts. This is an example of "explanative and predictive" domain knowledge.
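A hedged sketch of that dual use (again with synthetic data and made-up numbers): the promotion indicator is fit on history to separate baseline from lift (explanative), and the same fitted coefficients are applied to planned future promotions (predictive):

```python
import numpy as np

# Synthetic weekly demand: baseline ~200 units plus ~60 units of lift
# in promotion weeks (a promotion runs every 8th week).
rng = np.random.default_rng(7)
n = 52
promo = np.zeros(n)
promo[::8] = 1.0
demand = 200 + 60 * promo + rng.normal(0, 5, size=n)

# Fit baseline and promotional lift jointly by least squares.
X = np.column_stack([np.ones(n), promo])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
baseline, lift = coef

# The same knowledge becomes predictive: a promotion is planned
# for week 4 of the 8-week forecast horizon.
future_promo = np.zeros(8)
future_promo[3] = 1.0
forecast = baseline + lift * future_promo
print(forecast.round(1))
```

The fitted lift is carried into the future only where a promotion is actually planned, rather than smeared across the whole horizon.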

Sam points out the benefits of properly incorporating domain knowledge:

  • Increase statistical rigor
  • Reduce judgmental overrides
  • Eliminate organizational bias
  • Facilitate mathematical optimization of supply chain planning

Achieving this is a four-step process:

  1. Identify - Not all domain knowledge can or should be modeled. Ask the organization which factors affect demand, then evaluate the effort versus the potential benefit of statistically modeling each one.
  2. Model - Capture the data and create a statistical model.
  3. Evaluate - Validate that the results of including the domain knowledge add value to the forecast.
  4. Embed - Incorporate the capture of value-adding domain knowledge into the demand planning process, and model it statistically.
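The "Evaluate" step can be illustrated with a simple holdout comparison (a sketch with synthetic data, not Sam's actual evaluation procedure): forecast the holdout period with and without the domain-knowledge variable, and keep the variable only if it reduces error:

```python
import numpy as np

# Synthetic weekly demand with a promotion every 10th week adding ~40 units.
rng = np.random.default_rng(1)
n = 104
promo = (np.arange(n) % 10 == 0).astype(float)
demand = 150 + 40 * promo + rng.normal(0, 8, size=n)

train, test = slice(0, 78), slice(78, n)

# Model A: history only (mean of training demand, no domain knowledge).
fc_a = np.full(n - 78, demand[train].mean())

# Model B: adds the promotion indicator as domain knowledge.
X = np.column_stack([np.ones(78), promo[train]])
coef, *_ = np.linalg.lstsq(X, demand[train], rcond=None)
fc_b = coef[0] + coef[1] * promo[test]

# Compare mean absolute error on the holdout period.
mae_a = np.abs(demand[test] - fc_a).mean()
mae_b = np.abs(demand[test] - fc_b).mean()
print(f"MAE without domain knowledge: {mae_a:.1f}")
print(f"MAE with domain knowledge:    {mae_b:.1f}")
```

If the domain-knowledge model does not beat the plain model on held-out data, that knowledge is a candidate for the "should not be modeled" bucket from step 1.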

For a more thorough explanation and details of this approach, you can reach Sam at Prognos.



About the Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

