Nine common forecasting errors

Professor Paul Goodwin of the School of Management at the University of Bath in England was the keynote presenter at the A2009 Analytics Conference in Copenhagen. He presented some interesting research that he and his colleagues have conducted on the way companies attempt to predict the future at various points in their supply chains. Surprisingly, the research found that even though companies invest in and use analytical software, they adjusted up to 93 percent of the forecasts the software generated. Goodwin's presentation used findings from the research to show the audience why they should let the computer, not the boss, make the forecast. He also outlined nine common forecasting errors:

1. Anchoring and adjusting – When making an estimate, people often start with an initial value, the anchor, and then adjust from it. But estimates tend to stay too close to the anchor, even when it is an implausible value. Anchoring can cause people to underestimate upward trends because they cling to the most recent value, an effect that is particularly severe for exponential trends.

Solution: Use statistical methods, rather than judgment, to forecast trends.
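
To make that solution concrete, here is a minimal sketch in Python of a statistical trend method, Holt's linear trend, extrapolating an upward series that an anchored judge would likely lowball. The demand series and the smoothing parameters are hypothetical.

```python
# A minimal sketch of letting a statistical method, rather than judgment,
# project a trend. This hand-rolled Holt's linear trend method is
# illustrative only; the smoothing parameters (alpha, beta) are assumptions.

def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Forecast `horizon` steps ahead with Holt's linear trend method."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Monthly demand with an upward trend that an anchored judge might underestimate.
demand = [100, 108, 118, 131, 145, 160]
print(holt_forecast(demand))  # projections continue the trend instead of hugging 160
```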

2. Seeing patterns in randomness – Human beings have a tendency to see systematic patterns even where none exist. People love storytelling and are brilliant at inventing explanations for random movements in graphs.

Solution: Don’t believe that you can do a better job than your forecasting system.

3. Putting your calling card on the forecast – People tend to make many small adjustments and only a few large ones to their forecasts. But small adjustments to the software's forecasts waste time and often reduce accuracy.

Solution: Only adjust for important reasons, and document the reasons.

4. Attaching too much weight to judgment relative to statistical forecasts – Even though evidence shows that judgment is less accurate than statistical forecasting, people continue to rely on their judgment.

Solutions: Have more confidence in statistical methods, and adjust them only when you are sure it is absolutely necessary. Or take a simple average of the statistical forecast and your independent judgmental forecast so that they are equally weighted.
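
A minimal sketch of that second suggestion, with hypothetical numbers:

```python
# Weight the statistical forecast and the independent judgmental forecast
# equally by taking a simple average. The figures are hypothetical.

statistical_forecast = 210.0   # from the software
judgmental_forecast = 250.0    # your own independent estimate

combined = (statistical_forecast + judgmental_forecast) / 2
print(combined)  # 230.0: each source gets equal weight
```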

5. Recency bias – Companies often don't want to use data from more than a few years back because the trends were different then. But the statistical methods embedded in the software need plenty of data to give reliable forecasts.

Solution: Give your software a chance; then you might not need to adjust its forecasts. Many statistical methods are designed to adapt to changes in trends, and they are far less likely than human judges to see false new trends in recent data.

6. Optimism bias – Psychologists say people have an innate bias toward optimism. Optimism may be a sign of good mental health, but it hurts forecasts: negative (downward) adjustments turn out to be more successful than positive (upward) ones.

Solutions: Break complex judgments into smaller parts; for example, adjust for price reductions, promotions and new customers separately instead of making one total adjustment. And consider keeping a database of the estimated effects of past special events, such as sales promotions.
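
Here is a minimal sketch of that decomposition, with hypothetical component effects of the kind such a database might hold:

```python
# Break one complex judgmental adjustment into visible, reviewable parts,
# drawing estimated effects from a (hypothetical) database of past events.
# All figures are illustrative assumptions.

statistical_forecast = 1000

adjustments = {
    "price reduction": +120,   # average lift seen in past price cuts
    "promotion": +80,          # average lift from similar past promotions
    "new customers": +40,      # expected volume from newly signed accounts
}

adjusted_forecast = statistical_forecast + sum(adjustments.values())
print(adjusted_forecast)  # 1240: each component can be challenged separately
```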

7. Political bias – Adjustments to forecasts can be driven by internal politics: overly optimistic forecasts, for example, can make marketing managers look good.

Solutions: Require adjustments to be documented, and review their effect on accuracy. Argue for relying on statistical methods for forecasts unless very special circumstances apply.
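
One way to act on this, sketched below with hypothetical records, is to log each adjustment with its documented reason and periodically compare the error of the adjusted forecasts against the statistical ones:

```python
# Review whether documented adjustments actually helped by comparing the
# error of statistical and adjusted forecasts against actual sales.
# All records here are hypothetical.

records = [
    # (statistical forecast, adjusted forecast, actual, documented reason)
    (200, 260, 205, "sales team expected a big quarter"),
    (150, 180, 148, "new campaign launching"),
    (300, 310, 320, "distributor restocking"),
]

stat_error = sum(abs(s - a) for s, _, a, _ in records) / len(records)
adj_error = sum(abs(j - a) for _, j, a, _ in records) / len(records)
print(f"mean absolute error: statistical={stat_error:.1f}, adjusted={adj_error:.1f}")
# If the adjusted error is consistently worse, the optimistic adjustments
# are hurting accuracy and the documented reasons deserve scrutiny.
```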

8. Confusing forecasts with decisions – A forecast is a best estimate of what will happen in the future: “I think we’ll sell 200 units.” A decision is a number designed to achieve one or more objectives: “I think we should produce 250 units in case demand is unexpectedly high, to balance the possibility of lost sales with the costs of holding more stock.”

Solution: Separate the forecast from the decision. Obtain your forecast first, label it as a forecast and then use it as a basis for your decision.
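
A minimal sketch of that separation, with a hypothetical safety buffer standing in for whatever stock policy the business actually uses:

```python
# Keep the forecast (best estimate of demand) distinct from the decision
# (a number chosen to meet objectives). The buffer rule is an illustrative
# assumption, not a prescription from Goodwin's talk.

forecast = 200          # best estimate of demand: "what we think will happen"
safety_buffer = 50      # extra units chosen to hedge against high demand

production_decision = forecast + safety_buffer
print(f"forecast: {forecast} units, decision: produce {production_decision} units")
```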

9. Group biases – Having statistical forecasts adjusted by a group of people can be dangerous, because most people don't feel comfortable going against a group decision.

Solution: Use the Delphi method, in which panelists provide their forecasts individually and privately. The results of the polling are tallied, and summary statistics are fed back to the panel. The panel is then re-polled, and the process repeats until a consensus emerges; the median estimate is used as the forecast.
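
Below is a minimal sketch of a Delphi-style process. The convergence rule and the assumption that panelists revise partway toward the fed-back median are simplifications for illustration:

```python
# A Delphi-style polling round: panelists submit forecasts privately, a
# summary statistic (the median) is fed back, panelists revise, and polling
# repeats until the estimates converge. The revision behavior and stopping
# rule here are simplified assumptions.

from statistics import median

def delphi(estimates, rounds=10, tolerance=5, pull=0.5):
    """Return the median forecast once the panel's spread falls within tolerance."""
    for _ in range(rounds):
        if max(estimates) - min(estimates) <= tolerance:
            break
        mid = median(estimates)
        # Each panelist privately revises, moving partway toward the fed-back median.
        estimates = [e + pull * (mid - e) for e in estimates]
    return median(estimates)

panel = [180, 220, 250, 300, 205]  # hypothetical individual forecasts
print(delphi(panel))  # consensus forecast: the median after convergence
```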

About Author

Waynette Tubbs

Editor, Marketing Editorial

Waynette Tubbs is a seasoned technology journalist specializing in interviewing and writing about how leaders leverage advanced and emerging analytical technologies to transform their B2B and B2C organizations. In her current role, she works closely with global marketing organizations to generate content about artificial intelligence (AI), generative AI, intelligent automation, cybersecurity, data management, and marketing automation. Waynette has a master’s degree in journalism and mass communications from UNC Chapel Hill.
