The beatings will continue until forecast accuracy improves

Question: What is the maximum level of accuracy with which you can predict the toss of a fair coin?

Answer:  50%

It does not matter that you’ve set a mandatory minimum forecast accuracy level of 90%, or even 60%.  There is no incentive, no bonus, no allocation of restricted stock options that could make me forecast a coin toss at better than 50%, and no threat or punishment that could either.  The only thing threats accomplish is to incent ever more clever ways to tamper with the coin; to cheat.

Which brings us to this important forecasting principle: forecast accuracy is first and foremost a property of the data itself.  At the extreme, some data is entirely random, and thus completely unforecastable.  You cannot (or at least, should not) simply set an arbitrary forecast accuracy target without first understanding the nature of the data.  In my own experience I’ve seen achievable accuracy on financial data range from 60% for fickle consumer/retail/fashion revenues, to 95%+ for certain types of costs and maintenance/recurring revenues, and everything in between.
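
To make that concrete, here is a small simulation (my own illustration in Python with numpy, not drawn from any real dataset).  No rule, however clever, beats roughly 50% on a fair coin, while a series with genuine seasonality is highly forecastable even with a naive “same month last year” rule:

```python
# Illustrative sketch: forecast accuracy is capped by the structure in the data.
import numpy as np

rng = np.random.default_rng(42)

# 1. A fair coin: the best any strategy can achieve is ~50% hit rate.
tosses = rng.integers(0, 2, size=10_000)        # 0 = tails, 1 = heads
always_heads = np.ones_like(tosses)             # one "clever" strategy
follow_last = np.roll(tosses, 1)                # another: predict the previous toss
print("always heads:", (always_heads == tosses).mean())          # ~0.50
print("follow last :", (follow_last[1:] == tosses[1:]).mean())   # ~0.50

# 2. A seasonal revenue-like series: even a naive seasonal forecast scores well.
months = np.arange(120)
series = 100 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, size=120)
naive_seasonal = np.roll(series, 12)            # "same month last year"
mape = np.mean(np.abs((series[12:] - naive_seasonal[12:]) / series[12:]))
print("seasonal series accuracy:", round(1 - mape, 3))           # typically 0.95+
```

The 50% ceiling is not a property of the forecaster; it is a property of the coin.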

But that doesn’t mean there are no improvements to be made.  Indeed, I can think of three levels of attack, beginning with your forecasting tools and techniques.  If your primary forecasting tool is the spreadsheet, you are limited to some rather simple approaches that leave much to be desired (two of them are sketched in code just after this list):

  • Last year/month plus a percentage
  • Hold-and-Roll, where you hold to the full year budget and roll the current quarter shortfall into the next quarter (or split it between the remaining quarters)
  • A simple linear extrapolation from today’s starting point to an end point that gives you the full-year answer you need.
  • Internally circular driver-based forecasting (i.e. no independent, external drivers).
  • Lots of judgment, judgment, and more judgment.
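
For illustration, here is roughly what two of those spreadsheet approaches amount to when written out in code (a Python sketch with invented monthly figures; the 5% uplift and the $1,400 full-year target are assumptions, not anyone’s actual budget):

```python
# Rough sketch (invented numbers) of two spreadsheet-style approaches:
# "last year plus a percentage" and a straight-line ramp to a full-year target.
import numpy as np

last_year = np.array([100, 95, 110, 120, 105, 98, 102, 115, 125, 130, 118, 140.0])
actuals_ytd = np.array([104, 99, 112.0])              # three months booked so far

# 1. Last year plus a percentage: a flat uplift applied to every month.
uplift = 0.05                                         # assumed 5% growth
forecast_plus_pct = last_year * (1 + uplift)

# 2. Linear extrapolation from today's run rate to whatever end point
#    makes the full year add up to the number leadership expects.
full_year_target = 1400.0
remaining_total = full_year_target - actuals_ytd.sum()
remaining_months = 12 - len(actuals_ytd)
start = actuals_ytd[-1]
end = 2 * remaining_total / remaining_months - start  # ramp averages out to the need
forecast_linear = np.linspace(start, end, remaining_months)

print(forecast_plus_pct.round(1))
print(forecast_linear.round(1), round(forecast_linear.sum(), 1))
```

Notice that neither method looks at the structure of the data at all; the second exists purely to make the year add up to a predetermined number.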

With such standard approaches you are likely to be forecasting 5-10% less accurately than you theoretically could.  The application of rigorous analytical forecasting methods that statistically derive trend and seasonality and identify exceptional events can improve your current forecast accuracy by roughly 10% (e.g. from 60% to 66%).
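
As a hedged sketch of what that can look like, the snippet below fits a Holt-Winters exponential smoothing model (one common way to derive trend and seasonality statistically; the choice of the open-source statsmodels library and the synthetic series are mine, not a prescription) and compares its holdout accuracy, measured as one minus MAPE, against a naive same-month-last-year forecast:

```python
# Assumed example: statistically derived trend + seasonality vs. a naive forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
idx = pd.date_range("2015-01-31", periods=72, freq="M")
trend = np.linspace(100, 160, 72)
season = 15 * np.sin(2 * np.pi * np.arange(72) / 12)
y = pd.Series(trend + season + rng.normal(0, 5, 72), index=idx)

train, test = y[:-12], y[-12:]                      # hold out the last year

naive = train[-12:].to_numpy()                      # "same month last year"
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
stat_fc = model.forecast(12).to_numpy()

def accuracy(actual, forecast):
    return 1 - np.mean(np.abs((actual - forecast) / actual))   # 1 - MAPE

print("naive accuracy      :", round(accuracy(test.to_numpy(), naive), 3))
print("statistical accuracy:", round(accuracy(test.to_numpy(), stat_fc), 3))
```

The exact lift will depend entirely on how much structure your data actually contains, which is the point of the coin-toss principle above.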

The next place to attack is the data quality.  Fixing problems of missing and misattributed data from disparate information systems and data silos can potentially have almost as big an impact as the use of analytical forecasting techniques itself.  Incompatible ERP data, bundled offerings that obscure crucial bill-of-material details, missing variables, or the application of generic attributes at summary levels can have a huge negative impact on your ability to produce an accurate, workable forecast.
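
A simple illustration of that kind of data-quality pass, with hypothetical table, column and category names of my own invention:

```python
# Hypothetical data-quality checks run before any forecasting model sees the data.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5],
    "region":   ["EMEA", "EMEA", None, "APAC", "UNKNOWN"],
    "product":  ["A100", "A100", "B200", "BUNDLE", "B200"],
    "revenue":  [1200.0, None, 950.0, 4300.0, 875.0],
})

issues = pd.DataFrame({
    "missing_region":  orders["region"].isna(),
    "generic_region":  orders["region"].eq("UNKNOWN"),   # summary-level catch-all attribute
    "missing_revenue": orders["revenue"].isna(),
    "bundled_product": orders["product"].eq("BUNDLE"),   # obscures bill-of-material detail
})

print(orders[issues.any(axis=1)])                         # records to fix at the source
print("rows with issues:", int(issues.any(axis=1).sum()), "of", len(orders))
```

The point is to flag and fix these records at the source, rather than asking the forecasting model to work around them.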

Having accomplished those two tasks, we are left with the most difficult level of attack, but often the one with the most potential for improvement – the data itself, its sources, and the behaviors that contribute to “unforecastable” data.  This especially applies to “human” source forecasts from sales, marketing and operations.  What sort of biases might be present in such judgment-laden forecasts?

  • Sandbagging
  • Forecasting to the wall
  • Expectations of overachievement/stretch goals
  • Over-optimism/overconfidence
  • Poor awareness or assessment of risks
  • Hiding problems in the hope that they will correct themselves later
  • Accounting or data recording inadequacies
  • Failure to follow policy and procedure
  • Information-hoarding and a lack of communication
  • A whole host of other untested assumptions
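
Some of those biases are detectable with a very simple check: compare submitted forecasts to actuals by submitter and look at the signed error.  The sketch below uses invented team names and numbers purely to show the mechanics:

```python
# Assumed illustration: persistent signed forecast error by submitter.
import pandas as pd

df = pd.DataFrame({
    "team":     ["sales-east"] * 4 + ["sales-west"] * 4,
    "forecast": [90, 95, 88, 92, 120, 118, 125, 122],
    "actual":   [105, 110, 101, 108, 100, 96, 104, 99],
})

df["pct_error"] = (df["forecast"] - df["actual"]) / df["actual"]

bias = df.groupby("team")["pct_error"].agg(["mean", "std"])
print(bias.round(3))
# sales-east: mean error well below zero -> consistent under-forecasting (sandbagging)
# sales-west: mean error well above zero -> consistent over-optimism
```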

I recently had a discussion with a director of IT services at a high-tech firm that needed to improve the accuracy of its forecasts of IT chargebacks from its IT shared services center to the business units, where the current level of accuracy was a dismal 70%.  He knew they could do better, and wanted my advice on how best to hold the forecast analysts accountable – essentially a “beat it out of them” management style.  While I agreed that he should be able to easily achieve a forecast accuracy of 90% or better with costs of this type, I disagreed that the cause was likely to be lazy or incompetent forecast analysts – not with a discrepancy that wide.  I suggested that his forecast analysts were probably having data problems straight from the source.  Perhaps the capital budget process was being circumvented, or subcontractors were being used haphazardly, or overruns and behind-schedule projects were being deliberately disguised as still being on track.  He didn’t quite get it until I explained about predicting the coin toss – hopefully my intervention has forestalled some carnage among his forecast analysts for the time being.  [What I didn’t have the nerve to tell him was that his own management style was likely a contributing factor, as his department heads and program managers did their best to hide bad news from him: “Oh my goodness – we missed the forecast by 30%!!  Again! How did finance let this happen?”]

All of this, however, is still just prelude to what you do with that forecast, which is to “plan” and make decisions.  We don’t forecast just for the sake of having something against which to measure actuals at the end of the period.  The forecast feeds and informs the “plan” (see the diagram in my prior post, “Rolling Forecasts,” depicting the relationship between the forecast, the plan and the budget).  There should be no knee-jerk reactions to changing forecasts.  We don’t call it Sales and Operations “Planning” for nothing.  Forecast accuracy is just a way station on the road to the really important metric – performance against plan.

A key part of that plan is – What are you going to do about the random portion of your forecast?  Let’s say you are forecasting at the theoretical maximum accuracy for the revenue data you are working with, say 80%.  The real question is – what are you going to do about the other 20%, which might be 20% over forecast, or 20% under forecast, or anywhere in between?  What actions are you going to take, what contingencies have you planned for?

Will you build or draw down inventory?  Change pricing?  Allocate to favored customers?  Absorb expedite costs?  Overtime or a second shift?  Can you shift demand, or shape it?   The airlines have become masters at demand shifting, changing prices in real time on different routes and offering travel vouchers on oversold flights.  Hotels have an even more difficult task, because while I can always wait to travel until tomorrow, I cannot wait until tomorrow to sleep, hence the emergence of the non-refundable first-night deposit a decade ago.  When it comes to non-perishable inventory, we see demand shifting approaches like product substitution and in-store coupons regularly employed in order to manage the inherent uncertainty in the forecast.
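
One simple, quantitative way to prepare for that residual uncertainty (a sketch under assumed numbers, not a prescription from this post) is to size a buffer from the empirical distribution of your own past forecast errors:

```python
# Assumed sketch: turn historical forecast error into a service-level buffer.
import numpy as np

rng = np.random.default_rng(11)
forecast = 10_000                               # units, next-period demand forecast
past_errors = rng.normal(0, 0.10, 36)           # 36 periods of relative forecast error

service_level = 0.95                            # how often we want to avoid a stockout
buffer_pct = np.quantile(past_errors, service_level)
safety_stock = max(0.0, buffer_pct * forecast)

print(f"95th-percentile error: {buffer_pct:+.1%}")
print(f"plan: stock the forecast plus {safety_stock:,.0f} units of buffer")
```

The same error distribution can just as easily drive a pricing trigger, an expedite budget or an overtime decision as an inventory buffer.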

The smaller the random component, of course, the easier it is for the enterprise to meet plan, so by all means, utilize analytical forecasting and data quality techniques to improve forecast accuracy.  Once your forecast is as good as you can get it, shift your focus to developing and deploying better options for dealing with the inevitable variability.  In this direction lies profitability and customer satisfaction.


About Author

Leo Sadovy

Marketing Director

Leo Sadovy currently manages the Analytics Thought Leadership Program at SAS, enabling SAS’ thought leaders to act as catalysts for conversation and to share a vision and opinions that matter, via excellence in storytelling that addresses our clients’ business issues. Previously at SAS, Leo handled marketing for Analytic Business Solutions such as performance management, manufacturing and supply chain. Before joining SAS, he spent seven years as Vice-President of Finance for a North American division of Fujitsu, managing a team focused on commercial operations, alliance partnerships, and strategic planning. Prior to Fujitsu, Leo was with Digital Equipment Corporation for eight years in financial management and sales. He started his management career in laser optics fabrication for Spectra-Physics and later moved into a finance position at the General Dynamics F-16 fighter plant in Fort Worth, Texas. He has a Master’s in Analytics, an MBA in Finance, a Bachelor’s in Marketing, and is a SAS Certified Data Scientist and Certified AI and Machine Learning Professional. He and his wife Ellen live in North Carolina with their engineering graduate children, and among his unique life experiences he can count a singing performance at Carnegie Hall.
