"A favorable product mix caused us to miss our forecast on the upside", said no one ever


As leaders and managers of human beings with million year-old brain structures, as part of our managerial toolkit we need to keep ourselves knowledgeable about psychology and the cognitive science of how people make decisions.  You have undoubtedly read about how innately bad we are at making certain types of decisions, especially those involving risk, probabilities and shifting time horizons.

(artwork by Igor Morski)

Part of the reason for this difficulty is the structure and function of our three-layered brain.  The complexity and size of the neocortex, especially the pre-frontal cortex, is a very recent evolutionary development.  Prior to this development, our mammalian ancestors still made decisions, but they did so relying heavily on the more intuitive, emotional limbic layer.  It would be fairly accurate to say that our emotions are simply a different way to make a decision.  Emotional decision-making is quick, can still be trained by experience and learning, and can be wired directly into the rapid motor responses that most likely saved the lives of our ancestors countless times, ancestors who faced far more binary decisions than probabilistic ones.  Gut-feel is really "brain-feel".

In fact, our decision-making process today is still surprisingly dependent on emotion-driven components that we cannot ignore.  Recent brain research using fMRI techniques shows that even our most advanced, complex decisions all correlate with brain activity in a limited number of sub-cortical structures: the striatum, the amygdala and the insula.  Our much-celebrated neocortex functions more as a tie-breaker than as an impartial computing arbiter of the truth, unless we specifically direct it to do so through attention and concentration.

That attention and concentration in turn requires source material to work with – it needs data, in a form factor it can process.  Today I want to discuss the data our brain needs to evaluate one of the most important aspects of business planning – The Forecast.

We need to treasure our forecast, we really do.  It’s the only piece of forward-looking data we have.  We have tons and tons of backward-looking data, but only a scant few lines per month that constitute the forecast.  Consider what you have invested in collecting and reporting on the past and present:  $10-$15 million-plus invested in ERP systems that dutifully record the movement of each and every cost element.  Big Data is largely historical data, quite important in its own right for identifying problems and root causes and for establishing the current status of the system or organization.  The forecast, on the other hand, is comparatively puny, a handful of rows in a spreadsheet or in a BP&F system.  So be it – we must make do with what we have, and make the best of it.

I have already dealt previously with certain transformational aspects of forecasting, such as driver-based forecasting, and utilizing different forecast sources and their appropriate time horizons, so I won’t repeat that here.  Instead I have a more radical proposition for you:  applying analytical tools and techniques to your forecasting process even if you are not going to use them for actually forecasting.

Let me explain.  Even if you choose in the end NOT to use any analytical tools to actually produce your forecast (unlikely, but not relevant to my point), there is still tremendous value in utilizing these analytical techniques to assess your overall forecasting process. I offer four examples to support my case.

First, assuming you are not using an analytical forecasting tool to produce your forecast, you must therefore be forecasting primarily, if not solely, based on business judgment.  Sales forecasts come in from the field based on the sales funnel and pipeline; product or store forecasts come in from merchandising, marketing or store management; marketing and product management provide their longer-range forecasts; the supply chain has its say; and in the end finance and the executive staff finesse the numbers one last time before it all becomes THE FORECAST.

How good is this FORECAST?  A simple question, really – how good is it?  It’s a fairly trivial exercise for an analytical software package to compare your history of judgment-based monthly sales, marketing and management forecasts to the actual results.  And I don’t mean the one-time comparison we do once a quarter, the one on which we incent various team members; I mean the entire range of 30-day, 90-day, 180-day, annual and eighteen-month rolling forecasts compared to what really transpired.  My guess is that anything outside of 90 days is probably a disaster for cash and production planning purposes, and that even inside 90 days it’s only the supply chain, with their ERP backlog and production data, that comes close enough for comfort.

So, my first recommendation:  Test your management forecast against reality.  You can still use it in the future if you want, but wouldn’t it be nice to know if it’s worth basing important decisions on?
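
If you’re wondering what that back-test might look like in practice, here is a minimal sketch in Python (pandas assumed; the file name and column names are hypothetical placeholders for your own forecast archive) that scores judgment-based forecasts against actuals, bucketed by forecast horizon:

```python
import pandas as pd

# Hypothetical input: one row per (forecast_date, target_month) pair, covering
# the 30/90/180-day, annual and rolling forecasts you already produce today.
# Assumed columns: forecast_date, target_month, forecast, actual
df = pd.read_csv("judgment_forecasts.csv",
                 parse_dates=["forecast_date", "target_month"])

# Horizon: days between when the forecast was made and the period it covers
df["horizon_days"] = (df["target_month"] - df["forecast_date"]).dt.days

# Bucket the horizons roughly into the ranges discussed above
bins = [0, 30, 90, 180, 365, 550]
labels = ["<=30d", "31-90d", "91-180d", "181-365d", "366-550d"]
df["horizon"] = pd.cut(df["horizon_days"], bins=bins, labels=labels)

# Absolute percentage error and signed bias for each individual forecast
df["ape"] = (df["forecast"] - df["actual"]).abs() / df["actual"].abs()
df["bias"] = (df["forecast"] - df["actual"]) / df["actual"].abs()

# MAPE and mean bias by horizon bucket; a positive bias means over-forecasting
summary = df.groupby("horizon", observed=True).agg(
    mape=("ape", "mean"), mean_bias=("bias", "mean"), n=("ape", "count"))
print(summary)
```

The bias column is worth watching as closely as the accuracy column: a persistent signed bias at the longer horizons is usually the signature of the gaming behaviors described next, not of honest uncertainty.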

As for my second recommendation, my guess is that you are not only going to find out that gut-level forecasts have a dismal track record for accuracy, but also that you will readily discern some basic reasons for this deficiency:

  • Forecasting to the wall
  • Sandbagging
  • Over-optimism, or as someone else has less generously put it: “Why get yelled at more than once for missing your annual target?”

You know how the story goes.  Everyone makes Q1, because the budget didn’t come out until March anyhow.  Q2 came up just a little short, but that will of course all be made up in Q3 and Q4.  When you ask why the Q3 miss, they remind you that they did say it might be until Q4 that things recover.  Finally, we get to Q4, the hockey stick moment of truth, when you sit them down in front of you and make them commit in blood to the inventory purchase orders you are about to sign based on their “forecast”.

If this story sounds familiar (October was the first time I would get a straight answer out of my sales teams), there are some steps you can take to remedy the situation, the primary one being to disassociate the forecast from the rest of the incentive system.  Recall where I place Forecasting in the overall scheme of the financial planning process: as an input to the PLAN, not connected directly to the budget.  Your forecast is tenuous enough as it is; don’t make it a completely worthless exercise by incenting people to hide the truth.  Make some headway in un-gaming the forecast and maybe you can still avoid using an analytical forecasting tool.

The forecast should be an input to the plan – this precept leads to my third suggestion:  Use an analytical forecasting tool to delineate your confidence boundaries.  You remember: one standard deviation covers about 68% of a normal distribution, two standard deviations cover about 95%.  Pick a couple of confidence levels to test against your historical data, say 90% and 70%, see what they look like, and use them to tie back into and guide your planning process – back into your Optimistic / Pessimistic scenarios and your Best Case / Worst Case scenarios.
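
As a rough illustration of what those boundaries might look like, here is a minimal sketch (hypothetical file and column names; simple normal-approximation bands built from year-over-year variability rather than from any particular forecasting model):

```python
import pandas as pd
from scipy import stats

# Hypothetical input: a monthly revenue history indexed by month
history = pd.read_csv("monthly_revenue.csv", parse_dates=["month"],
                      index_col="month")["revenue"]

# Year-over-year percentage changes as a crude measure of business variability
yoy = history.pct_change(12).dropna()
sigma = yoy.std()

# Normal-approximation planning bands at the two confidence levels above;
# roughly 68% of outcomes fall within one standard deviation, 95% within two
for conf in (0.70, 0.90):
    z = stats.norm.ppf(0.5 + conf / 2)   # about 1.04 for 70%, 1.64 for 90%
    print(f"{conf:.0%} planning range: +/- {z * sigma:.1%} "
          f"around a seasonal-naive (same month last year) forecast")
```

A dedicated forecasting tool will produce tighter, model-based intervals than this back-of-the-envelope version, but even the crude bands are usually wide enough to make the point that follows.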

One thing you will notice immediately is that your pessimism is not pessimistic enough, and that your worst case is not even close to being worthy of the title.  Your planning and budgeting are based on a most favorable product mix that has almost no chance of actually coming to pass.  There is more variability in your business than you realize; your intuitive, gut-level instinct as to how bad things can get in the normal course of business is just not as trustworthy as the data itself.  What is the data telling you?  How far afield does your own historical data say you might drift in 180 or 360 days?  Once again, you don’t have to use any analytical software for your next 18-month rolling forecast, but that same software can be quite valuable in helping you set proper planning ranges.
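
If you would rather let the history speak for itself than assume a normal distribution, a sketch along these lines (same hypothetical file as above; the planned worst case is a made-up figure for illustration) pulls the empirical tail straight from your own data:

```python
import pandas as pd

# Hypothetical input: the same monthly revenue history as above
history = pd.read_csv("monthly_revenue.csv", parse_dates=["month"],
                      index_col="month")["revenue"]

# How far has the business actually drifted over rolling 6- and 12-month spans?
for months, label in [(6, "180 days"), (12, "360 days")]:
    drift = history.pct_change(months).dropna()
    print(f"Over {label}: 5th percentile of historical change = "
          f"{drift.quantile(0.05):.1%}, 95th = {drift.quantile(0.95):.1%}")

# Compare that empirical tail with the worst case in your current plan
# (the -8% below is purely an illustrative planning assumption)
planned_worst_case = -0.08
print(f"Planned worst case: {planned_worst_case:.0%}")
```

If the 5th percentile of your own history sits well below the worst case in your plan, the data has answered the question for you.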

Lastly, once you’ve come to terms with your human decision-making frailties, there is one more application of analytical forecasting that once again doesn’t involve actually forecasting – disaggregate your forecast.  Consider the adjacent graphic showing the historical and forecasted behavior of two different products / SKUs / product groups.  Simply averaging them along with hundreds or thousands of others, and doing your planning based on that simple, consolidated average, means that you will significantly miss BOTH of them.  One will experience higher costs for stock-outs, expedite fees and backorders, while the other incurs unnecessarily high inventory carrying costs.  Even if you don’t forecast directly with the software, there is great, immediate cash value in using it to determine the proper level of disaggregation for your detailed forecast.
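
To make the disaggregation point concrete, here is a toy sketch with two entirely fabricated SKUs whose seasonality offsets; the consolidated average is flat, and planning to that average guarantees a sizable miss on each SKU individually:

```python
import numpy as np
import pandas as pd

# Toy, fabricated data: two SKUs with offsetting seasonal patterns
months = pd.period_range("2023-01", periods=24, freq="M")
t = np.arange(24)
demand = pd.DataFrame({
    "sku_a": 100 + 40 * np.sin(2 * np.pi * t / 12),   # peaks in spring
    "sku_b": 100 - 40 * np.sin(2 * np.pi * t / 12),   # peaks in autumn
}, index=months)

# Plan at the consolidated average and the plan sits flat at roughly 100 units
consolidated_plan = demand.mean(axis=1)

# The per-SKU miss is large: stock-outs on one side, excess stock on the other
miss = demand.sub(consolidated_plan, axis=0)
print("Average absolute monthly miss per SKU (units):")
print(miss.abs().mean().round(1))
```

Run against your real history at different levels of the product hierarchy, the same kind of comparison tells you where the averages stop being representative and where the disaggregated forecast has to take over.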

Having gotten this far, you should see the obvious value in using real data to challenge and test your intuitive, business-judgment-based forecasting process.  Having made the above corrections and refinements, you can still proceed with your customary judgment-based forecast, but if you want to take the data-driven approach even further, you can start with the user-friendly, wizard-driven process embedded in SAS Financial Management (FM).  If you’ve got a smattering of analytical skills among your staff, then you could perhaps further utilize the more robust capabilities of SAS Forecast Server in conjunction with FM’s budgeting, planning and forecasting capability.

I’m not trying to downplay the role or value of experience, judgment and intuition, just pointing out the obvious: they have their biases, biases that can be significantly mitigated by introducing hard data into the decision-making process.  As Donald Rumsfeld might have said, you don’t forecast with the brain you wish you had, you forecast with the data that you do have.


About Author

Leo Sadovy

Marketing Director

Leo Sadovy currently manages the Analytics Thought Leadership Program at SAS, enabling SAS’ thought leaders to serve as catalysts for conversation and to share a vision and opinions that matter, via excellence in storytelling that addresses our clients’ business issues. Previously at SAS, Leo handled marketing for Analytic Business Solutions such as performance management, manufacturing and supply chain. Before joining SAS, he spent seven years as Vice-President of Finance for a North American division of Fujitsu, managing a team focused on commercial operations, alliance partnerships, and strategic planning. Prior to Fujitsu, Leo was with Digital Equipment Corporation for eight years in financial management and sales. He started his management career in laser optics fabrication for Spectra-Physics and later moved into a finance position at the General Dynamics F-16 fighter plant in Fort Worth, Texas. He has a Masters in Analytics, an MBA in Finance, a Bachelor’s in Marketing, and is a SAS Certified Data Scientist and Certified AI and Machine Learning Professional. He and his wife Ellen live in North Carolina with their engineering graduate children, and among his unique life experiences he can count a singing performance at Carnegie Hall.
