What do men really want? What do women really want? If you seek answers to these eternal questions, go watch Oprah or Dr. Phil – I don’t really care. They are not nearly as interesting as the question: What do forecasting software buyers really want?
Organizations spend millions of dollars every year on new forecast software, purportedly to generate more accurate forecasts. But are demonstrable accuracy improvements really required to close a sale? Must the software actually be good at forecasting? Or is the buying decision driven by other interests and motivations?
My colleague Mike Leonard, R&D Manager for SAS forecasting software, provided an interesting perspective during his keynote address last week at F2009. In his presentation “Why Bad Forecasting Support Systems are Bought and Sold,” Mike argued that the IT professionals and end users responsible for choosing forecasting software rarely check for accuracy. Instead, they are more concerned with whether the software allows them to control the data (IT) or control the forecast (users). IT does not want anything involving software development or allowing data exploration – their power base is control of information. Users don’t want anything that requires learning new skills or hiring analytical talent – they want to maintain the power to control the forecast and bias it in their favor. Both IT and users simply want software that “checks all the boxes” on the requirements list, and that can be blamed if the forecast is poor.
The political nature of the forecasting function is well understood. The forecast should be an objective, dispassionate, and unbiased “best guess” at what is really going to happen in the future. What often happens instead is that the ones controlling the forecast manipulate the numbers to drive behavior in their favor. If you are pitching a new product idea to management, you probably aren’t going to forecast it will fail in the marketplace (although that would be the “best guess” for most new product introductions). Your forecast will, instead, meet the minimum volume hurdle your organization uses for getting new products approved. There are similar biases throughout the organization.
• During quota setting time, sales reps may forecast low so that low quotas will be set and they can more easily make their bonuses.
• During the rest of the year, sales reps may forecast high to drive higher inventory levels (and lessen the chance of lost sales due to stockouts).
• Manufacturing and procurement may forecast low to drive lower inventory levels (thereby improving their performance against objectives), and to give themselves an excuse if sales come in higher and they can’t fill orders.
• Executives may forecast high internally (within their organization), using the forecast as a target or stretch goal to motivate behavior.
• Executives may forecast low externally, to manage investor expectations and look like heroes at quarter end.
Is there anyone you can trust for an honest forecast? Probably not. Anyone with authority to touch a forecast may seek to place a favorable (to them!) spin on the numbers.
While software buyers may be more interested in controlling the forecast to serve their personal agendas than in improving accuracy, what about software vendors? Is there any incentive for vendors to develop software that actually follows the principles of good forecasting? Mike Leonard addressed this topic as well.
Mike argued that bad software, by having fixed data models and no capability for data exploration, makes no intrusion on the IT power base – so IT will like it. By offering no capability for interactive modeling, it also appeals to users, who will not have to learn anything new or hire analysts. And by making it easy to apply manual adjustments and bias the forecast, it appeals to users even more – there is no intrusion on their power base, control over the forecast.
While this is an admittedly “overly cynical” view of forecasting – one that clearly does not apply to all buyers and sellers – I am sympathetic to Mike’s arguments. Software vendors cannot be entrusted to implement good forecasting practices (such as those found in the Forecasting Principles), as there may be no financial incentive to do so. (It is easier to sell a fairy tale than to sell harsh reality.) So what should buyers do? Next week I’ll get into specific issues in the software selection process, and some of the dirty tricks of selling.