Real vs. Perceived Implementation Failures


An alarming percentage of major software implementations fail to be delivered on time, on budget, or even at all. Implementations of new forecasting software, or of new forecasting processes, are not immune from this legacy of failure. Why does this happen, and is there anything we can do about it?

A project that does not deliver what was expected (or promised) will be perceived as a failure. A major cause of perceived forecasting failures is the mismanagement of expectations about what forecasting can realistically deliver. We all dream of an automated system that provides unbiased and highly accurate forecasts, no matter how unruly the behavior we are asking it to forecast. But this is unrealistic. There are good automated forecasting systems that provide significant benefits by reducing the costs of forecasting—minimizing the analyst time required, and eliminating the need for elaborate processes—and freeing the organization to focus on its most critical and high-value forecasts. But even the best system can only forecast as accurately as the nature of the behavior allows. No software can correctly call the toss of a fair coin 60% of the time, no matter how costly and complex that software may be.
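To make that limit concrete, here is a minimal, hypothetical Python sketch (not from the original post): it simulates a fair coin and scores a few prediction strategies, all of which land near 50% accuracy no matter how they are dressed up.

```python
# Hypothetical illustration: on a fair coin, no prediction strategy
# (however sophisticated) sustains much more than 50% accuracy.
import random

random.seed(42)
N = 100_000
tosses = [random.choice("HT") for _ in range(N)]

def hit_rate(predict):
    """Fraction of tosses called correctly; predict() sees only the prior toss."""
    hits, prev = 0, None
    for actual in tosses:
        if predict(prev) == actual:
            hits += 1
        prev = actual
    return hits / N

strategies = {
    "always call heads": lambda prev: "H",
    "repeat last toss":  lambda prev: prev if prev else "H",
    "random guess":      lambda prev: random.choice("HT"),
}

for name, strategy in strategies.items():
    print(f"{name:17s} {hit_rate(strategy):.3f}")  # each prints roughly 0.500
```

The point of the toy example is not the strategies themselves but the ceiling: when the underlying behavior is essentially random, added sophistication in the forecasting engine cannot buy accuracy that the data does not support.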

Perceptions of failure aside, there are also true failures. A true failure occurs when the software:

- Never gets installed.
- Is installed (a success from the information technology (IT) standpoint) but is never used.
- Is installed and used—but simply cannot forecast to the degree of accuracy expected and desired.

Forecasting software never gets installed for a variety of reasons. Once the software is purchased, management’s attention may be drawn to other issues, and the purchase gets relegated to “shelf-ware” with no attempt at implementation. Or issues may arise during installation, most commonly related to the availability and quality of data and gaps in IT infrastructure. If the organization cannot provide the data needed, the installation will flounder and can lose management’s interest and support (particularly if the project’s sponsor or champion is lost). An installation can also fail through the “toss a big check over the fence” syndrome, in which management funds the software purchase but does not commit the resources or other support needed to see the project through.

Once forecasting software is successfully installed (from a technical standpoint), the project can still fail if the software is never used. The organization may realize it bought the wrong solution, either overly complex and feature-laden for what was needed, or else lacking important capabilities. Perhaps the software was not a good fit for the existing process and workflow, or for the underlying business problem. Or perhaps the software doesn’t integrate well with the existing IT infrastructure—particularly with downstream planning systems.

It is also possible that users were not engaged in the software selection, or their choice was overruled, and they simply resist converting to a system they don’t like. Or the software may not have been thoroughly vetted, and proves unable to handle the volume of real-life data: it crashes, or is crippled by an unresponsive user interface. Rejection by users may also stem from poor software configuration by the implementation team, such as a hierarchy that is overly complex, or that lacks necessary levels.

Last of all, an implementation may fail because the software is incapable of generating usable forecasts. (As a minimum standard for usefulness, the forecasts should be at least as accurate as a naïve model.) There may be flaws in the mathematical calculations in the software, or the particular methods employed may be inappropriate or unsound. The proof is not in the sophistication of the models or their ability to explain the past. The proof is in the quality of the (future) forecasts. If the forecasting output cannot be trusted, and the software is not helping the organization perform better than it had before, the implementation is a failure.
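As one way to apply that minimum standard, the sketch below compares a system’s forecast error against a naïve (last-value) forecast, in the spirit of a Forecast Value Added check. The demand history and system_forecast figures are hypothetical placeholders, not output from any particular product.

```python
# Hedged sketch of the "at least as accurate as a naive model" test.
# All numbers below are hypothetical placeholders.
actuals         = [112, 98, 105, 120, 101, 95, 110, 108]   # observed demand
system_forecast = [100, 110, 100, 110, 115, 100, 100, 105] # software's one-step forecasts

# Naive (random walk) forecast: next period equals the last observed value.
naive_forecast = actuals[:-1]
observed       = actuals[1:]
system_fc      = system_forecast[1:]

def mae(forecasts, actual_values):
    """Mean absolute error of a forecast series against observations."""
    return sum(abs(f - a) for f, a in zip(forecasts, actual_values)) / len(actual_values)

mae_system = mae(system_fc, observed)
mae_naive  = mae(naive_forecast, observed)
print(f"System MAE: {mae_system:.1f}   Naive MAE: {mae_naive:.1f}")
print("Adds value over naive" if mae_system < mae_naive else "No value added over naive")
```

If the software’s error is not meaningfully lower than the naïve benchmark on held-out future periods, the organization is paying for complexity that adds no forecasting value.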

For more on this topic, including suggestions for preproject assessment, eliminating RFPs, evaluating vendors, and the warning signs of failure, see Chapter 7, “Implementing a Forecasting Solution,” in The Business Forecasting Deal (the book).


