The Dirty Tricks of Selling #1: Show how you fit


Tricks aren’t just for kids (or Louisiana senators or New York governors, for that matter). Tricks are the lifeblood of many a forecasting software salesperson. Why admit that forecasting is difficult, that most things can’t be forecast as accurately as we would like, or that your software has the statistical capabilities of a turnip? It is so much easier to sell forecasting software with false promises and deception.

Trick #1: Show how you fit, not how you forecast

Most people realize how difficult it is to generate accurate forecasts. But have you ever realized how easy it is to fit a model to historical behavior??? Think about it: If you have two historical data points, you can get a perfect fit with a line. If you have three points, you can get a perfect fit with a quadratic. In general, as you increase the number of data points in your time-series history, you can always find a polynomial (of degree one less than the number of points) that fits the history perfectly. Unfortunately, fitting history is not the business problem. The problem is generating a decent forecast.
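
To see why a perfect fit means so little, here is a minimal sketch in Python (numpy’s polyfit, with made-up sales numbers, not any vendor’s method): the polynomial reproduces the history exactly, then produces a laughable value for the very next period.

```python
import numpy as np

# Hypothetical sales history hovering around 110-125 units.
history = np.array([112, 118, 109, 120, 115, 123], dtype=float)
t = np.arange(len(history), dtype=float)

# Degree = (number of points - 1) guarantees a perfect fit to the history.
coeffs = np.polyfit(t, history, deg=len(history) - 1)
fitted = np.polyval(coeffs, t)
print("max in-sample error:", np.max(np.abs(fitted - history)))  # essentially zero

# The same "perfect" model, asked for a one-period-ahead forecast.
forecast = np.polyval(coeffs, len(history))
print("next-period forecast:", forecast)  # roughly 374, nowhere near the 110-125 range
```

The fit is flawless; the forecast is useless.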

Forecasting vendors love to demonstrate how well their models fit your history. Mean Absolute Percent Error (MAPE) of the historical fit is always fabulous, rarely more than a few percentage points. The unspoken (and sometimes even spoken) implication is that their software can model your history and, therefore, solve your forecasting problem. This is the implication in the online Microsoft demo that purports to show how to use Excel to generate forecasts. (If you haven’t viewed this 60-second crime against human decency already, please take a minute to do so.)

Forecasting software needs to be evaluated by how well it forecasts, not by how well it fits your history. Proper assessment of forecasting performance tells you two important things. First, it lets you distinguish competing software packages – which ones can forecast worth a darn, and which ones can’t. Second, it can give you an indication of what kind of accuracy is “reasonable to expect” when you implement a new package. Expecting “fit to history” to be a reliable indicator of forecast accuracy is a sure path to disappointment.
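
If you want to run this evaluation yourself, the idea is nothing fancier than a holdout test. Here is a rough sketch (again Python with made-up numbers, not any vendor’s method): withhold the most recent periods, fit on the rest, then compare the MAPE of the fit to the MAPE of the forecasts over the holdout, and to a naive benchmark.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percent Error, in percent."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical sales history; hold back the last 3 periods for evaluation.
history = np.array([112, 118, 109, 120, 115, 123, 117, 125, 119, 128], dtype=float)
train, holdout = history[:-3], history[-3:]
t = np.arange(len(train), dtype=float)

# The "fit trick": a high-degree polynomial gives a near-perfect in-sample MAPE...
coeffs = np.polyfit(t, train, deg=len(train) - 1)
fit_mape = mape(train, np.polyval(coeffs, t))

# ...but what matters is the MAPE of its forecasts over the periods it never saw.
future_t = np.arange(len(train), len(history), dtype=float)
forecast_mape = mape(holdout, np.polyval(coeffs, future_t))

# A naive benchmark: repeat the last training value for every holdout period.
naive_mape = mape(holdout, np.full(len(holdout), train[-1]))

print(f"in-sample MAPE:  {fit_mape:.2f}%")
print(f"holdout MAPE:    {forecast_mape:.2f}%")
print(f"naive benchmark: {naive_mape:.2f}%")
```

Any package worth buying should beat the naive benchmark on the holdout, not just dazzle you with its in-sample fit.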


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

2 Comments

  1. I just found this blog and I am having so much fun reading it. Thanks for putting this together; I feel much better knowing that my day-to-day forecasting problems are shared by other people.
    I just watched the Excel demo for creating forecasts, and truly it is a crime against human decency.
    The sad part is that it is standard practice.
    Great blog, I am enjoying it a lot.

  2. Pingback: Changing the paradigm for business forecasting (Part 6) - The Business Forecasting Deal
