Just how naive are you?


Aren’t the internets wonderful? Just today I was trying to find the antonym of “naïve” and came across several terrific choices (sophisticated, worldly, well-informed, and intelligent) and one that didn’t make any sense (svelte???). However, upon further review at Merriam-Webster.com, I discovered that in addition to slender, lithe, and sleek (the definitions I expected), svelte could also mean urbane or suave. So a person could actually be svelte and obese at the same time. I never would have known that – thank you Al Gore!

In real life, it is probably a good thing to be informed, skeptical, and hard to take advantage of. In short, it is good to be svelte. This applies to your (hopefully limited) encounters with strange men at highway rest stops as much as it does to your (hopefully even more limited) encounters with forecasting software vendors.

Despite my plea that you remain svelte in real life, I implore you to be naïve in business forecasting – and use a naïve forecasting model early and often. A naïve forecasting model is the most important model you will ever use in business forecasting. It should also be the worst forecasting model you will ever use – but probably won’t be. Let me explain…

Per the standard forecasting text, naïve forecasts are “Forecasts obtained with the minimal amount of effort and data manipulation and based solely on the most recent information available.” An important characteristic of a naïve forecasting model is that it can be easily automated and produced at virtually no cost, without the need for forecasters or forecasting software. This is important because it sets a baseline for performance. If you can achieve X% error using a naïve model, then you sure as heck better be able to achieve less than X% error with whatever people, process, and technology you are using to forecast. This is the fundamental idea behind Forecast Value Added (FVA) analysis, where you compare all forecasting process activities to “doing nothing” and eliminate those activities that aren’t making the forecast any better.

Purists may argue that the only true naïve forecast is the “no-change” forecast, meaning either a random walk (forecast = last known actual) or a seasonal random walk (e.g. forecast = actual from corresponding period last year). These are referred to as NF1 and NF2 in the Makridakis text (where NF = Naïve Forecast). In our 2006 SAS webseries Finding Flaws in Forecasting, an attendee asked “What about using a simple time series forecast with no intervention as the naïve forecast?” Is that allowed?
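To make the purists’ definitions concrete, here is a minimal sketch of NF1 and NF2 (the data and function names are my own illustration, not from the Makridakis text):

```python
# The two "no-change" naive forecasts. Assumes monthly data with a
# 12-period seasonal cycle; the numbers below are made up for illustration.

def nf1(history):
    """NF1 (random walk): forecast = last known actual."""
    return history[-1]

def nf2(history, season_length=12):
    """NF2 (seasonal random walk): forecast = actual from the
    corresponding period one seasonal cycle ago."""
    return history[-season_length]

monthly_sales = [100, 120, 90, 110, 130, 95, 105, 125, 98, 115, 140, 102,
                 108, 126, 94]

print(nf1(monthly_sales))  # 94  (the last actual)
print(nf2(monthly_sales))  # 110 (same month, one year earlier)
```

Note that NF2 only makes sense once you have at least one full seasonal cycle of history; until then NF1 is about all the naivety you can muster.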

Our purpose is to determine whether all our elaborate forecasting systems and processes are adding value by making the forecast better. For this objective, it is perfectly acceptable to use something more sophisticated than a random walk as another point of comparison in the FVA analysis. A thorough FVA analysis evaluates the performance of every step and participant in the forecasting process. If you have forecasting software that will automatically generate forecasts for you (essentially for “free” once you have licensed and installed the software), it is important to know whether that system generated forecast is any better than NF1 or NF2. The key is comparing costly and heroic forecasting efforts to forecasts created by doing the minimum amount of work. Does the extra cost and effort make a meaningful improvement in the forecast? If not, then the cost and effort probably aren’t worth it.
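As a sketch of what that comparison looks like in practice (the numbers and the choice of MAPE as the error metric are my illustration, not prescriptions):

```python
# A toy FVA comparison: FVA = naive model error minus process error.
# Positive FVA means the forecasting process is adding value over "doing nothing."

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals      = [100, 110, 105, 120]
naive_fcst   = [ 98, 100, 110, 105]   # e.g. NF1: lagged actuals
process_fcst = [102, 108, 104, 118]   # statistical model plus overrides

naive_err   = mape(actuals, naive_fcst)
process_err = mape(actuals, process_fcst)
fva = naive_err - process_err

print(f"Naive MAPE: {naive_err:.1f}%  Process MAPE: {process_err:.1f}%  FVA: {fva:.1f} pts")
```

In a thorough FVA analysis you would run this same comparison at every step of the process (system forecast vs. naive, analyst override vs. system forecast, consensus vs. analyst, and so on), not just once at the end.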

I personally won’t report you to the forecasting police if you do use something a bit more sophisticated than NF1 or NF2 as your naïve model. A moving average or simple exponential smoothing is a suitable choice. However, I will report you for failing to do the appropriate comparisons – and you know what happens then.
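Those slightly-less-naive baselines are just as easy to automate. A quick illustrative sketch (the window and smoothing parameter values here are arbitrary, not recommendations):

```python
# Two slightly-less-naive baseline models: a moving average and
# simple exponential smoothing (SES). Demand numbers are made up.

def moving_average(history, window=3):
    """Forecast the next period as the mean of the last `window` actuals."""
    return sum(history[-window:]) / window

def ses(history, alpha=0.3):
    """Simple exponential smoothing: the level is nudged toward each
    new actual by a fraction alpha; the final level is the forecast."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level

demand = [100, 120, 90, 110, 130, 95]

print(moving_average(demand))  # mean of the last three actuals
print(ses(demand))
```

Both still qualify as “minimal effort” – no forecaster required – which is what makes them legitimate comparison points in an FVA analysis.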


About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.

1 Comment

  1. Mike, excellent blog post! And I do appreciate the humor....
    As a practitioner using FVA, I did have to get creative with some of the naive methods. For example, simply taking the most naive models wouldn't even get me close to the exponential behavior that a new or ramping product or market would experience, or other lifecycle phenomena. The caution in finding the right naive model for each step is that if the naive model doesn't provide a somewhat reasonable baseline, you risk stakeholders not trusting the FVA analysis to reveal anything insightful. Stakeholders are typically very sensitive about this type of analysis of their forecasts, so it is up to you as the FVA expert to strike the right balance between naivety (or "svelteness") and realism of the expected behavior when choosing a naive model. Just make sure to keep it "intellectually honest" and only use data that would have been available to you at the time of the forecast (not hindsight) - life cycle models, for example.
