To make it easy to identify non-value-adding areas, you can build a simple application using SAS® Visual Analytics software. Such an application lets you point and click your way through the organization’s forecasting hierarchy and, at each point, view the performance of the Naïve, Manual, Statistical, and Automated forecasts (or whatever steps your forecasting process includes).
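For readers who want to prototype the same drill-down outside SAS Visual Analytics, here is a rough pandas sketch of the idea, not the application itself. The column names (region, product, actual, naive, manual, statistical, automated) are assumptions for illustration.

```python
import pandas as pd

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    """Mean absolute percent error, in percent."""
    return float((100 * (actual - forecast).abs() / actual.abs()).mean())

def performance_by_node(df: pd.DataFrame, level: list[str]) -> pd.DataFrame:
    """MAPE of each forecasting step, rolled up to the chosen hierarchy level."""
    steps = ["naive", "manual", "statistical", "automated"]
    return df.groupby(level).apply(
        lambda g: pd.Series({s: mape(g["actual"], g[s]) for s in steps})
    )

# Drill down the way the application would:
# performance_by_node(df, ["region"])
# performance_by_node(df, ["region", "product"])
```

Nodes where the manual or automated columns show a higher MAPE than the naive column are the non-value-adding areas to investigate first.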
To properly evaluate (and improve) forecasting performance, we recommend our customers use a methodology called Forecast Value Added (FVA) analysis. FVA lets you identify forecasting process waste (activities that fail to improve the forecast, or that even make it worse). The objective is to help the organization generate forecasts as accurately as can reasonably be expected, with the least wasted effort.
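As a rough illustration of the calculation, here is a minimal pandas sketch of an FVA comparison, assuming a toy data set with hypothetical columns actual, naive, statistical, and final (the forecast after manual overrides).

```python
import pandas as pd

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    """Mean absolute percent error, in percent."""
    return float((100 * (actual - forecast).abs() / actual.abs()).mean())

def fva_stairstep(df: pd.DataFrame) -> pd.DataFrame:
    """FVA of each step, measured as the reduction in MAPE versus the naive forecast.
    A negative FVA flags a step that is failing to add value."""
    naive_mape = mape(df["actual"], df["naive"])
    rows = []
    for step in ["naive", "statistical", "final"]:
        step_mape = mape(df["actual"], df[step])
        rows.append({"process step": step,
                     "MAPE": round(step_mape, 1),
                     "FVA vs naive": round(naive_mape - step_mape, 1)})
    return pd.DataFrame(rows)

# Made-up example: the statistical model improves on the naive forecast,
# but the manual overrides in the "final" forecast give some of that back.
history = pd.DataFrame({
    "actual":      [100, 110,  95, 105],
    "naive":       [ 98, 100, 110,  95],   # last period's actual
    "statistical": [102, 108, 100, 103],
    "final":       [ 96, 118,  88, 112],
})
print(fva_stairstep(history))
```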
As we saw in Steve Morlidge's study of forecast quality in the supply chain (Part 1, Part 2), 52% of the forecasts in his sample were worse than a naive (random walk) forecast. This meant that, over half the time, these companies would have been better off doing nothing and simply using the naive forecast.
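A rough sketch of that kind of benchmarking (not Morlidge's own calculation) compares, for each item and period, the absolute error of the company forecast with the error of a naive random-walk forecast; a ratio above 1 means the forecast did worse than doing nothing. The long-format layout and column names (item, period, actual, forecast) are assumptions for illustration.

```python
import pandas as pd

def share_worse_than_naive(df: pd.DataFrame) -> float:
    """Fraction of item-periods where the forecast error exceeds the naive error."""
    df = df.sort_values(["item", "period"]).copy()
    df["naive"] = df.groupby("item")["actual"].shift(1)   # random-walk forecast
    fc_err = (df["forecast"] - df["actual"]).abs()
    nv_err = (df["naive"] - df["actual"]).abs()
    rae = (fc_err / nv_err).replace([float("inf")], float("nan")).dropna()
    return float((rae > 1).mean())
```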
The Spring 2014 issue of Foresight includes Steve Morlidge's latest article on the topic of forecastability and forecasting performance. He reports on sample data obtained from eight businesses operating in consumer (B2C) and industrial (B2B) markets. Before we look at these new results, let's review his previous arguments: 1. All …
Q: Is the MAPE of the naive forecast the basis for understanding the forecastability of a behavior? Or are there other, more in-depth ways to measure its forecastability? The MAPE of the naive forecast indicates the worst accuracy you should be willing to accept, since that is what you achieve with no forecasting effort at all. You can …
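A minimal sketch of that idea, assuming a long-format table with hypothetical item, period, and actual columns: compute each item's naive-forecast MAPE as a rough ceiling on how badly that item should ever be forecast.

```python
import pandas as pd

def naive_mape_by_item(df: pd.DataFrame) -> pd.Series:
    """Per-item MAPE of the naive (random walk) forecast: roughly the worst
    accuracy you should accept for that item, since the naive costs nothing."""
    df = df.sort_values(["item", "period"]).copy()
    df["naive"] = df.groupby("item")["actual"].shift(1)
    ape = 100 * (df["actual"] - df["naive"]).abs() / df["actual"].abs()
    return ape.groupby(df["item"]).mean().rename("naive_MAPE")

# Items with a low naive MAPE are easy to forecast; items with a high naive
# MAPE are volatile, and even a good process may not beat them by much.
```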
Q: Companies always try to forecast 12 or 24 months ahead. Should we track accuracy of the 1-month, 3-month, 6-month, or x-month-ahead forecast, and does that depend on lead time? How do we determine which of those 12/24 months to track accuracy for? Correct, forecast performance is usually evaluated against the lead time over which the forecast actually drives decisions.
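One way to implement that kind of lag-based tracking is sketched below, assuming you archive every forecast snapshot with the month it was made (made_in) and the month it targets (target_period), both as datetime columns; the names are illustrative only.

```python
import pandas as pd

def accuracy_at_lag(snapshots: pd.DataFrame, actuals: pd.DataFrame,
                    lag_months: int) -> float:
    """MAPE of the forecasts that were made `lag_months` before the target period."""
    s = snapshots.copy()
    s["lag"] = ((s["target_period"].dt.year - s["made_in"].dt.year) * 12
                + (s["target_period"].dt.month - s["made_in"].dt.month))
    merged = s[s["lag"] == lag_months].merge(actuals, on="target_period")
    ape = 100 * (merged["actual"] - merged["forecast"]).abs() / merged["actual"].abs()
    return float(ape.mean())

# If supply decisions are locked in three months out, track accuracy_at_lag(..., 3).
```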
Q: What is a legitimate goal to expect from your FVA...5%, 10%? Q: How do we set a target FVA that forecasters can drive toward? The appropriate goal is to do no worse than a naive model, that is, FVA ≥ 0. Sometimes, especially over short periods of time, you may fall below zero simply by chance.
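To see why short measurement windows can mislead, here is a hedged sketch that recomputes FVA (naive MAPE minus final-forecast MAPE) over trailing windows, using the same hypothetical actual, naive, and final column names as in the earlier sketches.

```python
import pandas as pd

def fva(df: pd.DataFrame) -> float:
    """FVA of the final forecast: naive MAPE minus final MAPE, in percentage points."""
    naive_ape = 100 * (df["actual"] - df["naive"]).abs() / df["actual"].abs()
    final_ape = 100 * (df["actual"] - df["final"]).abs() / df["actual"].abs()
    return float(naive_ape.mean() - final_ape.mean())

def rolling_fva(df: pd.DataFrame, window: int) -> pd.Series:
    """FVA recomputed over each trailing window of `window` periods."""
    return pd.Series(
        [fva(df.iloc[i - window:i]) for i in range(window, len(df) + 1)],
        index=df.index[window - 1:],
    )

# rolling_fva(df, 3) may dip below zero in some windows even when fva(df)
# over the full history is comfortably positive.
```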
Last time we saw two situations where you wouldn't bother trying to improve your forecast: when forecast accuracy is "good enough" and is not constraining organizational performance, and when the costs and consequences of a less-than-perfect forecast are low. (Another situation was brought to my attention by Sean Schubert of …)