Forecast quality in the supply chain (Part 2)

As we saw last time with Steve Morlidge's analysis of the M3 data, forecasts produced by experts under controlled conditions, on data containing no difficult-to-forecast series, still failed to beat a naive forecast 30% of the time.

So how bad could it be for real-life practitioners forecasting real-life industrial data?

In two words: Pretty bad.

The New Study

Morlidge's nine sample datasets covered 17,500 products, over an average of 29 (weekly or monthly) periods. For these real-life practitioners forecasting real-life data, 52% of forecasts had RAEs above 1.0.

FIFTY-TWO PERCENT

As he puts it, "This result distressingly suggests that, on average, a company's product forecasts do not improve upon naive projections."

Morlidge also found that only 5% of the 17,500 products had RAEs below 0.5, which he has posited as a reasonable estimate of the practical lower limit for forecast error.

The Implications

What are we to make of these findings, other than to gnash our teeth and curse the day we ever got ourselves suckered into the forecasting profession? While Morlidge's approach continues to receive further vetting on a broader variety of datasets, he itemizes several immediate implications for the practical task of forecasting in the supply chain:

1. RAE of 0.5 is a reasonable approximation to the best forecast that can be achieved in practice.

2. Traditional metrics (e.g. MAPE) are not particularly helpful. They do not tell you whether the forecast has the potential to be improved, and a change in the metric may reflect a change in the volatility of the data rather than a change in the level of performance.

3. Many forecasting methods add little value.

On the positive side, his findings show that there is significant opportunity for improvement in forecast quality. He found the weighted average RAE to be well above the lower bound for forecast error (RAE = 0.5). And roughly half of all forecasts were worse than the naive forecast -- error which should be avoidable.

Of course, we don't know in advance which forecasts will perform worse than the naive forecast. But by rigorous tracking of performance over time, we should be able to identify those that are problematic. And we should always track separately the "statistical forecast" (generated by the forecasting software) and the "final forecast" (after judgmental adjustments are made) -- a distinction that was not possible in the current study.
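
To make that tracking concrete, here is a minimal sketch (in Python with pandas, my own illustrative code rather than anything from the study) of a value-added report that keeps the statistical and final forecasts separate. The column names (`product`, `period`, `actual`, `statistical_forecast`, `final_forecast`) are assumptions about how the history might be laid out.

```python
import pandas as pd

def rae(actual: pd.Series, forecast: pd.Series) -> float:
    """Relative Absolute Error: total absolute forecast error divided by the
    total absolute error of the 'same as last period' naive forecast."""
    forecast_error = (actual - forecast).abs().iloc[1:].sum()
    naive_error = actual.diff().abs().iloc[1:].sum()
    return forecast_error / naive_error

def value_added_report(history: pd.DataFrame) -> pd.DataFrame:
    """One row per product, comparing the system forecast and the final
    (judgmentally adjusted) forecast against the naive benchmark."""
    rows = []
    for product, grp in history.sort_values("period").groupby("product"):
        stat_rae = rae(grp["actual"], grp["statistical_forecast"])
        final_rae = rae(grp["actual"], grp["final_forecast"])
        rows.append({
            "product": product,
            "statistical_rae": stat_rae,
            "final_rae": final_rae,
            "judgment_added_value": final_rae < stat_rae,   # did the overrides help?
            "worse_than_naive": final_rae > 1.0,            # candidate for remedial action
        })
    return pd.DataFrame(rows)
```

Products flagged worse_than_naive are the ones where, as Morlidge suggests, eliminating poor forecasting (perhaps by defaulting to the naive forecast) is likely the quickest win.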

Morlidge concludes,

...it is likely that the easiest way to make significant improvement is by eliminating poor forecasting rather than trying to optimise good forecasting. (p.31)

[You'll find a similar sentiment in this classic The BFD post, "First, do no harm."]

Hear More at Analytics 2014 in Frankfurt

Join over 500 of your peers at Analytics 2014, June 4-5 in Frankfurt, Germany. Morlidge will be presenting on "Forecasting Value Added and the Limits of Forecastability." Among the 40 presentations and four keynotes, there will also be forecasting sessions on:

  • Forecasting at Telefonica Germany
  • Promotion Forecasting for a Belgian Food Retailer, Delhaize
  • The New Analytical Mindset in Forecasting: Nestle's Approach in Europe and a Case Study of Nestle Spain
  • Big Data Analytics in the Energy Market: Customer Profiles from Smart Meter Data
  • Shopping and Entertainment: All the Figures of Media Retail

In addition, my colleague Udo Sglavo will present "A New Face for SAS Analytical Clients" -- the forthcoming web interface for SAS Forecast Server.

(Full Agenda)


Forecast quality in the supply chain (Part 1)

The Spring 2014 issue of Foresight includes Steve Morlidge's latest article on the topic of forecastability and forecasting performance. He reports on sample data obtained from eight businesses operating in consumer (B2C) and industrial (B2B) markets. Before we look at these new results, let's review his previous arguments:

1. All extrapolative (time-series) methods are based on the assumption that the signal embedded in the data pattern will continue into the future. These methods thus seek to identify the signal and extrapolate it into the future.

2. Invariably, however, a signal is obscured by noise. A “perfect” forecast will match the signal 100% but, by definition, cannot forecast noise. So if we understand the nature of the relationship between the signal and noise in the past, we should be able to determine the limits of forecastability.

3. The most common naive forecast uses the current period actual as the forecast of the next period. As such, the average forecast error from the naïve model captures the level of noise plus changes in the signal.

4. Thus the limit of forecastability can be expressed in terms of the ratio of the actual forecast error to the naïve forecast error. This ratio is generally termed a relative absolute error (RAE). I have also christened it the avoidability ratio, because it represents the portion of the noise in the data that is reduced by the forecasting method employed.

5. In the case of a perfectly flat signal -- that is, no trend or seasonality in the data -- the best forecast quality achievable is an RAE = 0.7 (a small simulation following this list illustrates why). So unless the data have signals that can be captured, the best forecast accuracy achievable is a 30% reduction in noise from the naïve forecast.

6. An RAE = 1.0 should represent the worst forecast quality standard, since it says that the method chosen performed less accurately than a naïve forecast. In this circumstance, it might make sense to replace the method chosen with a naïve forecasting procedure. (pp. 26-27)
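
To make points 4-6 concrete, here is a small Python sketch (my own illustration, not Morlidge's code) that computes RAE and shows where the 0.7 figure for a flat signal comes from: even a perfect estimate of the level cannot forecast the noise, while the naive error contains two noise terms, so the ratio settles near 1/sqrt(2), roughly 0.7.

```python
import numpy as np

rng = np.random.default_rng(42)

def rae(actuals, forecasts):
    """Relative Absolute Error: absolute forecast error relative to the
    error of the 'same as last period' naive forecast."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    forecast_error = np.abs(actuals[1:] - forecasts[1:]).sum()
    naive_error = np.abs(actuals[1:] - actuals[:-1]).sum()   # naive = previous actual
    return forecast_error / naive_error

# Perfectly flat signal (level 100) plus noise: the best possible forecast is
# the level itself, yet RAE still only reaches about 0.7.
level, noise_sd, n = 100.0, 10.0, 100_000
actuals = level + rng.normal(0.0, noise_sd, size=n)
perfect_signal_forecast = np.full(n, level)

print(round(rae(actuals, perfect_signal_forecast), 2))   # prints roughly 0.71
```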

The M3 Study

In his previous article (Foresight 32 (Winter 2014), 34-39) Morlidge applied this approach to a segment of data from the M3 forecasting competition that was most relevant to supply chain practitioners.

The M3 competition involved 24 forecasting methods from academics and software vendors, and the 334 time series that Morlidge analyzed included no difficult-to-forecast intermittent demand patterns or new products. Yet all of the 24 forecasting methods generated RAEs above 1.0 more than 30% of the time. So nearly 1/3 of the time their performance was worse than a naive model!

Morlidge concluded that the average performance of any forecasting method may be less important than the distribution of its actual performance. He also emphasized that,

...we do not yet have the capability to identify the potential for poor forecasting before the event. It is therefore critical that actual forecast performance be routinely and rigorously measured after the event, and remedial action taken when it becomes clear that the level of performance is below expectations. (p. 28)

In the next installment we'll look at results from the new study.

Guest blogger: Len Tashman previews Spring 2014 issue of Foresight

Here is editor Len Tashman's preview of the new Spring 2014 issue of Foresight. In particular note the new article by Steve Morlidge of CatchBull, reporting on an analysis of eight B2B and B2C companies, which we'll discuss in a separate post.

An organization’s collaboration in forecasting and planning has both internal and external components. While sales and operations planning (S&OP) has become the standard infrastructure for collaborative efforts within the firm, opportunities for collaboration among supply chain partners have emerged as potential win-wins for the companies involved.

Len Tashman, Editor of Foresight

Our feature article examines the characteristics, benefits, and challenges of collaborative planning, forecasting, and replenishment (CPFR) among supply chain partners. Jeff Van-Deursen and John Mello’s Roadmap to Implementing CPFR recommends specific policies that permit two or more companies to “monitor, control, and facilitate the overall performance of a supply chain by achieving a smooth flow of product between firms.” CPFR partners, however, “must integrate their supply chains, sales organizations, and marketing intelligence into cross-organizational sales and operations planning (S&OP) and budgeting processes.”

In his Commentary on the Roadmap, Ram Ganeshan emphasizes that “CPFR is not a silver bullet for improving forecasts, but rather a set of structured processes that improve communication and coordination between supply chain partners on matching product supply and demand.” The key challenges, however, involve ensuring data integrity, standardizing forecasts, revising transactional relationships, and a willingness to deal with some significant organizational changes that could cause discord among functional units.

In our section on Forecasting Intelligence, Ram Ganeshan returns as an author to describe how retailers can benefit from using Clickstream Analysis for Forecasting Online Behavior. A clickstream is an online trail: a prospective customer’s sequence of keystrokes or mouse clicks made as they consider making a purchase on the Internet. By capturing and analyzing the clickstream, a seller can better understand the customer’s intentions and thus improve the purchasing experience – as well as the retailer’s profitability.

Continuing Foresight’s exclusive coverage of forecastability issues (that is, the upper and lower limits of forecast accuracy), Steve Morlidge analyzes item-level forecasts at eight companies, granular data notably missing from the well-known M-competitions of forecast accuracy. Steve’s findings are an eye-opener, suggesting clearly that companies are not typically extracting maximum benefit from their forecasting systems, and showing how forecasters can upgrade Forecast Quality in the Supply Chain.

In 1981, the respected economist Julian Simon challenged the equally respected ecologist Paul Ehrlich to a wager over whether the planet’s future would entail mass starvation as population outpaced our productive resources (Ehrlich), or see the fruits of technological progress and human ingenuity sustain and improve our lives with ever-increasing abundance (Simon). Foresight Long-Range Forecasting Editor Ira Sohn expands upon Paul Sabin’s recent book The Bet to describe “the scholarly wager of the decade” and how it mirrored at the time a larger national debate between future-thinking optimists and pessimists.

We conclude this issue with a pair of book reviews on two volumes of a very different nature. Predictive Business Analytics by Lawrence Maisel and Gary Cokins seeks to show how companies can improve their usage of analytical tools. Reviewer McKay Curtis offers a blunt assessment of the book’s value to experienced analysts.

The Map and the Territory is former Federal Reserve Board chairman Alan Greenspan’s reflection on our most recent economic upheaval (which is normally dated to begin following the end of his term as chairman). Reviewer Geoff Allen tells us: “What you will learn from this book are some nice, personal Greenspan stories and a lot of data analysis to support a particular Greenspan viewpoint. Whether you will be persuaded to that viewpoint based on the evidence presented is another matter.”


Q&A with Steve Morlidge of CatchBull (Part 4)

Q: ­How would you set the target for demand planners: all products at 0.7? All at practical limit (0.5)?­

A: In principle, forecasts are capable of being brought to the practical limit of an RAE of 0.5.

Whether it is sensible to attempt to do this for all products irrespective of the amount of effort and resources involved in achieving it is another matter. It would be much more sensible to set aspirations based upon considerations such as the business benefit of making an improvement, which may be based on the size of the products, the perceived scope for improvement or strategic considerations such as the importance of a set of products in a portfolio.

Target setting also has a large psychological dimension which needs to be taken into account. For example, there is a lot of evidence that unrealistic targets unilaterally imposed on people can be demotivating, whereas when individuals are allowed to set their own targets, those targets are usually more stretching than the ones given to them by others.

One approach which avoids some of the pitfalls associated with traditional target setting is to strive for continuous improvement (where the target is in effect to beat past performance) in tandem with benchmarking, whereby peer pressure and the transfer of knowledge and best practice drives performance forward.

Q: ­Does the thinking change when you are forecasting multiple periods forward?­

A: In some respects the approach does not change. The limits of what can be achieved are still the same, since they are given by the level of the noise in a data series compared to the level and nature of change in the signal.

Of course, the further out one forecasts the more inaccurate forecasts are likely to be because there is more opportunity for the signal to change in ways that cannot be anticipated. So while the theoretical lower bound for forecast error doesn’t change, the practical difficulties in achieving it get larger, so we would expect RAE to deteriorate the further ahead we forecast.

There is also an impact on the upper bound. With one-period-ahead forecasts there is no reason why performance should be worse than the ‘same as last period’ naïve forecast (RAE = 1.0). This doesn’t apply when one is forecasting more than one period ahead: the default stance of ‘use the latest available actual’ means that, with a lag of n periods, the benchmark is a naive forecast based on the actual from n periods earlier. In practice this will usually result in a maximum acceptable RAE of more than 1.0. The exact upper limit needs to be calculated by comparing the one-period-ahead naïve forecast error with the n-period-ahead naïve error. The greater the trend in the data, the larger this difference will be.
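
A rough sketch of that calculation, under my own naming and the simplifying assumption that the lag-n benchmark is simply the actual from n periods earlier:

```python
import numpy as np

def naive_mae(actuals, lag=1):
    """Mean absolute error of the 'same as the actual from `lag` periods ago' forecast."""
    a = np.asarray(actuals, dtype=float)
    return float(np.mean(np.abs(a[lag:] - a[:-lag])))

def upper_limit_rae(actuals, horizon):
    """Approximate ceiling for acceptable RAE at a given lead time: the lag-`horizon`
    naive error relative to the lag-1 naive error that RAE is normally scaled by."""
    return naive_mae(actuals, lag=horizon) / naive_mae(actuals, lag=1)

# Example with a trending series: the longer the lead time, the further the
# stale actual drifts from the truth, so the ceiling rises above 1.0.
rng = np.random.default_rng(0)
series = 100 + 2.0 * np.arange(60) + rng.normal(0, 5, size=60)
print(round(upper_limit_rae(series, horizon=3), 2))   # comfortably above 1.0
```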

Q: ­Can you comment on the number of observations you would use to estimate RAE?

A: As with any measure, the more data points you have the more representative (and therefore reliable) the number. On the other hand, RAE can change over time and if you have a large amount of historical data which you then average these important shifts in performance will be lost.

As a rule of thumb, I would be uncomfortable taking any significant decisions (to change forecast methods, etc.) with fewer than 6 data points.

Q: ­Have you found that more people inputting into the forecast adds value or destroys value?  For example, inputs from several levels of sales such as account managers, country managers, demand planners, executives, etc. ­

A: I do not have enough evidence to draw any hard and fast conclusions but I would be surprised if the answer to this question was not ‘it depends’. Some interventions made by some people at some times will be helpful; others made by other people at other times will be unhelpful.

The key to answering this question is to measure the contribution using Forecast Value Added. In addition, interventions are more likely to add value when the data series is very volatile, particularly if the volatility is driven by marketplace activity that can be reliably estimated judgementally. If a data series is very stable, the opportunity to improve matters tends to be eclipsed by the risk of making things worse.


Q&A with Steve Morlidge of CatchBull (Part 3)

Q: ­How important is it to recognize real trend change in noisy data?­

A: It is very important. In fact the job of any forecast algorithm is to predict the signal – whether it is trending or not – and to ignore the noise.

Unfortunately this is not easy to do, because the trend can be unstable and the noise confuses the situation. In fact, one of the most common problems with forecasting algorithms is that they ‘overfit’ the data, which means that they mistake noise for a signal.

This is the reason why experts recommend that you do not use R² or other ‘in sample’ fit statistics as a way of selecting which algorithm to use. The only way to know whether you have made the right choice is by tracking performance after the event, ideally using a statistic like RAE, which allows for forecastability and helps you make meaningful comparisons and judgements about the level of forecast quality.

Q: ­From what you've seen, does RAE tend to be stable over time?­

A: No it does not.

In my experience performance can fluctuate over time. This might be the result of a change to the behaviour of the data series but the most common cause in my experience is a change in the quality of judgemental interventions. In particular I commonly see changes to the level of bias – systematic over or underforecasting.

A simple average will not pick this up, however. You need to plot a moving average of RAE or, even better, an exponentially smoothed average, as this takes in all available data but gives more weight to the most recent.
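
One way to implement such a tracked measure is sketched below (my own illustration; the smoothing constant is an arbitrary choice, and smoothing the error and the naive error separately avoids the instability of smoothing period-by-period ratios).

```python
import numpy as np

def smoothed_rae(actuals, forecasts, alpha=0.2):
    """Running RAE: exponentially smooth the absolute forecast error and the
    absolute naive error separately, then report their ratio each period."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    smooth_err = smooth_naive = None
    track = []
    for t in range(1, len(actuals)):
        err = abs(actuals[t] - forecasts[t])
        naive_err = abs(actuals[t] - actuals[t - 1])
        if smooth_err is None:                      # initialise on the first comparison
            smooth_err, smooth_naive = err, naive_err
        else:
            smooth_err = alpha * err + (1 - alpha) * smooth_err
            smooth_naive = alpha * naive_err + (1 - alpha) * smooth_naive
        track.append(smooth_err / smooth_naive if smooth_naive else np.nan)
    return np.array(track)   # plot this series to spot drifting performance or bias
```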

Q: ­All of this assumes that there are no outside attributes available upon which to base a forecast. This all applies then only to situation in which the forecaster has only past data on the item itself to be forecasted. Is this correct?­

A: The main technique used by supply chain forecasters is time series forecasting, whereby an algorithm is used to try to identify the signal in demand history, which is then extrapolated into the future – on the assumption that the pattern will continue. The implicit assumption here is that the forecast (the dependent variable) is a product of the signal in the series’ own history (the independent variable); since there is only one such variable, this approach is termed ‘univariate’.

There are other types of forecasting which use other variables in addition to, or instead of, the history of the time series. These are known as ‘multivariate’ techniques, since there is more than one explanatory variable.

Irrespective of the approach used, the limits of forecastability still apply. There is no good reason why any forecasting method should consistently fail to beat the naïve forecast (that is, have an RAE in excess of 1.0). The limit of what is forecastable is a product of the level of noise (which can’t be forecast by any method) compared to the level and nature of change in the signal.

Different techniques – univariate or multivariate – will be more or less successful in forecasting the signal but the same constraints apply to all.


Q&A with Steve Morlidge of CatchBull (Part 2)

Q: ­Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is) or do you think this will confuse stakeholders?

A: This just depends what is meant by stakeholders. And what is meant by forecast accuracy.

If stakeholders means those people who contribute to the forecast process by providing the market intelligence that drives the judgemental adjustments made to the statistical forecast, the answer is a resounding ‘yes’…at least in principle. Many forecasts are plagued with bias, and a common source of bias infection is overly pessimistic or (more commonly) optimistic input from those supplying ‘market intelligence’.

Also, those responsible for making the decision to invest in forecasting processes and software need to know what kind of return it has generated.

But all too often impenetrable and meaningless statistics are foisted on stakeholders using measures that are difficult for laymen to interpret and provide no indication of whether the result is good or bad.

This is why I strongly recommend using a measure such as RAE which, by clearly identifying whether and where a forecast process has added value, is easy to understand and meaningful from a business perspective.

Q: ­When you say RAE needs to be calculated at the lowest level do you mean by item, or even lower such as by item shipped by plant X to customer Y?­

A: Forecasting demand for the Supply Chain, and replenishing stock based on this forecast, is only economically worthwhile if it is possible to improve on the simple strategy of holding a defined buffer (safety) stock and replenishing it to make good any withdrawals in the period.

What implications does this have for error measurement?

First, since this simple replenishment strategy is arithmetically equivalent to using a naïve forecast (assuming no stock outs), and the level of safety stock needed to meet a given service level is determined by the level of errors (all other things being equal), if a forecast has a RAE below 1.0 it means that the business needs to hold less stock.

The second implication concerns the level of granularity at which error should be measured. Since the goal is to have the right amount of stock at the right place at the right time, error should be measured at the location/unique stock item level, in buckets which (as far as possible) match the frequency at which stock is replenished. Measuring error across all locations will understate effective forecast error, since having the right amount of stock in the wrong place is costly. And while it might be helpful to identify the source of error if different customers are supplied from the same stock, measuring error at a customer level will overstate effective error.

Q: ­What are your thoughts on using another model for benchmarking forecast error besides the naive model?­

A: Relative Absolute Error (RAE) is a measure which compares the average absolute forecast error with that from a simple ‘same as last period’ naïve error. This approach has the advantage of simplicity and ease of interpretation. It is easy to calculate and, since the naïve forecast is the crudest forecasting method conceivable, then a failure to beat it is something that is very easy to understand – it is baaad!

But the naïve forecast is more than a mere benchmark.

The ultimate economic justification for forecasting is that it is more efficient than a simple replenishment strategy, whereby stock is maintained at a constant level by making good the sales made in the prior period. A naïve forecast is mathematically equivalent to this strategy, so the degree to which a forecast improves on it is a measure of how much value the forecast has added. So RAE, in which the naïve forecast provides the denominator, is economically meaningful in a way that would not be possible if another method were chosen.

Secondly, the naïve forecast error reflects the degree of period-to-period volatility. This means that it is a good proxy measure for the forecastability of the data set and, given certain assumptions, it is possible to make theoretical inferences about the minimum level of forecast error. As a result, a specific RAE provides an objective measure of how good a forecast really is, in a way that is not possible if another forecast method were used to provide the denominator. In that case the result would say as much about the performance of the benchmark method as it does about the performance of the actual method…and it would be impossible to disentangle the impact of one from the other.


Q&A with Steve Morlidge of CatchBull (Part 1)

In a pair of articles published in Foresight, and in his SAS/Foresight webinar "Avoidability of Forecast Error" last November, Steve Morlidge of CatchBull laid out a compelling new approach on the subject of "forecastability."

It is generally agreed that the naive model (i.e. random walk or "no change" model) provides the "worst case" for how your forecasting process should perform. With the naive model, your last observed value becomes your forecast for the future. (So if you sold 100 units last week your forecast for this week is 100. If you sell 150 this week, your forecast for next week becomes 150, and so on.)

The naive model generates forecasts with essentially no effort and no cost. So if your forecasting process forecasts worse than the naive model, something is terribly wrong. You might need to stop whatever it is you are doing and just use the naive model!

While the naive model provides what should be the "worst case" forecasting performance, a more difficult question is what is the best case? What is the best forecast accuracy we can reasonably expect for a given demand pattern? In other words, what forecast error is avoidable? I spelled out Steve's argument in a four part blog series last summer (Part 1, Part 2, Part 3, Part 4), and you can watch his webinar on-demand. He has also published a new article appearing in the Spring 2014 issue of Foresight.

In response to several questions we received about his material, Steve has graciously provided written answers which we'll share over a new series of posts.

Q&A with Steve Morlidge of CatchBull

Q: ­How does this naive forecast error work if your historic data has constant seasonality?­

A: In theory, the greater the change in the signal from period to period, the lower the RAE (Relative Absolute Error) it is possible to achieve. But in practice – usually – the more changeable the signal, the more difficult it is to forecast. For this reason we find it is difficult to beat an RAE of 0.5, and it is very difficult to consistently beat 0.7.

The one exception to this general rule is seasonality. This is an example of a change in the signal which is often relatively easy to forecast. For this reason, businesses which are predictably seasonal in nature often have an average RAE that is marginally better than those of other businesses. Examples are businesses which sell more around Christmas and other public holidays. A business which sells ice cream, for instance, is clearly seasonal, but its seasonality is not predictable, and so we wouldn’t expect it to achieve a better score than the norm.

Although the average RAE for predictably seasonal businesses is sometimes better than the norm, such businesses usually still have a very high proportion of their portfolio with RAE in excess of 1.0.

As a result I believe the RAE metric is valid and useful even for seasonal businesses but it may be that for those products that are predictably seasonal your RAE targets should be slightly more stretching – perhaps by +/- 0.2 RAE points.

Q: ­What is a good test in Excel to determine if a data series is a random walk?­

A: If a data series approximates a random walk it is impossible to forecast in the conventional sense; the naïve (same as last period) forecast is the optimal forecast. It is likely therefore that many forecasters are wasting a lot of time and energy trying to forecast the unforecastable and destroying value in the process.

It is very difficult to spot a random walk, however; it is difficult to distinguish signal from noise, and very often a random walk can look like it has a trend. For instance, stock market price movements are very close to a random walk, but there is an industry of chartists who believe they can detect patterns in the data and make predictions based on them.

Randomness is a difficult concept from a mathematical point of view – it is simply the absence of pattern. It is impossible to prove that a data sequence is random – you can only state that you cannot find a pattern, and there are potentially an infinite number of patterns.

From a practical point of view, the best thing to do is to compare the naïve forecast error (the ‘same as last period’ or ‘naïve 1’ method) with that from a handful of simple forecast processes: simple smoothing with and without a trend, and perhaps a naïve forecast based on prior-year actuals (‘naïve 2’) as a simple seasonal forecasting method. If all of these fail to beat the naïve forecast, there is a reasonable chance the series is ‘unforecastable’ from a practical point of view, and the best strategy might be to use the naïve forecast, particularly if the item is a small one.
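
The question asks about Excel, but the same comparison is easy to sketch in a few lines of Python (the helpers below are my own illustration; a trended smoothing variant could be added in the same way).

```python
import numpy as np

def mae(errors):
    return float(np.mean(np.abs(errors)))

def one_step_ses(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts (level only)."""
    level, forecasts = y[0], []
    for t in range(1, len(y)):
        forecasts.append(level)                 # forecast for period t, made at t-1
        level = alpha * y[t] + (1 - alpha) * level
    return np.array(forecasts), y[1:]

def random_walk_check(y, season=12):
    """Compare the naive-1 benchmark with a couple of simple alternatives.
    If nothing beats naive-1, the series is practically unforecastable."""
    y = np.asarray(y, dtype=float)
    results = {"naive-1 (same as last period)": mae(y[1:] - y[:-1])}
    ses_forecasts, ses_actuals = one_step_ses(y)
    results["simple smoothing"] = mae(ses_actuals - ses_forecasts)
    if len(y) > season:
        results["naive-2 (same period last year)"] = mae(y[season:] - y[:-season])
    return results   # random-walk-like if naive-1 is (close to) the smallest error
```

If the naive-1 error comes out smallest (or very close to it), the practical conclusion is the one above: just use the naive forecast.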


When to engage the sales force in forecasting

Engaging the sales force in forecasting sounds like a good idea, doesn't it?

Compared to everyone else in the organization, don't sales people have the closest contact with our customers? Therefore, shouldn't they know better than anyone else our customers' future behavior?

There are at least three problems with assuming a priori that engaging the sales force will improve forecasting:

  1. Do sales people really know their customers' future behavior?
  2. Do sales people have any motivation to give an honest forecast?
  3. Does improving customer level forecasts even matter?

For the sake of argument, let's assume affirmative answers to 1 and 2 -- that the sales force has knowledge of their customers' future behavior, and provides an honest forecast of that behavior. Can better customer level forecasts help us?

For maintaining appropriate inventory and customer service (order fill) levels, we want a good forecast by Item / Location (where location is the point of distribution, e.g. a Distribution Center (DC)). As long as we have the right inventory by Item / DC, we don't have to care what individual customers are demanding.

If volume for an Item through the DC is dominated by one customer (or a small number of customers), then it could be helpful to have more accurate Item / Customer forecasts. Improving the forecasts for these dominant customers would likely improve the forecast that matters -- the Item / DC forecast.

On the other hand, suppose the DC fills orders for dozens or hundreds or thousands of customers, none of which is more than a small percentage of total demand at that DC. In this situation, positive and negative errors in the Item / Customer forecasts will tend to cancel each other out when aggregated to Item / DC level. So even if you can improve Item / Customer Forecasts, this is unlikely to make much improvement in the forecast that matters -- the Item / DC forecast.
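
A quick, purely illustrative simulation of that cancellation effect (all numbers invented: 200 customers, each customer forecast taken to be actual demand perturbed by an independent 20% error):

```python
import numpy as np

rng = np.random.default_rng(7)

n_customers, n_periods = 200, 52
demand = rng.poisson(50, size=(n_customers, n_periods)).astype(float)
customer_forecast = demand * (1 + rng.normal(0.0, 0.20, size=demand.shape))

# Error measured per customer vs. error measured on the DC total.
customer_error = np.mean(np.abs(demand - customer_forecast) / demand)
dc_actual = demand.sum(axis=0)
dc_forecast = customer_forecast.sum(axis=0)
dc_error = np.mean(np.abs(dc_actual - dc_forecast) / dc_actual)

print(f"average Item / Customer error: {customer_error:.1%}")   # around 16%
print(f"Item / DC error:               {dc_error:.1%}")         # around 1% -- errors cancel
```

Improving the individual customer forecasts would barely move the DC-level number here; with one dominant customer, the picture changes as described above.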

[Note that some organizations utilize Customer level forecasts for account planning, setting quotas for sales people, etc. So there may be other reasons you want to do Customer level forecasting. Just realize that improving the Item / DC forecast may  not be one of them.]

Efficiently Gathering Sales Force Input

If we are going to engage the sales force in forecasting, we ought to at least do this efficiently. Time spent forecasting is time taken away from building relations with customers -- and selling.

One way to gather input was suggested by Stefan de Kok of ToolsGroup:

...there is huge value in getting input from humans, sales reps included. That input however should be market intelligence, not adjustments to quantities. For example, let the sales rep input that their account is running a promotion and then let the system determine what the quantity impact is. Not only will the uplift become more accurate quickly, but also the baseline will improve. Ultimately it becomes a lower effort (but not zero) for the sales people and their forecasts become much more reliable. (source: LinkedIn discussion group.)

The idea is to minimize the time and effort from the sales force, requiring they provide information (promotional plans, new stores (or store closings), more (or less) shelf space, etc.) that can be put into the statistical forecasting models. But not requiring them to come up with specific numerical forecasts.

As always, the value added by these efforts needs to be measured -- are they making the forecast more accurate? If so, and your software can take advantage of the inputs, this is one approach.

Another approach is to provide the sales force with Item / Customer forecasts (generated by your forecasting software). Then have them make overrides when they feel they know something that wasn't already incorporated into the statistical forecasts.

This approach can be wildly ineffective and inefficient when sales people override every forecast without making it better. (Improvement, or not, is easily measured by FVA.)

The key is to train the sales people to only make changes when there is really good reason to (and otherwise, to just leave the statistical forecast intact). Eric Wilson of Tempur-Sealy achieved this by appealing to the competitive nature of sales people, urging them to "beat the nerd in the corner" and only make changes that they are certain will improve the nerd's statistical forecast.

Revenge of the Nerds?

There may be good reasons to engage the sales force in forecasting; we just can't assume this is always the case. When there are good reasons, focus on the efficiency and effectiveness of the inputs -- minimizing the amount of effort required to provide inputs, and measuring FVA of the results.

Engaging the sales force: Forecasts vs. Commitments

Whether to engage sales people in the forecasting process remains hotly debated on LinkedIn.

While I have no objection in principle to sales people being involved in the process, I'm very skeptical of the value of doing so. Unless there is solid evidence that input from the sales force has improved the forecast (to a degree commensurate with the cost of engaging them), we are wasting their time -- and squandering company resources.

I would much rather have my sales people out playing golf, building relationships with customers, and actually generating revenue. We didn't hire them for their skills at forecasting -- we hired them to sell.

A Counter-Argument

John Hughes of Silvon provided a counter-argument in a comment on last week's The BFD post:

I see things somewhat differently.  Sales people have a responsibility to themselves and their company to try and predict sales for many good reasons.  Mostly to help balance company assets (inventory) that drive customer service.  Engaging sales people directly with an on line tool ensures their commitment to the numbers and publicly displays the results for all to see and grade them.  For example if we treated development people like you suggest treating sales people then we would never get any project completion dates and product planning would go south.  I have managed sales people for over 20 years and they like to take the path of least resistance (don't bother with forecasting) but frankly they have the same responsibility as the rest of us to commit to a task and then complete it.  Their are some fine tools out there for collecting sales inputs such as Silvon's Stratum Viewer product when combined with the Stratum Forecast engine the solution provides a complete sales forecasting and planning solution.

John provides a thoughtful argument, but let me explain why I'm still not convinced.

In the environment he describes, the sales people are not doing what I would call "forecasting" (i.e. providing an unbiased best guess at what is really going to happen), but rather committing to a number. If there is a reward for beating the commitment, and a penalty for failing to achieve it, wouldn't the commitment be biased toward the low side? (Bias is easily determined by looking at history and seeing whether actuals are below, above (what I would expect), or about the same as the commitment.)
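
That history check is trivial to automate; here is a minimal sketch (illustrative names, where the commitments are whatever numbers the sales force signed up to):

```python
import numpy as np

def commitment_bias(actuals, commitments):
    """Compare actuals to committed numbers: a persistently positive gap
    suggests commitments are being sandbagged toward the low side."""
    actuals = np.asarray(actuals, dtype=float)
    commitments = np.asarray(commitments, dtype=float)
    gap = actuals - commitments
    return {
        "periods_above_commitment": int((gap > 0).sum()),
        "periods_below_commitment": int((gap < 0).sum()),
        "mean_gap_pct": float(np.mean(gap / actuals)),   # > 0 suggests commitments biased low
    }
```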

Similarly for project scheduling. If there are negative consequences to running over time and over budget, then wouldn't any rational project manager negotiate as much time and budget as possible before committing to anything?

When it comes to forecasting, can you reasonably expect to get an honest answer out of anyone?

I'm all for quotas, targets, budgets, commitments, or whatever else we have for informational and motivational use -- to keep the organization working hard and moving ahead. But these are not the same as an "unbiased best guess at what is really going to happen" which is what the forecast represents.

It is perfectly appropriate to have all these different numbers floating around the organization -- as long as we recognize their different purposes. (The fatal flaw of "one number forecasting" is that it reduces quotas, targets, budgets, commitments, and forecasts to a single number -- but they are meant to be different!)

There may be situations where sales force input can improve the forecast. Let's take a look at these situations next time, and see how to gather that input efficiently.


To gather forecasting input from the sales force -- or not?

A recurring question among business forecasters is how to incorporate input from the sales force. We discussed this last year in The BFD post "Role of the sales force in forecasting." But the question came up again this week in the Institute of Business Forecasting discussion group on LinkedIn, where Leona O of Flextronics asked:

My company is using Excel to do Sales Forecasting on a monthly basis, I am looking for a solution to automate the front part where sales people will input their numbers directly in the system (instead of compiling different Excel spreadsheets currently). Our forecast period is for 36 months, please recommend a software that could automate this function.

The story of O is a familiar one. The good news is that when a company is ready to move ahead from Excel, there are plenty of forecasting software choices available. These include my personal favorites: SAS Forecast Server (for large-scale automatic forecasting) and SAS Forecasting for Desktop (which provides the same automatic forecasting capabilities for small and midsize organizations).

However, before automating the collection of input from the sales people, we first need to ask whether this is even advisable.

If it is known that the sales force inputs are adding value to the forecasting process (by making the forecast more accurate and less biased), then making it faster and less cumbersome to provide their inputs could be a very good thing. If a company hasn't already done this (and I don't know the particular circumstances at Flextronics), I would suggest they first gather data and determine whether the sales force inputs are adding value.

There are reasons why they may not be.

In addition to being untrained and unskilled (and generally uninterested) in forecasting, sales people are notoriously biased in their input. During quota setting time they will forecast low, to have easier to achieve targets. Otherwise they may forecast high, to make sure there is plenty of supply available to fill orders.

Also, if you ask someone whether they are going to hit their quota, the natural response is "Yes!" -- whether they believe they'll hit their quota or not. Why get yelled at twice (first for admitting you won't hit your quota, and then again at period end when you don't hit it), when you can just say yes you'll hit it, and then only get yelled at once (at period end when you don't hit it). Steve Morlidge made a similar point at his International Symposium on Forecasting presentation in 2012.

If you find that your sales people are not improving the forecast, then you'll make them very happy -- and give them more time to sell -- by no longer requiring their forecasting input. So rather than implement new software to gather sales input, it may be simpler, cheaper, and ultimately much more effective, to stop gathering their input. Instead, implement software to generate a better statistical forecast at the start of the process, and minimize reliance on costly human intervention.
