Q&A with Steve Morlidge of CatchBull (Part 3)

Q: How important is it to recognize real trend change in noisy data?

A: It is very important. In fact the job of any forecast algorithm is to predict the signal – whether it is trending or not – and to ignore the noise.

Unfortunately this is not easy to do because the trend can be unstable and the noise confuses the situation. In fact one of the most common problems with forecasting algorithms is that they ‘overfit’ the data, which means that they mistake noise for signal.

This is the reason why experts recommend that you do not use R2 or other ‘in sample’ fitting statistics as a way of selecting which algorithm to use. The only way to know whether you have made the right choice is by tracking performance after the event, ideally using a statistic like RAE which allows for forecastability and helps you make meaningful comparisons and judgements about the level of forecast quality.

Q: From what you've seen, does RAE tend to be stable over time?

A: No it does not.

In my experience performance can fluctuate over time. This might be the result of a change in the behaviour of the data series, but the most common cause is a change in the quality of judgemental interventions. In particular I commonly see changes in the level of bias – systematic over- or under-forecasting.

A simple average will not pick this up, however. You need to plot a moving average of RAE or, even better, an exponentially smoothed average, as this takes in all available data but gives more weight to the most recent.
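A minimal sketch of such a smoothed tracking signal in Python (the smoothing constant `alpha=0.2` is an illustrative assumption, not a recommended value):

```python
def smoothed_rae(period_raes, alpha=0.2):
    """Exponentially smoothed average of per-period RAE values: all history
    is used, but recent periods get progressively more weight."""
    smoothed = period_raes[0]
    trace = [smoothed]
    for r in period_raes[1:]:
        smoothed = alpha * r + (1 - alpha) * smoothed
        trace.append(smoothed)
    return trace

# A deterioration in period-level RAE shows up gradually in the smoothed track
print(smoothed_rae([0.7, 0.7, 0.8, 1.1, 1.2]))
```

A simple mean of the same five values would mask when the deterioration began; the smoothed track rises as soon as recent periods worsen.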

Q: All of this assumes that there are no outside attributes available upon which to base a forecast. This all applies then only to situations in which the forecaster has only past data on the item itself to be forecasted. Is this correct?

A: The main technique used by supply chain forecasters is time series forecasting, whereby an algorithm is used to try to identify the signal in demand history, which is then extrapolated into the future – on the assumption that the pattern will continue. The implicit assumption here is that the forecast (the dependent variable) is a product of the signal (the independent variable); because there is only one independent variable, this approach is termed ‘univariate’.

There are other types of forecasting which use other variables in addition to or instead of the history of the time series. These are known as ‘multivariate’ techniques since there is more than one independent variable.

Irrespective of the approach used, the limits of forecastability still apply. There is no good reason why any forecasting method should consistently fail to beat the naïve forecast (have an RAE in excess of 1.0). The limits of what is forecastable are a product of the level of noise (which can’t be forecast by any method) compared to the level and nature of change in the signal.

Different techniques – univariate or multivariate – will be more or less successful in forecasting the signal but the same constraints apply to all.


Q&A with Steve Morlidge of CatchBull (Part 2)

Q: Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is) or do you think this will confuse stakeholders?

A: That depends on what is meant by stakeholders, and what is meant by forecast accuracy.

If stakeholders means those people who contribute to the forecast process by providing the market intelligence that drives the judgemental adjustments made to the statistical forecast, the answer is a resounding ‘yes’…at least in principle. Many forecasts are plagued with bias, and a common source of bias infection is overly pessimistic or (more commonly) optimistic input from those supplying ‘market intelligence’.

Also, those responsible for making the decision to invest in forecasting processes and software need to know what kind of return it has generated.

But all too often impenetrable and meaningless statistics are foisted on stakeholders using measures that are difficult for laymen to interpret and provide no indication of whether the result is good or bad.

This is why I strongly recommend using a measure such as RAE which, by clearly identifying whether and where a forecast process has added value, is easy to understand and meaningful from a business perspective.

Q: When you say RAE needs to be calculated at the lowest level do you mean by item, or even lower such as by item shipped by plant X to customer Y?

A: Forecasting demand for the Supply Chain, and replenishing stock based on this forecast, is only economically worthwhile if it is possible to improve on the simple strategy of holding a defined buffer (safety) stock and replenishing it to make good any withdrawals in the period.

What implications does this have for error measurement?

First, this simple replenishment strategy is arithmetically equivalent to using a naïve forecast (assuming no stock outs), and the level of safety stock needed to meet a given service level is determined by the level of errors (all other things being equal). So if a forecast has an RAE below 1.0, it means that the business needs to hold less stock.

The second is the level of granularity at which error should be measured. Since the goal is to have the right amount of stock at the right place at the right time, error should be measured at a location/unique stock item level, in buckets which (as far as possible) match the frequency at which stock is replenished. Measuring error across all locations will understate effective forecast error, since having the right amount of stock in the wrong place is costly. And while it might be helpful to identify the source of error if different customers are supplied from the same stock, measuring error at a customer level will overstate effective error.

Q: What are your thoughts on using another model for benchmarking forecast error besides the naive model?

A: Relative Absolute Error (RAE) is a measure which compares the average absolute forecast error with that from a simple ‘same as last period’ naïve forecast. This approach has the advantage of simplicity and ease of interpretation. It is easy to calculate and, since the naïve forecast is the crudest forecasting method conceivable, a failure to beat it is something that is very easy to understand – it is baaad!
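As a sketch of the calculation (assuming aligned lists of actuals and forecasts, with the naïve forecast for each period taken as the prior period's actual):

```python
def rae(actuals, forecasts):
    """Relative Absolute Error: mean absolute forecast error divided by the
    mean absolute error of a 'same as last period' naive forecast."""
    n = len(actuals) - 1  # periods 2..n: the naive forecast needs a prior actual
    mae_forecast = sum(abs(a - f) for a, f in zip(actuals[1:], forecasts[1:])) / n
    mae_naive = sum(abs(a - p) for a, p in zip(actuals[1:], actuals[:-1])) / n
    return mae_forecast / mae_naive

# RAE below 1.0 means the forecast beat the naive benchmark
print(rae([100, 120, 110, 130, 125], [105, 115, 112, 128, 126]))
```

The numbers here are invented purely for illustration.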

But the naïve forecast is more than a mere benchmark.

The ultimate economic justification for forecasting is that it is more efficient than a simple replenishment strategy whereby stock is maintained at a constant level by making good the sales made in the prior period. A naïve forecast is mathematically equivalent to this strategy, so the degree to which a forecast improves on it is a measure of how much value the forecast has added. So RAE, where the naïve forecast provides the denominator in the equation, is economically meaningful in a way that would not be possible if another method were chosen.

Secondly, the naïve forecast error reflects the degree of period to period volatility. This means that it is a good proxy measure for the forecastability of the data set and, given certain assumptions, it is possible to make theoretical inferences about the minimum level of forecast error. As a result a specific RAE provides an objective measure of how good a forecast really is in a way that is not possible if another forecast method were used to provide the denominator in the equation. In that case the result would say as much about the performance of the benchmark method as it does about the performance of the actual method…and it would be impossible to disentangle the impact of one from the other.


Q&A with Steve Morlidge of CatchBull (Part 1)

In a pair of articles published in Foresight, and in his SAS/Foresight webinar "Avoidability of Forecast Error" last November, Steve Morlidge of CatchBull laid out a compelling new approach on the subject of "forecastability."

It is generally agreed that the naive model (i.e. random walk or "no change" model) provides the "worst case" for how your forecasting process should perform. With the naive model, your last observed value becomes your forecast for the future. (So if you sold 100 units last week your forecast for this week is 100. If you sell 150 this week, your forecast for next week becomes 150, and so on.)

The naive model generates forecasts with essentially no effort and no cost. So if your forecasting process forecasts worse than the naive model, something is terribly wrong. You might need to stop whatever it is you are doing and just use the naive model!
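In code, the "no change" model really is effortless; a minimal sketch:

```python
def naive_forecasts(history):
    """One-step-ahead naive forecasts: each period's forecast is simply
    the previous period's actual ('no change' / random walk model)."""
    return history[:-1]  # forecast for period t is the actual of period t-1

# Sold 100 last week -> forecast 100 for this week; sold 150 -> forecast 150 next
print(naive_forecasts([100, 150, 130]))  # [100, 150]
```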

While the naive model provides what should be the "worst case" forecasting performance, a more difficult question is what is the best case? What is the best forecast accuracy we can reasonably expect for a given demand pattern? In other words, what forecast error is avoidable? I spelled out Steve's argument in a four part blog series last summer (Part 1, Part 2, Part 3, Part 4), and you can watch his webinar on-demand. He has also published a new article appearing in the Spring 2014 issue of Foresight.

In response to several questions we received about his material, Steve has graciously provided written answers which we'll share over a new series of posts.


Q: How does this naive forecast error work if your historic data has constant seasonality?

A: In theory it is possible to achieve lower RAE (Relative Absolute Error) the greater the change in the signal from period to period. But in practice – usually – the more changeable the signal the more difficult it is to forecast. For this reason we find it is difficult to beat an RAE of 0.5 and it is very difficult to consistently beat 0.7.

The one exception to this general rule is seasonality. This is an example of a change in the signal which is often relatively easy to forecast. For this reason, businesses which are predictably seasonal in nature often have an average RAE that is marginally better than the norm. Examples are businesses which sell more around Christmas and other public holidays. A business which sells ice cream, for instance, is clearly seasonal, but its seasonality is not predictable, and so we wouldn't expect it to achieve a better score than the norm.

Despite sometimes having an average RAE that is better than the norm, predictably seasonal businesses usually still have a very high proportion of their portfolio with RAE in excess of 1.0.

As a result I believe the RAE metric is valid and useful even for seasonal businesses but it may be that for those products that are predictably seasonal your RAE targets should be slightly more stretching – perhaps by +/- 0.2 RAE points.

Q: What is a good test in Excel to determine if a data series is a random walk?

A: If a data series approximates a random walk it is impossible to forecast in the conventional sense; the naïve (same as last period) forecast is the optimal forecast. It is likely therefore that many forecasters are wasting a lot of time and energy trying to forecast the unforecastable and destroying value in the process.

It is very difficult to spot the existence of a random walk, however; it is difficult to distinguish signal from noise, and very often a random walk can look like a trend. For instance, stock market price movements are very close to a random walk, but there is an industry of chartists who believe they can detect patterns in the data and make predictions based on them.

Randomness is a difficult concept from a mathematical point of view – it is simply the absence of pattern. It is impossible to prove that a data sequence is random – you can only state that you cannot find a pattern; and there are potentially an infinite number of patterns.

From a practical point of view the best thing to do is to compare the naïve forecast error (the ‘same as last period’ or ‘naïve 1’ method) to that from a handful of simple forecast processes: simple smoothing with and without a trend, and perhaps a naïve forecast based on prior year actuals (‘naïve 2’) as a simple seasonal forecasting method. If all of these fail to beat the naïve forecast, there is a reasonable chance the series is ‘unforecastable’ from a practical point of view, and the best strategy might be to use the naïve forecast, particularly if the item is a small one.
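A rough sketch of this practical screen in Python (simple exponential smoothing stands in for the "handful of simple forecast processes"; the `alpha=0.3` smoothing constant is an illustrative assumption, not a fitted value):

```python
def mae(errors):
    """Mean absolute error of a list of forecast errors."""
    return sum(abs(e) for e in errors) / len(errors)

def naive1_mae(series):
    """'Same as last period' (naive 1) forecast errors."""
    return mae([a - p for a, p in zip(series[1:], series[:-1])])

def ses_mae(series, alpha=0.3):
    """One-step-ahead errors from simple exponential smoothing."""
    level, errs = series[0], []
    for actual in series[1:]:
        errs.append(actual - level)  # the forecast for this period was `level`
        level = alpha * actual + (1 - alpha) * level
    return mae(errs)

def looks_unforecastable(series):
    """Practical screen: if simple smoothing cannot beat naive 1,
    the series may be close to a random walk."""
    return ses_mae(series) >= naive1_mae(series)

print(looks_unforecastable([10, 12, 9, 11, 10, 12, 9, 11]))  # False: smoothing beats naive here
```

A fuller screen would also try smoothing with a trend and the prior-year "naive 2" method, exactly as the answer describes.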


When to engage the sales force in forecasting

Engaging the sales force in forecasting sounds like a good idea, doesn't it?

Compared to everyone else in the organization, don't sales people have the closest contact with our customers? Therefore, shouldn't they know better than anyone else our customers' future behavior?

There are at least three problems with assuming a priori that engaging the sales force will improve forecasting:

  1. Do sales people really know their customers' future behavior?
  2. Do sales people have any motivation to give an honest forecast?
  3. Does improving customer level forecasts even matter?

For sake of argument, let's assume affirmative answers to 1 and 2 -- that the sales force has knowledge of their customers' future behavior, and provides an honest forecast of that behavior. Can better customer level forecasts help us?

For maintaining appropriate inventory and customer service (order fill) levels, we want a good forecast by Item / Location (where location is the point of distribution, e.g. a Distribution Center (DC)). As long as we have the right inventory by Item / DC, we don't have to care what individual customers are demanding.

If volume for an Item through the DC is dominated by one customer (or a small number of customers), then it could be helpful to have more accurate Item / Customer forecasts. Improving the forecasts for these dominant customers would likely improve the forecast that matters -- the Item / DC forecast.

On the other hand, suppose the DC fills orders for dozens or hundreds or thousands of customers, none of which is more than a small percentage of total demand at that DC. In this situation, positive and negative errors in the Item / Customer forecasts will tend to cancel each other out when aggregated to Item / DC level. So even if you can improve Item / Customer Forecasts, this is unlikely to make much improvement in the forecast that matters -- the Item / DC forecast.
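A toy illustration of this cancellation effect (all figures invented for illustration):

```python
# Five customers served from one DC; forecasts miss each customer a little
customer_actuals   = [12, 8, 11, 9, 10]
customer_forecasts = [10, 10, 10, 10, 10]

# Total absolute error at the Item / Customer level
customer_level_error = sum(abs(a - f)
                           for a, f in zip(customer_actuals, customer_forecasts))

# Error of the aggregate Item / DC forecast: positives and negatives offset
dc_level_error = abs(sum(customer_actuals) - sum(customer_forecasts))

print(customer_level_error)  # 6
print(dc_level_error)        # 0
```

Here each customer forecast is wrong, yet the DC-level forecast (the one that drives inventory) is exactly right.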

[Note that some organizations utilize Customer level forecasts for account planning, setting quotas for sales people, etc. So there may be other reasons you want to do Customer level forecasting. Just realize that improving the Item / DC forecast may not be one of them.]

Efficiently Gathering Sales Force Input

If we are going to engage the sales force in forecasting, we ought to at least do this efficiently. Time spent forecasting is time taken away from building relations with customers -- and selling.

One way to gather input was suggested by Stefan de Kok of ToolsGroup:

...there is huge value in getting input from humans, sales reps included. That input however should be market intelligence, not adjustments to quantities. For example, let the sales rep input that their account is running a promotion and then let the system determine what the quantity impact is. Not only will the uplift become more accurate quickly, but also the baseline will improve. Ultimately it becomes a lower effort (but not zero) for the sales people and their forecasts become much more reliable. (source: LinkedIn discussion group.)

The idea is to minimize the time and effort from the sales force, requiring they provide information (promotional plans, new stores (or store closings), more (or less) shelf space, etc.) that can be put into the statistical forecasting models. But not requiring them to come up with specific numerical forecasts.

As always, the value added by these efforts needs to be measured -- are they making the forecast more accurate? If so, and your software can take advantage of the inputs, this is one approach.

Another approach is to provide the sales force with Item / Customer forecasts (generated by your forecasting software). Then have them make overrides when they feel they know something that wasn't already incorporated into the statistical forecasts.

This approach can be wildly ineffective and inefficient, when sales people are overriding every forecast, and not making them better. (Improvement (or not) is easily measured by FVA.)

The key is to train the sales people to only make changes when there is really good reason to (and otherwise, to just leave the statistical forecast intact). Eric Wilson of Tempur-Sealy achieved this by appealing to the competitive nature of sales people, urging them to "beat the nerd in the corner" and only make changes that they are certain will improve the nerd's statistical forecast.

Revenge of the Nerds?

There may be good reasons to engage the sales force in forecasting, we just can't assume this is always the case. When there are good reasons, focus on the efficiency and effectiveness of the inputs -- minimizing the amount of effort required to provide inputs, and measuring FVA of the results.




Engaging the sales force: Forecasts vs. Commitments

Whether to engage sales people in the forecasting process remains hotly debated on LinkedIn.

While I have no objection in principle to sales people being involved in the process, I'm very skeptical of the value of doing so. Unless there is solid evidence that input from the sales force has improved the forecast (to a degree commensurate with the cost of engaging them), we are wasting their time -- and squandering company resources.

I would much rather have my sales people out playing golf, building relationships with customers, and actually generating revenue. We didn't hire them for their skills at forecasting -- we hired them to sell.

A Counter-Argument

John Hughes of Silvon provided a counter-argument in a comment on last week's The BFD post:

I see things somewhat differently. Sales people have a responsibility to themselves and their company to try and predict sales for many good reasons. Mostly to help balance company assets (inventory) that drive customer service. Engaging sales people directly with an on line tool ensures their commitment to the numbers and publicly displays the results for all to see and grade them. For example if we treated development people like you suggest treating sales people then we would never get any project completion dates and product planning would go south. I have managed sales people for over 20 years and they like to take the path of least resistance (don't bother with forecasting) but frankly they have the same responsibility as the rest of us to commit to a task and then complete it. There are some fine tools out there for collecting sales inputs, such as Silvon's Stratum Viewer product which, when combined with the Stratum Forecast engine, provides a complete sales forecasting and planning solution.

John provides a thoughtful argument, but let me explain why I'm still not convinced.

In the environment he describes, the sales people are not doing what I would call "forecasting" (i.e. providing an unbiased best guess at what is really going to happen), but rather committing to a number. If there is a reward for beating the commitment, and a penalty for failing to achieve it, wouldn't the commitment be biased toward the low side? (Bias is easily determined by looking at history and seeing whether actuals are below, above (what I would expect), or about the same as the commitment.)
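Such a bias check takes only a few lines; a sketch with hypothetical figures:

```python
def bias_check(actuals, commitments):
    """Mean signed difference between actuals and commitments.
    Positive: actuals consistently run above the commitment, suggesting
    padded, low-side numbers; negative suggests over-promising."""
    return sum(a - c for a, c in zip(actuals, commitments)) / len(actuals)

# If commitments are set to be beatable, actuals will sit above them
print(bias_check([110, 120, 105], [100, 100, 100]))  # positive: low-side bias
```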

Similarly for project scheduling. If there are negative consequences to running over time and over budget, then wouldn't any rational project manager negotiate as much time and budget as possible before committing to anything?

When it comes to forecasting, can you reasonably expect to get an honest answer out of anyone?

I'm all for quotas, targets, budgets, commitments, or whatever else we have for informational and motivational use -- to keep the organization working hard and moving ahead. But these are not the same as an "unbiased best guess at what is really going to happen" which is what the forecast represents.

It is perfectly appropriate to have all these different numbers floating around the organization -- as long as we recognize their different purposes. (The fatal flaw of "one number forecasting" is that it reduces quotas, targets, budgets, commitments, and forecasts to a single number -- but they are meant to be different!)

There may be situations where sales force input can improve the forecast. Let's take a look at these situations next time, and see how to gather that input efficiently.


To gather forecasting input from the sales force -- or not?

A recurring question among business forecasters is how to incorporate input from the sales force. We discussed this last year in The BFD post "Role of the sales force in forecasting." But the question came up again this week in the Institute of Business Forecasting discussion group on LinkedIn, where Leona O of Flextronics asked:

My company is using Excel to do Sales Forecasting on a monthly basis, I am looking for a solution to automate the front part where sales people will input their numbers directly in the system (instead of compiling different Excel spreadsheets currently). Our forecast period is for 36 months, please recommend a software that could automate this function.

The story of O is a familiar one. The good news is that when a company is ready to move ahead from Excel, there are plenty of forecasting software choices available. These include my personal favorites: SAS Forecast Server (for large-scale automatic forecasting) and SAS Forecasting for Desktop (which provides the same automatic forecasting capabilities for small and midsize organizations).

However, before automating the collection of input from the sales people, we first need to ask whether this is even advisable.

If it is known that the sales force inputs are adding value to the forecasting process (by making the forecast more accurate and less biased), then making it faster and less cumbersome to provide their inputs could be a very good thing. If a company hasn't already done this (and I don't know the particular circumstances at Flextronics), I would suggest they first gather data and determine whether the sales force inputs are adding value.

There are reasons why they may not be.

In addition to being untrained and unskilled (and generally uninterested) in forecasting, sales people are notoriously biased in their input. During quota setting time they will forecast low, to have easier-to-achieve targets. Otherwise they may forecast high, to make sure there is plenty of supply available to fill orders.

Also, if you ask someone whether they are going to hit their quota, the natural response is "Yes!" -- whether they believe they'll hit their quota or not. Why get yelled at twice (first for admitting you won't hit your quota, and then again at period end when you don't hit it), when you can just say yes you'll hit it, and then only get yelled at once (at period end when you don't hit it). Steve Morlidge made a similar point at his International Symposium on Forecasting presentation in 2012.

If you find that your sales people are not improving the forecast, then you'll make them very happy -- and give them more time to sell -- by no longer requiring their forecasting input. So rather than implement new software to gather sales input, it may be simpler, cheaper, and ultimately much more effective, to stop gathering their input. Instead, implement software to generate a better statistical forecast at the start of the process, and minimize reliance on costly human intervention.


Upcoming forecasting events

SAS/Foresight Webinar Series

On Thursday February 20, 11am ET, join Martin Joseph, Managing Owner of Rivershill Consultancy for this quarter's installment of the SAS/Foresight Webinar Series.


Martin will be presenting "The Forecasting Mantra" -- a template that identifies the elements required to achieve sustained, world-class forecasting and planning excellence. He'll also provide a diagnostic tool for assessing the quality and efficiency of your current forecasting and planning processes, and help you design new ones.

Register now. And click here to get a sneak peek of the webinar.

Institute of Business Forecasting Conference

A number of interesting presentations are on the schedule at the annual IBF Supply Chain Forecasting Conference in Scottsdale, AZ, running this Sunday February 23 - 25. Some particular sessions that cover Forecast Value Added (FVA) analysis are:

  • "Segmentation: Roadmap to Configurable Demand" (Eric Wilson, Tempur Sealy International). Covers the use of segmentation using attributes and FVA to create differentiated demand approaches.
  • "The Art and Science of Forecasting: When to Use Judgment?" (Jonathon Karelse, NorthFind Partners and Aaron Simms, Molex). Illustrates use of the "Forecastability Matrix" to guide statistical versus judgmental inputs, and how to track forecast accuracy and FVA.
  • "Applying Forecast Value Added at Cardinal Health" (Scott Finley, Cardinal Health and me). I'll give a 25 minute primer on FVA, and Scott will talk about what's going on at Cardinal Health.

Other great learning and networking opportunities include the "Business Forecasting and Planning Forum" -- a moderated panel discussion focusing on controversial issues in the field on Monday 8:15am. Also, Journal of Business Forecasting editor Chaman Jain is speaking on "How to Maximize Revenue and Profit from New Products," and longtime IBF contributor Scott Roy (Wells Enterprises) is speaking on "Improving Demand Planning Performance."

On Monday afternoon 4-5, my colleague Charlie Chase and I will be moderating Roundtable Discussion topics (mine is on "Worst Practices in Forecasting"). Charlie and I will also be signing complimentary copies of our books at the SAS exhibit booth, so be sure to visit us.

Finally on Tuesday morning, 10-10:30am, SAS will be launching a new offering, with a product demonstration by Ed Katz. We are all hoping to see you there.


The miracle of combining forecasts

Life gifts us very few miracles. So when a miracle happens, we must be prepared to embrace it, and appreciate its worth.

In 1947, in New York City, there was the Miracle on 34th Street.

In 1980, at the Winter Olympics, there was the miracle on ice.

In 1992, at the Academy Awards, there was the miracle of Marisa Tomei winning the Best Supporting Actress Oscar.

And in 2014, on Wednesday afternoon this week, there was the miracle of getting off the SAS campus in the middle of winter storm Pax.

There are also those "officially recognized" miracles that can land a person in sainthood. These frequently involve images burned into pancakes or grown into fruits and vegetables (e.g. the Richard Nixon eggplant). While I have little chance of becoming a saint, I have witnessed a miracle in the realm of business forecasting: the miracle of combining forecasts.

A Miracle of Business Forecasting

Last week's installment of The BFD highlighted an interview with Greg Fishel, Chief Meteorologist at WRAL, on the topic of combined or "ensemble" models in weather forecasting. In this application, multiple perturbations of initial conditions (minor changes to temperature, humidity, etc.) are fed through the same forecasting model. If the various perturbations deliver wildly different results, this indicates a high level of uncertainty in the forecast. If the various perturbations deliver very similar results, the weather scientists consider this reason for good confidence in the forecast.

In Fishel's weather forecasting example, they create the ensemble forecast by passing multiple variations of the input data through the same forecasting model. This is different from typical business forecasting, where we feed the same initial conditions (e.g. a time series of historical sales) into multiple models. We then take a composite (e.g. an average) of the resulting forecasts, and that becomes our combined or ensemble forecast.
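A minimal sketch of the equal-weight combination just described (the forecast numbers are made up for illustration):

```python
def combine_equal_weight(forecasts_by_model):
    """Equal-weight combination: average the forecasts from several models,
    period by period. Input: one list of forecasts per model."""
    n_models = len(forecasts_by_model)
    return [sum(period) / n_models for period in zip(*forecasts_by_model)]

# Three models' forecasts for the same three future periods
print(combine_equal_weight([[100, 110, 120],
                            [ 90, 105, 125],
                            [110, 100, 130]]))  # [100.0, 105.0, 125.0]
```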

In 2001, J. Scott Armstrong published a valuable summary of the literature in "Combining Forecasts" in his Principles of Forecasting. Armstrong's work is referenced heavily in a recent piece by Graefe, Armstrong, Jones, and Cuzan in the International Journal of Forecasting (30 (2014) 43-54). Graefe et al. remind us of the conditions under which combining is most valuable, and illustrate with an application to election forecasting. Since I am not fond of politics or politicians, we'll skip the elections part, but look at the conditions where combining can help:

  • "Combining is applicable to many estimation and forecasting problems. The only exception is when strong prior evidence exists that one method is best and the likelihood of bracketing is low" (p.44). ["Bracketing" occurs when one forecast was higher than the actual, and one was lower.] This suggests that combining forecasts should be our default method. We should only select one particular model when there is strong evidence it is best. However in most real-world forecasting situations, we cannot know in advance which forecast will be most accurate.
  • Combine forecasts from several methods. Armstrong recommended using at least five forecasts. These forecasts should be generated using methods that adhere to accepted forecasting procedures for the given situation. (That is, don't just make up a bunch of forecasts willy-nilly.)
  • "Combining forecasts is most valuable when the individual forecasts are diverse in the methods used and the theories and data upon which they are based" (p.45). Such forecasts are likely to include different biases and random errors -- that we expect would help cancel each other out.
  • The larger the difference in the underlying theories or methods of component forecasts, the greater the extent and probability of error reduction through combining.
  • Weight the forecasts equally when you combine them. "A large body of analytical and empirical evidence supports the use of equal weights" (p.46). There is no guarantee that equal weights will produce the best results, but this is simple to do, easy to explain, and a fancier weighting method is probably not worth the effort.
  • "While combining is useful under all conditions, it is especially valuable in situations involving high levels of uncertainty" (p.51).

So forget about achieving sainthood the hard way. (If burning a caricature of Winston Churchill in a grilled cheese sandwich were easy, I'd be Pope by now). Instead, deliver a miracle to your organization the easy way -- by combining forecasts.

[For further discussion of combining forecasts in SAS forecasting software, see the 2012 SAS Global Forum paper "Combined Forecasts: What to Do When One Model Isn't Good Enough" by my colleagues Ed Blair, Michael Leonard, and Bruce Elsheimer.]


WRAL weather forecaster more than a pretty face

I've always thought of TV weather forecasters as just talking heads. Sure they look pretty, waving hands in front of fancy green-screen graphics, reading poetically off the teleprompters, and standing fearlessly in the midst of the worst storm conditions. But could we expect man candy as tart as Al Roker and Willard Scott to actually know anything about science and math?

Well, maybe not Al and Willard. But Greg Fishel, Chief Meteorologist at WRAL in Raleigh, is bringing the goods.

In a recent post on the WRAL WeatherCenter Blog by Nate Johnson, Fishel is interviewed on the topic of ensemble forecasting. This 10 minute video is worth a look.

Deterministic vs. Ensemble Weather Forecasting Models

First Fishel describes the traditional "deterministic" weather model. In this approach, observed initial conditions (temperature, pressure, etc., from various observation points) are fed into a computer model. These initial conditions provide the current state of the atmosphere, from which the model derives the state of the atmosphere at some point (e.g. 7 days) in the future.

Everyone realizes that we can't expect a perfect prediction of next week's weather, which is what the deterministic model purports to deliver. In fact, we don't even have perfect knowledge of the current state of the atmosphere, since we have only a finite number of weather monitors reporting conditions at particular locations.

The ensemble approach, as Fishel explains, takes the initial condition data, and perturbs the data points (e.g. slightly changing the temperature and pressure at each point, in various ways), creating an ensemble of perhaps 50 sets of initial condition data. Each variation of initial conditions is run through the same model, and the resulting solutions are compared.

If all versions give essentially the same result a week out, this would imply that the atmosphere is not overly sensitive to small variations in initial conditions, and this would merit more confidence in the forecast.

If the different versions of input data resulted in wildly different forecasts, we might have much less confidence in our weather prediction.
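The ensemble idea can be sketched in a few lines. This is a toy illustration only: the "model" below is a made-up recurrence standing in for a real atmospheric model, and all numbers are invented.

```python
import random

random.seed(42)

def model(initial_temp, days=7):
    # Stand-in "deterministic model": a trivial recurrence, not real physics.
    temp = initial_temp
    for _ in range(days):
        temp = 0.9 * temp + 3.0
    return temp

# Perturb the observed initial condition to build an ensemble of 50 members,
# then run the same model on each member and compare the outcomes.
observed_temp = 20.0
ensemble = [observed_temp + random.gauss(0, 0.5) for _ in range(50)]
outcomes = [model(t) for t in ensemble]

# A small spread across outcomes suggests the forecast is not overly
# sensitive to initial conditions, meriting more confidence.
spread = max(outcomes) - min(outcomes)
print(round(spread, 2))
```

In a real ensemble system each member run is expensive, but the logic is the same: vary the inputs, rerun the model, and read confidence off the spread of the results.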

Application to Business Forecasting

What is happening here, which is a very good lesson for business forecasters as well, is acknowledgment that it is impossible to make a perfectly accurate forecast. As Fishel puts it, "I don't think there is anything wrong, in a highly uncertain situation, to be honest with the public and say 'we don't know how this will play out, but here are the most likely scenarios.'" Then as a consumer of the weather forecast, you can plan accordingly.

An indication of uncertainty is a valuable addition to the typical "point forecast" that just tells us one number. For example, telling your inventory planner that the demand forecast is for "100 +/- 100 units" might lead to a different inventory position than a forecast of "100 +/- 10 units."
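The inventory consequence of that interval can be made concrete. A hedged sketch, assuming demand is roughly normal and reading each "+/-" as about two standard deviations (the service level and figures here are assumptions for illustration, not from the post):

```python
from statistics import NormalDist

# Target a 95% chance of not stocking out in the period.
service_level = 0.95
z = NormalDist().inv_cdf(service_level)  # ~1.645

def order_up_to(mean, sigma):
    # Order-up-to level = mean demand + safety stock (z * sigma).
    return mean + z * sigma

wide = order_up_to(100, 100 / 2)   # "100 +/- 100" read as sigma ~ 50
narrow = order_up_to(100, 10 / 2)  # "100 +/- 10"  read as sigma ~ 5
print(round(wide), round(narrow))  # 182 108
```

The point forecast is identical (100 units) in both cases, yet the wider interval calls for roughly 70% more stock, which is exactly why the interval is valuable information for the planner.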

So forget about Al, Willard, and the long list of celebrities* who served up the weather at some point in their careers. Not only is Greg Fishel good looking enough to land a nightly gig on a mid-market CBS affiliate, he can teach us a thing or two about forecasting!


*Including David Letterman, Pat Sajak, and Raquel Welch.


IBF Scottsdale: FVA at Cardinal Health

Where is global warming when you need it?

Throughout much of the southeast, life has been at a standstill since midday yesterday, when 2" of snow and 20°F temperatures brought civilization to its knees. If your life, or at least your forecasting career, is at a similar standstill, make plans to join us February 23-25 for the Institute of Business Forecasting's Supply Chain Forecasting Conference in Scottsdale, AZ.

February is a great time to be in Arizona, with beautiful weather and the rattlesnakes still in hibernation. The IBF event offers a full day Fundamentals of Demand Planning & Forecasting Tutorial by Mark Lawless on Sunday the 23rd, with three tracks of regular sessions Monday through Tuesday morning.

On Tuesday, 9:00-9:55am, join me and Scott Finley, Manager - Advanced Analytics at Cardinal Health, for a look at Forecast Value Added (FVA) analysis. From our abstract:

Forecast Value Added (FVA) is a metric for evaluating the performance of each step and each participant in the forecasting process. FVA compares process performance to essentially “doing nothing”—telling you whether your efforts are adding value by making the forecast better, or whether you are just making things worse! This presentation provides an overview of the FVA approach, showing how to collect the data and analyze results. It includes a case study on how the Advanced Analytics group at Cardinal Health is using FVA analysis to evaluate and improve their forecasting process. You will learn:

  • What data is needed and how to calculate the forecast value added metric
  • How to use FVA to evaluate each step of process performance, identify non-value adding activities, and eliminate process waste
  • How Cardinal Health is using FVA analysis to evaluate their forecasting efforts and guide the evolution of their forecasting process
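The FVA calculation itself is straightforward: compare the accuracy of a process step against the naive "do nothing" forecast (e.g. a random walk that repeats last period's actual). A minimal sketch with invented numbers, using MAPE as the accuracy metric:

```python
# Illustrative data only: four periods of actuals and two forecasts.
actuals = [100, 120, 110, 130]
naive = [95, 100, 120, 110]           # random walk: previous period's actual
stat_forecast = [102, 115, 112, 125]  # hypothetical statistical forecast

def mape(forecast, actual):
    # Mean absolute percentage error, in percent.
    return 100 * sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

# FVA = naive error minus process error: positive means the step adds value.
fva = mape(naive, actuals) - mape(stat_forecast, actuals)
print(round(fva, 1))  # 8.6 (percentage points of MAPE improvement over naive)
```

A negative FVA at any step of the process is the red flag: that step is making the forecast worse than doing nothing, and is a candidate for elimination.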

Our Gifts to You

As an event sponsor, SAS will be showing SAS Forecasting for Desktop, and will announce the release of a new module in the Demand-Driven Forecasting component of our Supply Chain Intelligence suite of offerings. My colleagues Charlie Chase and Ed Katz will demonstrate the new module on Tuesday 10:00-10:30am.

Early risers and the terminally hungry should stop by the SAS booth during Tuesday's breakfast hour (7:00-8:00am) to score a signed copy of The Business Forecasting Deal (the book). Then attend the demo session at 10:00 where Charlie will be signing copies of his latest book, Demand-Driven Forecasting (2nd Edition).

You'll have plenty of good reading for your flight home.

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.