Weather forecasts: deterministic, probabilistic or both?

In 1965's Subterranean Homesick Blues, Bob Dylan taught us:

You don't need a weatherman / To know which way the wind blows

In 1972's You Don't Mess Around with Jim, Jim Croce taught us:

You don't spit into the wind

By combining these two teachings, one can logically conclude that:

You don't need a weatherman to tell you where not to spit

But the direction of expectoration is the least of my current worries. What I really want to know is, WTH does it mean when the weatherman says there is a 70% chance of rain?

Greg Fishel, WRAL-TV Chief Meteorologist to the Rescue

Greg Fishel & Fan Club

I am pleased to announce that Greg Fishel, local celebrity and weather forecaster extraordinaire, will be speaking at the Analytics2014 conference at the Bellagio in Las Vegas (October 20-21). Greg (shown here with several groupies during a recent visit to the SAS campus) will discuss "Weather Forecasts: Deterministic, Probabilistic or Both?" Here is his abstract:

The use of probabilities in weather forecasts has always been problematic, in that there are as many interpretations of how to use probabilistic forecasts as there are interpreters. However, deterministic forecasts carry their own set of baggage, in that they often overpromise and underdeliver when it comes to forecast information critical for planning by various users. In recent years, an ensemble approach to weather forecasting has been adopted by many in the field of meteorology. This presentation will explore the various ways in which this ensemble technique is being used, and discuss the pros and cons of an across-the-board implementation of this forecast philosophy with the general public.

Please join us in Las Vegas, where I hope Greg will answer that other troubling weather question: can I drive a convertible fast enough in a rainstorm to not get wet? (Note: MythBusters determined this to be "Plausible but not recommended.")

 

 


Guest blogger: Len Tashman previews Summer 2014 issue of Foresight

 

Foresight editor Len Tashman

Our tradition from Foresight’s birth in 2005 has been to feature a particular topic of interest and value to practicing forecasters. These feature sections have covered a wide range of areas: the politics of forecasting, how and when to judgmentally adjust statistical forecasts, forecasting support systems, why we should (or shouldn’t) place our trust in the forecasts, and guiding principles for managing the forecasting process, to name a selection.

This 34th issue of Foresight presents the first of a two-part feature section on forecasting by aggregation, the use of product aggregates to generate forecasts and then reconcile the forecasts across the product hierarchy. Aggregation is commonly done for product and geographic hierarchies but less frequently implemented for temporal hierarchies, which depict each product’s history in data of different frequencies: daily, weekly, monthly, and so forth.

The entire section has been organized by Aris Syntetos, Foresight’s Supply Chain Forecasting Editor, who is interviewed as this issue's Forecaster in the Field.  In his introduction to the feature section, Aris writes that forecasting by aggregation can provide dramatic benefits to an organization, and that its merits need to be more fully recognized and supported by commercially available forecasting software.

The two articles in this section address temporal aggregation. In his own piece, Forecasting by Temporal Aggregation, Aris provides a primer on the key issues in deciding which data frequencies to forecast and how to aggregate/disaggregate from these to meet organizational requirements. Then, in Improving Forecasting via Multiple Temporal Aggregation, Fotios Petropoulos and Nikolaos Kourentzes offer a provocative new way to achieve reconciliation of daily through yearly forecasts while increasing the accuracy with which each frequency is forecast.

Also in this issue, we review two new books that will interest those who take the long-range view of forecasting, both historically and professionally: Fortune Tellers: The Story of America’s First Economic Forecasters by Walter A. Friedman, and In 100 Years: Leading Economists Predict the Future, edited by Ignacio Palacios-Huerta.

The first book, writes reviewer Ira Sohn, Foresight’s Editor for Long-Range Forecasting, provides a “historical overview of the pioneers of forecasting, of the economic environments in which they worked, and of the tool sets and methodologies they used to generate their forecasts.” These trailblazers include familiar names and some not so well known: Roger Babson, John Moody, Irving Fisher, C. J. Bullock, Warren Persons, Wesley Clair Mitchell, and, surprisingly, Herbert Hoover. Why these? According to Friedman, they were the first to envision the possibility that economic forecasting could be a field, or even a profession; that the systematic study of a vast range of statistical data could yield insights into future business conditions; and that a market existed in business and government for weekly economic forecasts.

For the second book, editor Palacios-Huerta invited some of the “best brains in economics” – three of them already awarded Nobel prizes – to speculate on the state of the world and material well-being in 2113. Here they address some big issues: how will population, climate, social and economic inequality, strife, work, and education change in the next 100 years, and what are our prospects for being better off then than we are now?

Our section on Forecasting Principles and Methods turns to Walt Disney Resorts’ revenue managers McKay Curtis and Frederick Zahrn for a primer on Forecasting for Revenue Management. The essential objective, they write, is to adjust prices and product/service availability to maximize firm revenue from a given set of resources. We see revenue management in operation most personally when we watch airline ticket-price movements and need to know hotel room availability. It is the forecasts that drive these systems, and the authors show how they are used in revenue management.

Our concluding article is the fourth and final piece in Steve Morlidge’s series of discussions on forecast quality, a term Steve defines as forecast accuracy in relation to the accuracy of naïve forecasts, which he measures by the Relative Absolute Error (RAE). The previous articles demonstrated realistic boundaries to the RAE: an RAE above 1.0 is a cause for change since the method employed is no more accurate than a naïve – no change from last period – forecast, while an RAE below 0.5 occurred very rarely and thus represented a practical lower limit to forecast error.

Steve now deals with the natural follow-up question of how we should be Using Relative Error Metrics to Improve Forecast Quality in the Supply Chain. What actions should the business take in response to particular values for the RAE? His protocols will help identify those items that form a “sweet spot” for efforts to upgrade forecasting performance.

Meet us in Columbus, Ohio in October

The lively learning environment, easy camaraderie among the presenters and delegates, and very practical program made last year's Foresight Practitioner Conference at  Ohio State University’s Fisher College of Business a great success. We're looking forward to this year's event, From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain. The FPC offers a unique blend of practitioner experience and scholarly research within a vendor-free environment — I hope we'll see you there!

Find more information on the program at www.forecasters.org/foresight/sop-2014-conference/. Registration is discounted $100 (to $1295) for Foresight subscribers and IIF members (use registration code 2014FORESIGHT when you register).


Upcoming forecasting conferences

We're entering the busy season for forecasting events, and here is the current calendar:

Analytics2014 - Frankfurt

The European edition of Analytics2014 kicks off tomorrow in Frankfurt, Germany. Five hundred of the leading thinkers and doers in the analytics profession meet up for two full days of interaction and learning. I'll try to get reports on the several forecasting-related presentations, including "The New Analytical Mindset in Forecasting" by Nestlé Iberia, and the latest on "Forecast Value Added and the Limits of Forecastability" by Steve Morlidge of CatchBull. (Steve's compelling work has been the subject of several BFD postings over the last year.)

For those (like me) who were unable to travel to Germany for this event, the US edition of Analytics2014 is back in Las Vegas (October 20-21).

APICS & IBF Best of the Best S&OP Conference - Chicago

For Sales & Operations Planning junkies, next week (June 12-13) brings the first of two upcoming S&OP-focused conferences.

Hosted by APICS and the Institute of Business Forecasting, Best of the Best speakers include two winners of IBF's Excellence in Business Forecasting & Planning award, Patrick Bower of Combe, and Alan Milliken of BASF. Also, Dr. Chaman Jain, editor of Journal of Business Forecasting, will speak on S&OP for new products.

IBF Webinar: Risk Mitigation and Demand Planning Segmentation Strategies

Even agoraphobics, shut-ins, those without travel budgets, and those under house arrest can get in on the forecasting knowledge fest, with this June 18 (11:00am EDT) IBF webinar by Eric Wilson of Tempur Sealy. Per the abstract:

To improve forecast accuracy, companies are constantly seeking demand planning practices and solutions that best utilize their planners' expertise. This presentation on product segmentation will provide the best use of segmentation using attributes and FVA to create differentiated demand planning approaches. The general purpose of segmenting planning strategies is to mitigate risk. The purpose of FVA is to minimize forecasting resources, while reaching accuracy goals. In this session, we will offer basic strategies and conceptual ideas that you may utilize in any industry. Plus, we will provide examples of how we applied these practices ourselves that led to reduction in inventory, while maintaining or improving service levels that our customers love.

International Symposium on Forecasting - Rotterdam

SAS is again a sponsor of the ISF (June 29 - July 2), which brings together the world's elite in forecasting research and practice.

To industry practitioners who may have shied away from the ISF as too "academic," I would encourage you to reconsider. I felt the same way before attending my first ISF (San Antonio 2005), yet found a vibrant interest in real-life forecasting issues, and the opportunity to build relationships with academic researchers. There is also strong interest in practitioner presentations. So even if it's too late to enjoy Rotterdam, start planning now for ISF 2015 in Riverside, California.

Foresight Practitioner Conference - Columbus, OH

This year's second big S&OP event is "From S&OP to Demand-Supply Integration" (October 8-9 at Ohio State University). Limited to 100 attendees, with no vendors permitted to present or exhibit, it is hosted by Len Tashman, editor of Foresight: The International Journal of Applied Forecasting. The day-and-a-half conference concludes with a 90-minute panel discussion moderated by Len.

 


A naive forecast is not necessarily bad

As we saw in Steve Morlidge's study of forecast quality in the supply chain (Part 1, Part 2), 52% of the forecasts in his sample were worse than a naive (random walk) forecast. This meant that over half the time, these companies would have been better off doing nothing and just using the naive model rather than using the forecast produced through their systems and forecasting process.

Of course, we don't know in advance whether the naive forecast will be more accurate, so this doesn't help with decision making in particular instances. But the findings provide further evidence that in the real-life practice of business forecasting, we are disturbingly mediocre.

In private correspondence with Steve last week, he brought out some important points that I failed to mention in my damnation of forecasting practice:

Very often what we find is that the non-value adding forecasts aren’t necessarily poor forecasts; in the conventional sense of the word at least…in fact they might have low MAPE. It is just that they are worse than the naïve…which is not the same.

As usual, it is obvious when you think about it…if you have a stable demand pattern it might be easy to construct a forecast with good MAPE numbers but it is actually very difficult to beat the naïve forecast because last period's actual is a good predictor. And if you manually intervene in the process in any way then you will almost certainly make things worse. Ironically it is the very messy forecast series which offer the greatest potential to add value through good forecasting.

Two conclusions to draw from this:

  1. Sometimes the naive model provides a perfectly appropriate forecast (so a naive forecast is not necessarily a "bad" forecast).
  2. A forecast that fails to beat a naive forecast is not necessarily a "bad" forecast.
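Steve's point is easy to check with a few lines of arithmetic. Here is a minimal Python sketch (the demand series and the flat forecast of 104 are made-up illustrations, not data from his study) of a forecast whose MAPE looks excellent but which still loses to the naive forecast:

import numpy as np

actuals = np.array([100, 102, 99, 101, 100, 103, 98, 100])  # stable, made-up demand
forecast = np.full(len(actuals) - 1, 104)                    # flat forecast, slightly high

naive = actuals[:-1]    # one-step-ahead naive: last period's actual
target = actuals[1:]    # the periods being forecast

mape = np.mean(np.abs(target - forecast) / target) * 100
rae = np.mean(np.abs(target - forecast)) / np.mean(np.abs(target - naive))

print(round(mape, 1))   # about 3.6% -- looks like a "good" forecast
print(round(rae, 2))    # about 1.39 -- yet it is worse than doing nothing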

Illustrate with a Comet Chart

By creating a "comet chart" (find instructions in The Accuracy vs. Volatility Scatterplot), you may be able to illustrate these observations with your own company data.

Comet Chart

This comet chart displays Forecast Accuracy (here scaled 0-100%) on the vertical axis, and coefficient of variation (here truncated at 160%) on the horizontal axis, for the 5000 Item / DC combinations at a food manufacturer. The line represents the approximate accuracy of a very simple forecasting model (here a moving average) at each level of volatility.

As you would expect, forecast accuracy tends to be quite high when volatility of sales patterns is quite low, so even a very simple model (like a moving average or random walk) would perform well, and be difficult to beat. For more volatile sales patterns, there is more opportunity to "add value" to the forecast by judgmental overrides or more sophisticated models.
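If your forecasting software doesn't draw this chart for you, a rough version takes only a few lines of Python. The sketch below is illustrative only: the file name and column names (accuracy for your process's accuracy, simple_model_accuracy for the moving-average benchmark, cov for the coefficient of variation) are assumptions to be replaced with however your own data is stored.

import pandas as pd
import matplotlib.pyplot as plt

# One row per Item/DC combination; file and column names are illustrative.
df = pd.read_csv("item_dc_accuracy.csv")
df["cov"] = df["cov"].clip(upper=160)   # truncate coefficient of variation, as above

plt.scatter(df["cov"], df["accuracy"], s=5, alpha=0.3, label="Item/DC combinations")

# Approximate accuracy of a very simple model at each volatility level:
# bin the CoV axis and plot the simple model's mean accuracy within each bin.
bins = pd.cut(df["cov"], bins=20)
baseline = df.groupby(bins, observed=True)["simple_model_accuracy"].mean()
plt.plot([interval.mid for interval in baseline.index], baseline.values,
         color="red", label="Simple model (e.g. moving average)")

plt.xlabel("Coefficient of variation (%)")
plt.ylabel("Forecast accuracy (%)")
plt.legend()
plt.show()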

The comet chart is easy to construct and provides a new way to look at your company's forecasting challenge. Create yours today.


Forecast quality in the supply chain (Part 2)

As we saw last time with Steve Morlidge's analysis of the M3 data, forecasts produced by experts under controlled conditions with no difficult-to-forecast series still failed to beat a naive forecast 30% of the time.

So how bad could it be for real-life practitioners forecasting real-life industrial data?

In two words: Pretty bad.

The New Study

Morlidge's nine sample datasets covered 17,500 products, over an average of 29 (weekly or monthly) periods. For these real-life practitioners forecasting real-life data, 52% of forecasts had RAEs above 1.0.

FIFTY TWO PERCENT

As he puts it, "This result distressingly suggests that, on average, a company's product forecasts do not improve upon naive projections."

Morlidge also found that only 5% of the 17,500 products had RAEs below 0.5, which he has posited as a reasonable estimate of the practical lower limit for forecast error.

The Implications

What are we to make of these findings, other than gnash our teeth and curse the day we ever got ourselves suckered into the forecasting profession? While Morlidge's approach continues to receive further vetting on a broader variety of datasets, he itemizes several immediate implications for the practical task of forecasting in the supply chain:

1. RAE of 0.5 is a reasonable approximation to the best forecast that can be achieved in practice.

2. Traditional metrics (e.g. MAPE) are not particularly helpful. They do not tell you whether the forecast has the potential to be improved. And a change in the metric may indicate a change in the volatility of the data, not so much a change in the level of performance.

3. Many forecasting methods add little value.

On the positive side, his findings show that there is significant opportunity for improvement in forecast quality. He found the weighted average RAE to be well above the lower bound for forecast error (RAE = 0.5). And roughly half of all forecasts were worse than the naive forecast -- error which should be avoidable.

Of course, we don't know in advance which forecasts will perform worse than the naive forecast. But by rigorous tracking of performance over time, we should be able to identify those that are problematic. And we should always track separately the "statistical forecast" (generated by the forecasting software) and the "final forecast" (after judgmental adjustments are made) -- a distinction that was not possible in the current study.
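As a rough sketch of what that separate tracking might look like (the file and column names here -- stat_fcst for the system-generated forecast, final_fcst for the forecast after adjustments -- are hypothetical, since the study itself could not separate these streams):

import pandas as pd

def rae(actual, forecast):
    # Relative Absolute Error: MAE of the forecast / MAE of the lag-1 naive forecast
    naive_mae = (actual - actual.shift(1)).abs().mean()
    return (actual - forecast).abs().mean() / naive_mae

# One row per period for a single item, sorted by period; column names are illustrative.
df = pd.read_csv("item_history.csv")

print("RAE, statistical forecast:", round(rae(df["actual"], df["stat_fcst"]), 2))
print("RAE, final forecast:      ", round(rae(df["actual"], df["final_fcst"]), 2))
# If the final RAE is higher than the statistical RAE, the judgmental adjustments
# destroyed value for this item.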

Morlidge concludes,

...it is likely that the easiest way to make significant improvement is by eliminating poor forecasting rather than trying to optimise good forecasting. (p.31)

[You'll find a similar sentiment in this classic The BFD post, "First, do no harm."]

Hear More at Analytics 2014 in Frankfurt

Join over 500 of your peers at Analytics 2014, June 4-5 in Frankfurt, Germany. Morlidge will be presenting on "Forecasting Value Added and the Limits of Forecastability." Among the 40 presentations and four keynotes, there will also be forecasting sessions on:

  • Forecasting at Telefonica Germany
  • Promotion Forecasting for a Belgian Food Retailer, Delhaize
  • The New Analytical Mindset in Forecasting: Nestle's Approach in Europe and a Case Study of Nestle Spain
  • Big Data Analytics in the Energy Market: Customer Profiles from Smart Meter Data
  • Shopping and Entertainment: All the Figures of Media Retail

In addition, my colleague Udo Sglavo will present "A New Face for SAS Analytical Clients" -- the forthcoming web interface for SAS Forecast Server.

(Full Agenda)


Forecast quality in the supply chain (Part 1)

The Spring 2014 issue of Foresight includes Steve Morlidge's latest article on the topic of forecastability and forecasting performance. He reports on sample data obtained from eight businesses operating in consumer (B2C) and industrial (B2B) markets. Before we look at these new results, let's review his previous arguments:

1. All extrapolative (time-series) methods are based on the assumption that the signal embedded in the data pattern will continue into the future. These methods thus seek to identify the signal and extrapolate it into the future.

2. Invariably, however, a signal is obscured by noise. A “perfect” forecast will match the signal 100% but, by definition, cannot forecast noise. So if we understand the nature of the relationship between the signal and noise in the past, we should be able to determine the limits of forecastability.

3. The most common naive forecast uses the current period actual as the forecast of the next period. As such, the average forecast error from the naïve model captures the level of noise plus changes in the signal.

4. Thus the limit of forecastability can be expressed in terms of the ratio of the actual forecast error to the naïve forecast error. This ratio is generally termed a relative absolute error (RAE). I have also christened it the avoidability ratio, because it represents the portion of the noise in the data that is reduced by the forecasting method employed.

5. In the case of a perfectly flat signal -- that is, no trend or seasonality in the data -- the best forecast quality achievable is an RAE = 0.7. So unless the data have signals that can be captured, the best forecast accuracy achievable is a 30% reduction in noise from the naïve forecast.

6. An RAE = 1.0 should represent the worst forecast quality standard, since it says that the method chosen performed less accurately than a naïve forecast. In this circumstance, it might make sense to replace the method chosen with a naïve forecasting procedure. (pp. 26-27)
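Point 5 is easy to confirm by simulation. The sketch below is a toy example (not Morlidge's own code): it generates a perfectly flat signal plus normally distributed noise, "forecasts" the signal exactly, and compares that error with the naive error.

import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                           # perfectly flat signal: no trend, no seasonality
noise = rng.normal(0, 10, size=100_000)  # unforecastable noise
actuals = signal + noise

perfect_error = np.abs(actuals - signal)          # even a perfect forecast eats the noise
naive_error = np.abs(actuals[1:] - actuals[:-1])  # "same as last period" naive forecast

rae = perfect_error.mean() / naive_error.mean()
print(round(rae, 2))   # about 0.7 (1/sqrt(2) in the limit), as in point 5 above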

The M3 Study

In his previous article (Foresight 32 (Winter 2014), 34-39) Morlidge applied this approach to a segment of data from the M3 forecasting competition that was most relevant to supply chain practitioners.

The M3 competition involved 24 forecasting methods from academics and software vendors, and the 334 time series that Morlidge analyzed included no difficult-to-forecast intermittent demand patterns or new products. Yet all of the 24 forecasting methods generated RAEs above 1.0 more than 30% of the time. So nearly 1/3 of the time their performance was worse than a naive model!

Morlidge concluded that the average performance of any forecasting method may be less important than the distribution of its actual performance. He also emphasized that,

...we do not yet have the capability to identify the potential for poor forecasting before the event. It is therefore critical that actual forecast performance be routinely and rigorously measured after the event, and remedial action taken when it becomes clear that the level of performance is below expectations. (p. 28)

 In the next installment we'll look at results from the new study.

 

 


Guest blogger: Len Tashman previews Spring 2014 issue of Foresight

Here is editor Len Tashman's preview of the new Spring 2014 issue of Foresight. In particular note the new article by Steve Morlidge of CatchBull, reporting on an analysis of eight B2B and B2C companies, which we'll discuss in a separate post.

An organization’s collaboration in forecasting and planning has both internal and external components. While sales and operations planning (S&OP) has become the standard infrastructure for collaborative efforts within the firm, opportunities for collaboration among supply chain partners have emerged as potential win-wins for the companies involved.

Len Tashman, Editor of Foresight

Our feature article examines the characteristics, benefits, and challenges of collaborative planning, forecasting, and replenishment (CPFR) among supply chain partners. Jeff Van-Deursen and John Mello’s Roadmap to Implementing CPFR recommends specific policies that permit two or more companies to “monitor, control, and facilitate the overall performance of a supply chain by achieving a smooth flow of product between firms.” CPFR partners, however, “must integrate their supply chains, sales  organizations, and marketing intelligence into cross-organizational sales and operations planning (S&OP) and budgeting processes.”

In his Commentary on the Roadmap, Ram Ganeshan emphasizes that “CPFR is not a silver bullet for improving forecasts, but rather a set of structured processes that improve communication and coordination between supply chain partners on matching product supply and demand.” The key challenges, however, involve ensuring data integrity, standardizing forecasts, revising transactional relationships, and a willingness to deal with some significant organizational changes that could cause discord among functional units.

In our section on Forecasting Intelligence, Ram Ganeshan returns as an author to describe how retailers can benefit using Clickstream Analysis for Forecasting Online Behavior. A clickstream is an online trail, a prospective customer’s sequence of keystrokes or mouse clicks made as they consider making a purchase on the Internet. Through capture and analysis of the clickstream, a seller can better understand the customer’s intentions and thus improve the purchasing experience – as well as the retailer’s profitability.

Continuing Foresight’s exclusive coverage of forecastability issues (that is, the upper and lower limits of forecast accuracy), Steve Morlidge analyzes item-level forecasts at eight companies, granular data notably missing from the well-known M-competitions of forecast accuracy. Steve’s findings are an eye-opener, suggesting clearly that companies are not typically extracting maximum benefit from their forecasting systems, and showing how forecasters can upgrade Forecast Quality in the Supply Chain.

In 1981, the respected economist Julian Simon challenged the equally respected ecologist Paul Ehrlich to a wager over whether the planet’s future would entail mass starvation as population outpaced our productive resources (Ehrlich), or see the fruits of technological progress and human ingenuity sustain and improve our lives with ever-increasing abundance (Simon). Foresight Long-Range Forecasting Editor Ira Sohn expands upon Paul Sabin’s recent book The Bet to describe “the scholarly wager of the decade” and how it mirrored at the time a larger national debate between future-thinking optimists and pessimists.

We conclude this issue with a pair of book reviews on two volumes of a very different nature. Predictive Business Analytics by Lawrence Maisel and Gary Cokins seeks to show how companies can improve their usage of analytical tools. Reviewer McKay Curtis offers a blunt assessment of the book’s value to experienced analysts.

The Map and the Territory is former Federal Reserve Board chairman Alan Greenspan’s reflection on our most recent economic upheaval (which is normally dated to begin following the end of his term as chairman). Reviewer Geoff Allen tells us: “What you will learn from this book are some nice, personal Greenspan stories and a lot of data analysis to support a particular Greenspan viewpoint. Whether you will be persuaded to that viewpoint based on the evidence presented is another matter.”


Q&A with Steve Morlidge of CatchBull (Part 4)

Q: ­How would you set the target for demand planners: all products at 0.7? All at practical limit (0.5)?­

A: In principle, forecasts are capable of being brought to the practical limit of an RAE of 0.5.

Whether it is sensible to attempt to do this for all products irrespective of the amount of effort and resources involved in achieving it is another matter. It would be much more sensible to set aspirations based upon considerations such as the business benefit of making an improvement, which may be based on the size of the products, the perceived scope for improvement or strategic considerations such as the importance of a set of products in a portfolio.

Target setting also has a large psychological dimension which needs to be taken into account. For example, there is a lot of evidence that unrealistic targets unilaterally imposed on people can be demotivating, whereas when individuals are allowed to set their own targets, these are usually more stretching than those given to them by others.

One approach which avoids some of the pitfalls associated with traditional target setting is to strive for continuous improvement (where the target is in effect to beat past performance) in tandem with benchmarking, whereby peer pressure and the transfer of knowledge and best practice drives performance forward.

Q: ­Does the thinking change when you are forecasting multiple periods forward?­

A: In some respects the approach does not change. The limits of what can be achieved are still the same, since they are given by the level of the noise in a data series compared to the level and nature of change in the signal.

Of course, the further out one forecasts the more inaccurate forecasts are likely to be because there is more opportunity for the signal to change in ways that cannot be anticipated. So while the theoretical lower bound for forecast error doesn’t change, the practical difficulties in achieving it get larger, so we would expect RAE to deteriorate the further ahead we forecast.

There is also an impact on the upper bound. With ‘one period ahead’ forecasts there is no reason why performance should be worse than the ‘same as last period’ naïve forecast (RAE = 1.0). This doesn’t apply when one is forecasting more than one period ahead; the default stance of ‘use the latest actual’ means that with a lag of n periods the benchmark is the actual from n periods earlier. In practice this will usually result in a maximum acceptable RAE of more than 1.0. The exact upper limit will need to be calculated by comparing the 1-period-ahead naïve forecast error with the n-period-ahead number. The greater the trend in the data, the larger this difference will be.
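Here is a minimal sketch of that upper-limit calculation (the series and variable names are illustrative): compare the one-period-ahead naive error with the n-period-ahead naive error.

import numpy as np

def max_acceptable_rae(actuals, lag):
    # Upper RAE limit for a forecast made 'lag' periods ahead: the error of the
    # "use the latest available actual" default, relative to the 1-period-ahead naive error.
    actuals = np.asarray(actuals, dtype=float)
    naive_1 = np.abs(actuals[1:] - actuals[:-1]).mean()
    naive_n = np.abs(actuals[lag:] - actuals[:-lag]).mean()
    return naive_n / naive_1

# Made-up trending series: the stronger the trend and the longer the lag,
# the higher the acceptable upper limit climbs above 1.0.
history = [100, 104, 107, 112, 115, 121, 124, 130, 133, 139]
print(round(max_acceptable_rae(history, lag=3), 2))   # 3.0 for this toy series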

Q: ­Can you comment on the number of observations you would use to estimate RAE?

A: As with any measure, the more data points you have, the more representative (and therefore reliable) the number. On the other hand, RAE can change over time, and if you have a large amount of historical data which you then average, these important shifts in performance will be lost.

As a rule of thumb I would be uncomfortable taking any significant decisions (to change forecast methods, etc.) with less than 6 data points.

Q: ­Have you found that more people inputting into the forecast adds value or destroys value?  For example, inputs from several levels of sales such as account managers, country managers, demand planners, executives, etc. ­

A: I do not have enough evidence to draw any hard and fast conclusions but I would be surprised if the answer to this question was not ‘it depends’. Some interventions made by some people at some times will be helpful; others made by other people at other times will be unhelpful.

The key to answering this question is to measure the contribution using Forecast Value Added. In addition, interventions are more likely to add value when the data series is very volatile, particularly if it is driven by marketplace activity which is capable of being judgementally estimated reliably. If a data series is very stable, the opportunity to improve matters tends to be eclipsed by the risk of making things worse.


Q&A with Steve Morlidge of CatchBull (Part 3)

Q: ­How important is it to recognize real trend change in noisy data?­

A: It is very important. In fact the job of any forecast algorithm is to predict the signal – whether it is trending or not – and to ignore the noise.

Unfortunately this is not easy to do, because the trend can be unstable and the noise confuses the situation. In fact one of the most common problems with forecasting algorithms is that they ‘overfit’ the data, which means that they mistake noise for a signal.

This is the reason why experts recommend that you do not use R2 or other ‘in sample’ fitting statistics as a way of selecting which algorithm to use. The only way to know whether you have made the right choice is by tracking performance after the event, ideally using a statistic like RAE, which allows for forecastability and helps you make meaningful comparisons and judgements about the level of forecast quality.

Q: ­From what you've seen, does RAE tend to be stable over time?­

A: No it does not.

In my experience performance can fluctuate over time. This might be the result of a change in the behaviour of the data series, but the most common cause in my experience is a change in the quality of judgemental interventions. In particular I commonly see changes to the level of bias – systematic over- or underforecasting.

A simple average will not pick this up, however. You need to plot a moving average of RAE or, even better, an exponentially smoothed average, as this takes in all available data but gives more weight to the most recent.
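A sketch of that kind of tracking chart, assuming a per-period table of actuals and forecasts (the file and column names, and the smoothing constant of 0.3, are illustrative):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("item_history.csv")   # columns: period, actual, forecast (illustrative)

naive_error = (df["actual"] - df["actual"].shift(1)).abs()
fcst_error = (df["actual"] - df["forecast"]).abs()

# Exponentially smooth numerator and denominator separately, so recent periods
# get more weight and a single zero naive error doesn't blow up the ratio.
smoothed_rae = fcst_error.ewm(alpha=0.3).mean() / naive_error.ewm(alpha=0.3).mean()

smoothed_rae.plot(title="Exponentially smoothed RAE")   # watch for drift above 1.0
plt.show()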

Q: ­All of this assumes that there are no outside attributes available upon which to base a forecast. This all applies then only to situation in which the forecaster has only past data on the item itself to be forecasted. Is this correct?­

A: The main technique used by supply chain forecasters is time series forecasting, whereby an algorithm is used to try to identify the signal in demand history, which is then extrapolated into the future – on the assumption that the pattern will continue. The implicit assumption here is that the forecast (the dependent variable) is a product of the signal (the independent variable), and since there is only one such variable this approach is termed ‘univariate’.

There are other types of forecasting which use other variables in addition to or instead of the history of the time series. These are known as ‘multivariate’ techniques, since there is more than one explanatory variable.

Irrespective of the approach used, the limits of forecastability still apply. There is no good reason why any forecasting method should consistently fail to beat the naïve forecast (have an RAE in excess of 1.0). The limit of what is forecastable is a product of the level of noise (which can’t be forecast by any method) compared to the level and nature of change in the signal.

Different techniques – univariate or multivariate – will be more or less successful in forecasting the signal but the same constraints apply to all.


Q&A with Steve Morlidge of CatchBull (Part 2)

Q: ­Do you think the forecaster should distribute forecast accuracy to stakeholders (e.g. to show how good/bad the forecast is) or do you think this will confuse stakeholders?

A: This just depends what is meant by stakeholders. And what is meant by forecast accuracy.

If stakeholders means those people who contribute to the forecast process by providing the market intelligence that drives the judgemental adjustments made to the statistical forecast, the answer is a resounding ‘yes’…at least in principle. Many forecasts are plagued with bias, and a common source of bias infection is overly pessimistic or (more commonly) optimistic input from those supplying ‘market intelligence’.

Also, those responsible for making the decision to invest in forecasting processes and software need to know what kind of return it has generated.

But all too often impenetrable and meaningless statistics are foisted on stakeholders using measures that are difficult for laymen to interpret and provide no indication of whether the result is good or bad.

This is why I strongly recommend using a measure such as RAE which, by clearly identifying whether and where a forecast process has added value, is easy to understand and meaningful from a business perspective.

Q: ­When you say RAE needs to be calculated at the lowest level do you mean by item, or even lower such as by item shipped by plant X to customer Y?­

A: Forecasting demand for the Supply Chain, and replenishing stock based on this forecast, is only economically worthwhile if it is possible to improve on the simple strategy of holding a defined buffer (safety) stock and replenishing it to make good any withdrawals in the period.

What implications does this have for error measurement?

First, since this simple replenishment strategy is arithmetically equivalent to using a naïve forecast (assuming no stockouts), and the level of safety stock needed to meet a given service level is determined by the level of errors (all other things being equal), a forecast with an RAE below 1.0 means that the business needs to hold less stock.

The second is the level of granularity at which error should be measured. Since the goal is to have the right amount of stock at the right place at the right time, error should be measured at the unique stock item/location level, in buckets which (as far as possible) match the frequency at which stock is replenished. Measuring error across all locations will understate effective forecast error, since having the right amount of stock in the wrong place is costly. And while it might be helpful to identify the source of error if different customers are supplied from the same stock, measuring error at a customer level will overstate effective error.
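As a sketch of measuring at that level of granularity (file and column names are illustrative): group the error calculation by stock item and location, in the same weekly or monthly buckets used for replenishment.

import pandas as pd

# One row per item, location, and replenishment bucket (e.g. week), with actual
# demand and the forecast for that bucket; column names are illustrative.
df = pd.read_csv("demand_history.csv").sort_values(["item", "location", "week"])

def group_rae(g):
    naive_mae = (g["actual"] - g["actual"].shift(1)).abs().mean()
    fcst_mae = (g["actual"] - g["forecast"]).abs().mean()
    return fcst_mae / naive_mae

# RAE per item/location -- not pooled across locations, which would understate
# the effective error of having the right stock in the wrong place.
rae_by_item_location = df.groupby(["item", "location"]).apply(group_rae)
print(rae_by_item_location.sort_values(ascending=False).head(10))   # worst offenders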

Q: ­What are your thoughts on using another model for benchmarking forecast error besides the naive model?­

A: Relative Absolute Error (RAE) is a measure which compares the average absolute forecast error with that from a simple ‘same as last period’ naïve error. This approach has the advantage of simplicity and ease of interpretation. It is easy to calculate and, since the naïve forecast is the crudest forecasting method conceivable, then a failure to beat it is something that is very easy to understand – it is baaad!

But the naïve forecast is more than a mere benchmark.

The ultimate economic justification for forecasting is that it is more efficient than a simple replenishment strategy whereby stock is maintained at a constant level by making good the sales made in the prior period. A naïve forecast is mathematically equivalent to this strategy, and the degree to which a forecast improves on it is a measure of how much value the forecast has added. So RAE, where the naïve forecast provides the denominator in the equation, is economically meaningful in a way that would not be possible if another method were chosen.

Secondly, the naïve forecast error reflects the degree of period-to-period volatility. This means that it is a good proxy measure for the forecastability of the data set and, given certain assumptions, it is possible to make theoretical inferences about the minimum level of forecast error. As a result a specific RAE provides an objective measure of how good a forecast really is, in a way that is not possible if another forecast method were used to provide the denominator in the equation. In that case the result would say as much about the performance of the benchmark method as it does about the performance of the actual method…and it would be impossible to disentangle the impact of one from the other.
