Offensive vs. defensive forecasting

Sports provide us with many familiar clichés about playing defense, such as:

  • Defense wins championships.
  • The best defense is a good offense.

Or my favorite:

  • The best defense is the one that ranks first statistically in overall defensive performance, after controlling for the quality of the offenses it has faced.

Perhaps not the sort of thing you hear from noted scholars of the game like Charles Barkley, Dickie V, or the multiply-concussed crew of Fox NFL announcers. But it captures the essential fact that performance evaluation, when done in isolation, may lead to improper conclusions. (A team that plays a weak schedule should have better defensive statistics than one that plays only against championship caliber teams.)

Likewise, when we evaluate forecasting performance, we can't look simply at the MAPE (or other traditional metric) that is being used. We have to look at the difficulty of the forecasting task, and judge performance relative to the difficulty.

Offensive Forecasting

It is possible to characterize forecasting efforts as either offensive or defensive.

Offensive efforts are the things we do to extract every last bit of accuracy we can hope to achieve. This includes gathering more data, building more sophisticated models, and incorporating more human inputs into the process.

Doing these things will certainly add cost to the forecasting process. The hope is that they will make the forecast more accurate and less biased. (Just be aware, by a curious quirk of nature, that complexity may be contrary to improved accuracy, as the forthcoming Green & Armstrong article "Simple versus complex forecasting: The evidence" discusses.)

Heroic efforts may be justified for important, high-value forecasts that have a significant impact on overall company success. But for most things we forecast, it is sufficient to come up with a number that is "good enough" to make a planning decision. An extra percentage point or two of forecast accuracy -- even if it could be achieved -- just isn't worth the effort.

Defensive Forecasting

A defensive forecaster is not so much concerned with how good a forecast can be, but rather, with avoiding how bad a forecast can be.

Defensive forecasters recognize that most organizations fail to achieve the best possible forecasts. And many organizations actually forecast worse than doing nothing (and instead just using the latest observation as the forecast). As Steve Morlidge reported in Foresight, 52% of the forecasts in his study sample failed to improve upon the naïve model. So more than half the time, these organizations were spending resources just to make the forecast worse.

The defensive forecaster can use FVA analysis to identify those forecast process steps that are failing to improve the forecast. The primary objective is to weed out wasted efforts, to stop making the forecast worse, and to forecast at least as well as the naïve model.
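For readers who want to try this, here is a minimal sketch (in Python, with invented numbers) of that most basic FVA check: compute the MAPE of your process forecast and of a naïve random walk forecast, and see whether the process adds any value.

```python
# Minimal FVA check: does the process forecast beat a naive (random walk) forecast?
# Illustrative sketch with made-up numbers, not any particular vendor's implementation.

def mape(actuals, forecasts):
    """Mean absolute percent error, expressed as a percentage."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [100, 110,  95, 120, 105, 115]   # observed demand
process = [ 90, 125, 100, 100, 115, 130]   # the organization's official forecast
naive   = [105, 100, 110,  95, 120, 105]   # random walk: prior period's actual

mape_process = mape(actuals, process)
mape_naive   = mape(actuals, naive)
fva = mape_naive - mape_process            # positive = the process adds value

print(f"Process MAPE: {mape_process:.1f}%  Naive MAPE: {mape_naive:.1f}%  FVA: {fva:+.1f} pts")
```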

Once the organization is forecasting at least as well as the naïve model, then it is time to hand matters back over to the offensive forecasters -- to extract every last percent of accuracy that is possible.

 

 


FVA interview with Shaun Snapp


The Institute of Business Forecasting's FVA blog series continued in January, with my interview of Shaun Snapp, founder and editor of SCM Focus.

Some of Shaun's answers surprised me, for example, that he doesn't compare performance to a naïve model (which I see as the most fundamental FVA comparison). But he went on to explain that his consulting mainly involves software implementation and tuning (often with SAP). His work stops once the system is up and generating a forecast, so he is generally not involved directly with the planners or the forecasting process.

Shaun notes that most of the companies he works with don't rely on the statistical forecast generated by their software -- planners have free rein to adjust the forecasts. And yet, because it takes effort to track the value of those adjustments, it doesn't get done (planners are too busy making all their adjustments to step back and measure their own impact!).

He also notes that most of his work is focused on technical system issues -- he's found little demand for forecast input testing or other FVA-related services he can provide.

Discouragingly, he states he's never found a forecasting group that based its design on FVA. While clients may be receptive to the basic idea -- of applying a scientific approach to evaluating forecasting performance -- there are some groups whose interests actually run contrary to FVA. He gives the example of a sales group whose main interest is for the company to maintain an in-stock position on all their items. They have a lot of power within the company, and can achieve their objective by biasing the forecast high, thus forcing the supply chain to maintain excess inventory.

Read the full Shaun Snapp interview, and others in the FVA series, at www.demand-planning.com.

Coming in February: Interview with Steve Morlidge of CatchBull

Steve has been the subject of several previous BFD blog posts, exploring his groundbreaking work on the "avoidability" of forecast error (Part 1 of 4), forecast quality in the supply chain (Part 1 of 2), and a Q&A on his research (Part 1 of 4). He also delivered a Foresight/SAS Webinar "Avoidability of Forecast Error" that is available for on-demand review. Check the IBF blog site later this month for Steve's FVA interview.

 

 

 


Brilliant forecasting article from 1957!!! (Part 3)

This isn't such a brilliant article because we learn something new from it -- we really don't. But it is amazing to find, from someone in 1957, such a clear discussion of forecasting issues that still plague us today. If you can get past some of the Mad Men era words and phrasing, the article is wonderfully written and a fun read -- full of sarcastic digs at the forecasting practice.

In this final installment we'll look at Lorie's handling of forecasting performance evaluation.

Problem 2: The Evaluation of Forecasts

Lorie states there are two main problems in evaluating forecasts:

  1. Determining accuracy.
  2. Determining economic usefulness.

To solve these, he suggests three principles:

A. The Superiority of Written Forecasts

When forecasts are not recorded, the usual consequence is that they "seem to become more and more accurate as they recede into the past where memory is inexact and usually comforting." But even when written down, there is danger of ambiguity.

Lorie takes special aim at financial analysts and economic forecasters, who find it "distressingly easy" to use broad designations like "markets" or "business activity" or "sales." Of course, without a rigorous operational definition of such terms, the accuracy of the forecasts cannot be judged. "Their usefulness, however, can; their usefulness is negligible."

Lorie's position is largely in line with Nate Silver's recent critique of economic forecasting as an "almost complete failure."

In addition to recording forecasts in a way specific enough to be measured (typically product, location, time period, units), Lorie argues for recording the method used to generate the forecast:

The absence of a record of the forecasting method makes it extremely difficult to judge what has been successful and what unsuccessful among the techniques for peering into the future.

By method, I will interpret this to mean, at a high level, what forecasting process was used. For example,

STATISTICAL FORECAST ==> ANALYST OVERRIDE ==> CONSENSUS OVERRIDE

Over time we can determine whether these individual steps are making the forecast any better (or worse) than using a simple naïve model.
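As a rough illustration (the step names follow the process above; the numbers are hypothetical), a "stairstep" report comparing each step of the process to the naïve model might be computed like this:

```python
# Hypothetical "stairstep" report: MAPE at each process step vs. the naive model.
# The forecasts below are invented for illustration only.

def mape(actuals, forecasts):
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [100, 110, 95, 120, 105, 115]
steps = {
    "Naive model":        [105, 100, 110,  95, 120, 105],
    "Statistical":        [102, 108, 101, 112, 108, 112],
    "Analyst override":   [ 98, 112,  99, 118, 104, 116],
    "Consensus override": [110, 120, 105, 130, 115, 125],
}

baseline = mape(actuals, steps["Naive model"])
for name, fcst in steps.items():
    m = mape(actuals, fcst)
    print(f"{name:<20} MAPE {m:5.1f}%   FVA vs naive {baseline - m:+5.1f} pts")
```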

B. The Statistical Evaluation of Forecasting Techniques

Today there is growing recognition that relative metrics of forecasting performance are much more relevant and useful than the traditional accuracy or error metric by itself.

For example, to be told "MAPE=30%" is only mildly interesting. By itself, MAPE gives no indication of how easy or difficult a series is to forecast. It doesn't tell us what error would be reasonable to expect for the given series, and consequently, does not tell us whether our forecasting efforts were good or bad.

It is only by viewing the MAPE in comparison to some baseline of performance (e.g., the MAPE of a naïve forecast), that we can determine the "value added" by our forecasting efforts. This is what relative metrics such as FVA let you do.

Lorie gives an example: Each day the weather forecaster in St. Petersburg, Florida, can forecast the following day's weather to be clear and sunny, and by doing nothing will be correct 95% of the time. The forecaster in Chicago, even using the latest technology and most sophisticated methods, will only get the following day's forecast right 80% of the time. So does this mean the St. Petersburg forecaster is more skilled at his profession than the Chicago forecaster? Of course not!

If there is a point to the preceding example, it is that the statistical evaluation of forecasting techniques must take account of the variability of the series being forecast...the forecasting task in Chicago is much more difficult.

What is desired is measurement of the "marginal" contribution of the forecasting technique. What is desired is an indication of the extent to which one can forecast better because of the use of the forecasting technique than would be possible by sole reliance on some simple, cheap, and objective forecasting device.

Lorie has provided an almost perfect description of FVA analysis. In essence, it is nothing more than the application of basic scientific method to the evaluation of a forecasting process.

C. The Economic Evaluation of Forecasts

There can clearly be asymmetry in the costs of our business decisions. For example, it makes sense to carry excess inventory on an item that costs us little to make and hold, yet yields huge revenue when sold. (Carrying too little inventory might save us a little on cost, yet we'd miss a lot of revenue on lost sales.)

Lorie asserts:

A forecasting technique is judged to be superior to alternatives according to an economic evaluation if the consequences of decisions based upon it are more profitable than decisions based upon the alternatives.

This seems to be saying that it is ok to bias your forecasts in a direction that is more economically favorable, but I disagree. While it is appropriate to bias your plans and actions in a way that will provide a more favorable economic outcome (as in the example above), I would contend that the forecast should remain an "unbiased best guess" at what is really going to happen in the future.

I'm not convinced there can be an economic evaluation of the forecast. (Evaluating a forecast solely on accuracy, bias, and FVA may be sufficient). However, there should be an economic evaluation of the decision that was made.


Brilliant forecasting article from 1957!!! (Part 2)

Combining Statistical Analysis with Subjective Judgment (continued)

After summarily dismissing regression analysis and correlation analysis as panaceas for the business forecasting problem, Lorie turns next to "salesmen's forecasts."* He first echoes the assumption that we still hear today:

This technique of sales forecasting has much to commend it. It is based upon a systematic collection and analysis of the opinions of men who, among all the company's employees, are in closest contact with dealers and ultimate consumers.

But Lorie points out the "inherent deficiencies" of relying solely on sales force input, "for which it may be impossible to devise effective remedies." These are:

  • Unreasonably assumes that sales people have the "breadth and depth of understanding" of the pervasive influences on demand. (Do they have any skill at forecasting?)
  • Sales jobs turn over frequently, so sales people providing forecasts are often inexperienced, and we don't have enough data to determine their biases. (Will they give us an honest answer?)
  • Does not incorporate "competent statistical analysis" of historical sales data which could be combined with the sales force inputs.

Lorie also disses the use of consumer surveys as costly, impractical, and unproven to be of value except in limited circumstances.

Two Solutions

The message is not all negative. Lorie provides two solutions for combining statistics with judgment, the filter technique and the skeptic's technique. I'm not as much interested in the specific techniques as in his overall approach to the problem -- which in the filter technique is to focus on economy of process. Start "with an extremely simple and cheap process to which additional time and money are devoted only up to the point at which the process becomes satisfactory."

...the process provides an objective record of both sales forecasts and the methods by which they are made so that study of this record can be a means for continual improvement in the forecasting process.

(You can find details about the filter technique in the article.)

The skeptic's technique applies process control ideas, akin to Joseph & Finney's "Using Process Behaviour Charts to Improve Forecasting and Decision-Making" (Foresight 31 (Fall 2013), pp. 41-48). Starting with "limited faith" in the persistence of historical forces that affect sales:

  • Project future sales with a simple trend line.
  • Compute two standard deviations on each side of the line to create a range within which future sales should fall the vast majority of the time (if historical forces continue to work in the same way).

Lorie points out that this work could be done by statistical clerks "whose rate of pay is substantially less than that of barbers or plumbers."

  • The forecaster then solicits forecasts from company experts (who, "incidentally, usually receive substantially more than barbers or even plumbers").
  • If the expert's forecast falls within the range limits of the statistical forecast, it is accepted. If it falls outside the limits, even after reconsideration (asking "the gods for another omen"), the forecaster has to make a decision about what to do.

Lorie wryly points out that making a decision is something the forecaster has avoided up to this point.

For expert forecasts outside the statistical forecast limits, Lorie states:

...experience has indicated that the forecast in a vast majority of cases would have been more accurate if the experts' forecast had arbitrarily been moved to the nearest control limit provided by the statistical clerk rather than being accepted as it was.
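Here is a small sketch of the skeptic's technique as I read it: fit a simple trend line, build limits of two standard deviations around it, and pull any expert forecast outside the limits back to the nearest limit. The data and the ordinary least-squares trend are my own illustrative choices, not Lorie's exact procedure.

```python
# Sketch of the "skeptic's technique": trend line, +/- 2 standard deviation limits,
# and an expert forecast clamped to the nearest limit if it falls outside them.
# History and expert forecast are invented.

def trend_fit(history):
    """Ordinary least-squares straight line through the history."""
    n = len(history)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(history) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return y_bar - slope * x_bar, slope          # intercept, slope

history = [100, 104, 103, 108, 111, 109, 115, 118]   # past sales
intercept, slope = trend_fit(history)

residuals = [y - (intercept + slope * t) for t, y in enumerate(history)]
sd = (sum(r * r for r in residuals) / len(residuals)) ** 0.5

t_next = len(history)
point = intercept + slope * t_next
lower, upper = point - 2 * sd, point + 2 * sd

expert = 135.0                                  # the expert's forecast for next period
accepted = min(max(expert, lower), upper)       # clamp to the nearest control limit

print(f"Trend forecast {point:.1f}, limits [{lower:.1f}, {upper:.1f}]")
print(f"Expert said {expert:.1f}; forecast used: {accepted:.1f}")
```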

In Part 3 we'll look at Lorie's remarks on the evaluation of forecasts -- and his 1957 precursor to what we now call FVA!

---------------

*The role of the sales force in forecasting is subject of my recent Foresight article (Fall 2014), and a forthcoming presentation at the International Symposium on Forecasting (Riverside, CA, June 24-27).


Guest blogger: Len Tashman previews Winter 2015 issue of Foresight

*** We interrupt discussion of James H. Lorie's 1957 article with this important announcement ***


Hot off the wire, here is editor Len Tashman's preview of the Winter 2015 issue of Foresight:

Foresight kicks off its 10th year with the publication of a new survey of business forecasters: Improving Forecast Quality in Practice. This ongoing survey, designed at the Lancaster Centre for Forecasting in the UK, seeks to gain insights on where the emphasis should be put to further upgrade the quality of our forecasting practices. Initial survey results, presented by Robert Fildes, Director of the Lancaster Centre, and Fotios Petropoulos, former member of the Centre, examine these key aspects of forecasting practice: organizational constraints, the flow of information, forecasting software, organizational resources, forecasting techniques employed, and the monitoring and evaluation of forecast accuracy.

The survey is an important update to that conducted more than a decade ago by Mark Moon, Tom Mentzer, and Carlo Smith of the University of Tennessee. In his Commentary on the Lancaster survey, Mark Moon applauds the broad focus of the survey but raises the issue of whether the “practicing forecasters” surveyed are “developers” or “customers” of the forecasts.

We often find a significant difference in perception between those who are responsible for creating a forecast and those that use the forecast to create business plans.

In our section on Collaborative Forecasting and Planning, Foresight S&OP Editor John Mello writes that S&OP can not only improve collaboration within an organization, but also “change the company’s operational culture from one that is internally focused to one that better understands the potential benefits of working with other companies in the supply chain.” His article, Internal and External Collaboration: The Keys to Demand-Supply Integration, identifies and compares several promising avenues of external collaboration, including vendor-managed inventory (VMI); collaborative planning, forecasting, and replenishment (CPFR); retail-event collaboration; and various stock-replenishment methods currently in use by major manufacturers and retailers. The critical factor, John finds, is trust:

These processes all require the sharing of information between companies, joint agreement on the responsibilities of the individual companies, and a good deal of trust between the parties, since the responsibility for integrating supply and demand is often delegated to the supplier.

In a Commentary on the Mello article, Ram Ganeshan and Tonya Boone point out that the challenges of external collaboration arrangements are much greater when we consider their Extension Beyond Fast-Moving Consumer Goods, especially those goods with short life cycles. For these products, they argue, a different mind-set is required to achieve demand-supply integration.

Financial Forecasting Editor Roy Batchelor distills the lessons forecasters should learn from the failures to predict and control our recent global financial meltdown. A 2014 International Monetary Fund (IMF) report, Financial Crises: Causes, Consequences, and Policy Responses, examined the world economies’ 2007-09 financial crises to establish their causes and impacts, as well as the initiatives governments and central banks undertook to deal with them. The overall impression from this report, Roy writes in his review, entitled Financial Crises and Forecasting Failures, is that the authorities could have been speedier and more imaginative in their interventions in the financial sector. However, it is important to note that our forecasting models could have given a clearer picture of how economies might emerge from these crises. Roy probes into why the models didn’t see the crisis coming, and what upgrades to the models’ financial sectors might improve predictive performance in the future.

Jeffrey Mishlove’s Commentary on Roy’s review article argues that the real problem did not emanate from predictive failures, but rather from the inclination toward austerity that pervaded economic thinking, especially in Western Europe. Jeff says that, while he can’t argue with Roy’s conclusions that refinements in the scientific method and the gathering of empirical data are appropriate responses to financial crises, forecasts will always be vulnerable to confounding influences from unanticipated variables – no matter how much we refine and improve our methodologies.

Seasonality – intra-year patterns that repeat year after year – is a dominant and pervasive contributor to variations in our economy. But, as Roy Pearson writes in Giving Due Respect to Seasonality in Monthly Forecasting, the seasonal adjustments we make to economic data are poorly understood and lead to confusion in interpreting sales changes. Improved accounting for seasonality for monthly forecasts over 12-24 months can lead to better understanding of the forces behind sales forecasts, and very likely to some reduction in forecast errors.

 


Brilliant forecasting article from 1957!!!

Brilliant, humorous, and obscure. Those words could describe two of my favorite comedians, Emo Philips* and the late Dennis Wolfberg.

They could also describe, with the addition of "exceedingly" brilliant, "scathingly" humorous, and "apparently totally" obscure, a 1957 article, "Two Important Problems in Sales Forecasting" by James H. Lorie (The Journal of Business Vol. 30, No. 3 (July 1957), pp. 172-179).

Lorie is not an unknown. When the article appeared, he was Associate Dean at the University of Chicago School of Business. He is credited with creating the first database of stock exchange prices, allowing the type of stock analysis we take for granted today.

Yet according to Google Scholar, the article has been cited just 11 times (none since 1991), and never in any of the familiar forecasting journals or texts. I didn't find it last year while researching an article on the role of the sales force in forecasting (Foresight 35 (Fall 2014), pp. 8-13), and only came across it last week cited as a reference -- within a reference -- to Igor Gusakov's "Data-Cube Forecasting for the Forecasting Support System" (pp. 25-32 in the same issue of Foresight).

Problem 1: Combining Statistical Analysis with Subjective Judgment

Lorie first addresses the (still unresolved) challenge of "combining the wisdom of experienced businessmen with statistical analysis...in order to achieve better forecasts." (There is no mention of businesswomen, who apparently didn't exist until Peggy Olson on Mad Men.)

Lorie reviews and critiques the common statistical forecasting methods of the time: regression and correlation. (Recall that R.G. Brown's Exponential Smoothing for Predicting Demand was published just the year before.) Of the former,

Perhaps a more fundamental objection to regression analysis as a means for forecasting is that it merely transforms the forecasting problem from the dependent variable to the independent variables. It requires that the analyst forecast the levels of the independent variables such as national income or industry sales rather than the level of the dependent variable, sales of a particular company's product. There is certainly very little reason to believe that forecasters have been markedly more successful in forecasting the kinds of variables which are typically considered to be independent in forecasting equations than they have been in forecasting the variables which are considered dependent.

And of the latter,

In spite of the grave limitations of correlation analysis, it will undoubtedly continue to be widely used. One of the reasons is that it is one of the very few techniques which can be readily learned by people receiving low wages and which has the comforting -- albeit superficial -- appearance of "scientific" precision.

Lorie also notes, as is now accepted in many quarters, that

...it is unreasonable to expect that more complicated massaging of numbers according to conventional statistical techniques is likely to produce very much more successful results in the future.

A similar sentiment appeared in my favorite forecasting article of the 21st century (Makridakis & Taleb, "Living in a World of Low Levels of Predictability," International Journal of Forecasting Vol. 25, No. 4 (Oct-Dec 2009), pp. 840-844):

  • Statistically sophisticated, or complex, models fit past data well, but do not necessarily predict the future accurately...
  • "Simple" models do not necessarily fit past data well, but predict the future better than complex or sophisticated statistical models.

We'll continue the Lorie synopsis in the next post...

------------

*Philips is not so obscure among learned forecasters, as he was quoted in a 2013 Foresight article by Roy Batchelor: "A computer once beat me at chess. But it was no match for me at kickboxing." However, I have yet to find academic citations for Wolfberg's "The Bris" or "The Rigid Sigmoidoscopy."


ATM Replenishment: Forecasting + Optimization

Why do people steal ATMs? Because that's where the money is!!!

While the old "smash-n-grab" remains a favorite modus operandi of would-be ATM thieves, the biggest brains on the planet typically aren't engaged in such endeavors (see Thieves Steal Empty ATM, Chain Breaks Dragging Stolen ATM, An A for Effort).

And of course, as we learned in Breaking Bad, successfully stealing an ATM (but then insulting your crime partner) can have unfortunate mind-numbing consequences.

The ATM Replenishment Problem

Suppose you operate hundreds of ATMs, processing millions of customer transactions a month. You want to keep your customers happy (no out-of-cash or other down time situations), yet minimize the cost of restocking the machines.

It turns out that managing ATMs is even more difficult than stealing one, and this was the challenge faced by DBS Bank in Singapore. With a network of 1,100 ATMs, there is an ever-present threat of inconveniencing customers any time an ATM runs out of cash or is otherwise out of service. Replenishment trips are costly (can you imagine the gas mileage on those armored trucks, even with oil under $50/barrel?). And when you reload an ATM that isn't running low on cash, you lose in two ways (wasting resources on an unnecessary trip, and temporarily making the ATM unavailable to customers while it is being reloaded).

Fortunately there are bigger brains than the criminals thinking about the ATM replenishment problem. With the help of my colleagues from SAS Advanced Analytics R&D, DBS solved their problem and received top honors from the Singapore government for Most Innovative Use of Infocomm Technology. (See this write-up from Analytics magazine.)

Forecasting + Optimization

ATM replenishment is a perfect example of combining two areas of advanced analytics, forecasting and optimization. For DBS Bank, the first step was to understand withdrawal activity. Withdrawal rate is impacted by many factors, such as location, day of week, day of month, and time of day, and can be dramatically impacted by holidays or other special events.

Once you have a reasonably reliable forecast of customer activity at each ATM location, the next step (which helped DBS win the honors) is to convert the forecast into a daily execution plan for optimal reloading at just the right time. Since implementing the solution, DBS has been able to reduce cash-outs by 90%, reduce the number of customers impacted by the reloading process by 350,000 versus prior year, reduce the amount of returned cash (that was leftover in the ATM when it was reloaded) by 30%, and reduce the number of costly replenishment trips by 10%!
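To give a flavor of the two-stage idea (and only a flavor -- this toy sketch is not DBS's actual solution), here is a greedy reload schedule driven by a hypothetical daily withdrawal forecast for a single ATM: reload only when the forecast says the machine would otherwise run out, which keeps the number of trips down.

```python
# Toy sketch of forecasting + optimization for one ATM (not DBS's actual solution):
# given a daily withdrawal forecast, schedule reloads as late as possible so the
# machine never forecast-runs-out, minimizing the number of trips.

forecast = [4200, 3800, 5100, 9000, 7500, 2900, 3100, 4600, 8800, 6200]  # next 10 days
capacity = 20000      # cash the ATM holds when full
opening  = 9000       # cash on hand today

cash, reload_days = opening, []
for day, demand in enumerate(forecast):
    if cash < demand:            # would run out today, so reload first thing
        reload_days.append(day)
        cash = capacity
    cash -= demand

print(f"Reload on days {reload_days} ({len(reload_days)} trips over {len(forecast)} days)")
```

A real solution would also weigh trip costs against the cost of idle cash and re-optimize as actuals come in, but the division of labor is the same: forecast first, then optimize the execution plan.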

There are plenty of applications of forecasting + optimization outside ATM replenishment. For example, any company operating multiple production or distribution sites (or considering opening new ones) could benefit from a similar approach. First, get a good understanding of the timing and geographical location of customer demand. Then, optimize the placement of facilities or production lines. Revenue management, used by airlines and hotels to dynamically adjust pricing, is another example.

 


FVA Interview with Jonathon Karelse


In December the Institute of Business Forecasting published the first of a new blog series on Forecast Value Added. Each month I will be interviewing an industry forecasting practitioner (or consultant/vendor) about their use of FVA analysis.

The December interview featured Jonathon Karelse, co-founder of NorthFind Partners. Among his key points:

  • Utilize metrics weighted by profit. With limited time and resources, this focuses your attention on actions that impact earnings.

You may not always have reliable data on margin / profit. But if you can trust the numbers, this is a great way to direct your improvement efforts to those products that make the most difference. (Extremely low volume / low revenue / low margin items may not be worth spending any effort on.)
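One simple way to build such a metric -- a sketch of my own, not necessarily Jonathon's exact formulation -- is to weight each item's absolute percent error by the margin dollars at stake:

```python
# Profit-weighted error sketch: weight each item's absolute percent error by the
# margin dollars it represents. Items and numbers are invented for illustration.

items = {                      # item: (actual units, forecast units, margin per unit $)
    "A": (1000, 800, 12.0),
    "B": ( 500, 650,  2.5),
    "C": (  50,  20,  0.4),
}

num = den = 0.0
for actual, forecast, margin in items.values():
    weight = actual * margin               # margin dollars at stake for this item
    num += weight * abs(actual - forecast) / actual
    den += weight

print(f"Profit-weighted MAPE: {100 * num / den:.1f}%")
```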

  • Don't overlook measuring forecast bias.

Company forecasts are often overly optimistic. But Jonathon points out the situation where chronic supply shortages have led Sales forecasters to chronically under-forecast (not wanting their targets tied to quantities they don't believe can actually be built). This can potentially perpetuate the shortages.

  • Compare performance to a naïve model.

The traditional random walk may be considered too simplistic, so Jonathon suggests using a seasonal random walk, simple exponential smoothing, or a moving average. While I suggest always utilizing the random walk as the ultimate point of comparison, I agree that other extremely simple models are appropriate to use for comparison (and they often forecast reasonably well). Early in my career, in a very stable low-growth business, we compared our forecasts to a 52-week moving average.
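For concreteness, here is how those simple benchmarks might be computed for a monthly series (the history below is invented):

```python
# The simple benchmarks mentioned above, sketched for a monthly series (invented data).

series = [120, 135, 150, 180, 110, 125, 140, 160, 190, 115, 130, 145,
          125, 140, 155, 185, 112, 128]          # 18 months of history

random_walk = series[-1]                          # last observation
seasonal_rw = series[-12]                         # same month last year
moving_avg  = sum(series[-12:]) / 12              # 12-month moving average

alpha, ses = 0.2, series[0]                       # simple exponential smoothing
for y in series[1:]:
    ses = alpha * y + (1 - alpha) * ses

print(f"Random walk {random_walk}, seasonal RW {seasonal_rw}, "
      f"12-mo MA {moving_avg:.1f}, SES {ses:.1f}")
```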

  • The FVA metric resonates with management.

FVA is easy to understand, and can be a key metric for root cause analysis and corrective action.

Jonathon uses a deseasonalized CV to reduce the risk of false positives. (While high CV generally implies lower forecastability, a highly seasonal item will have high CV but can be quite forecastable if its patterns are consistent and repeating.)
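A deseasonalized CV could be constructed along these lines (one plausible construction with invented data, not necessarily Jonathon's exact method): divide out a simple multiplicative seasonal index, then compute the CV on what remains.

```python
# Sketch of a deseasonalized CV: divide out a simple multiplicative seasonal index,
# then compute the coefficient of variation on the remainder. Data are invented.

def cv(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd / mean

series = [100, 90, 110, 300, 105, 95, 115, 310, 98, 92, 112, 305,
          102, 88, 118, 320, 100, 94, 120, 315, 96, 90, 110, 308]  # strong quarterly spike

period = 4
overall_mean = sum(series) / len(series)
index = []
for p in range(period):
    vals = series[p::period]                       # all observations in this "season"
    index.append((sum(vals) / len(vals)) / overall_mean)

deseasonalized = [x / index[t % period] for t, x in enumerate(series)]

print(f"Raw CV {cv(series):.2f}  vs  deseasonalized CV {cv(deseasonalized):.2f}")
```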

  • Discourage arbitrary performance goals (such as MAPE < 20%).

Focus on improvement. (Here's how to set performance objectives.)

Find the full interview at www.demand-planning.com, and don't miss the money quote:

FVA is easy! If you aren’t using it, you are missing a critical indicator of your organization’s forecasting performance.

Meet Jonathon Karelse at IBF Conference

You can meet Jonathon in person next month, February 22-24, at IBF's Supply Chain Forecasting Conference in Scottsdale, AZ. He will be co-presenting The Art and Science of Forecasting: When to Use Judgment and Forecast Value Add (FVA) Analysis with his client Finning South America.

Coming soon, IBF will post the January FVA interview with Shaun Snapp of SCM Focus.


Forecasting research project ideas

There are some things every company should know about the nature of its business. Yet many organizations don't know these fundamentals -- either because they are short on resources, or their resources don't have the analytical skills to do the work.

The summer research projects offered by the Lancaster Centre for Forecasting offer a cost-effective way to get yourself some answers.

Project Ideas

If you haven't done these things already, here are a few of my personal favorite projects to get started:

  • Compare your last year of forecasting performance to a naïve model.

This is the start of any forecasting improvement endeavor -- find out how you are doing today. Don't compare your performance to industry benchmarks; those are irrelevant. Find out whether your process performs at least as well as a simple method, such as a random walk or moving average forecast. (And don't be surprised to learn you are forecasting worse!)

  • Evaluate the volatility of demand for your products or services.

The Coefficient of Variation is a crude and imperfect, yet still useful indicator of the "forecastability" of your demand patterns. Low CV implies that you should be able to forecast fairly accurately with simple methods. High CV implies that you probably can't expect to forecast as accurately -- although some high CV patterns (e.g. something with lots of seasonality but stable, repeating patterns) can be forecast well.

  • Create the "comet chart" relating volatility to forecast accuracy.

Get a visual summary of your forecasting challenges by seeing how volatility and forecast accuracy are related. Use this as motivation to find ways to reduce the volatility of demand patterns.
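If you want to sketch one yourself, a comet chart is just a scatter plot with one point per item: volatility (CV) on the horizontal axis and forecast error (MAPE) on the vertical axis. The data below are randomly generated purely for illustration.

```python
# Sketch of a "comet chart": one point per item, CV on the x-axis, MAPE on the y-axis.
# Randomly generated data for illustration only.

import random
import matplotlib.pyplot as plt

random.seed(1)
cv   = [random.uniform(0.1, 1.5) for _ in range(200)]       # item volatility
mape = [15 + 45 * c + random.gauss(0, 8) for c in cv]        # error tends to rise with CV

plt.scatter(cv, mape, alpha=0.4)
plt.xlabel("Coefficient of Variation (volatility)")
plt.ylabel("MAPE (%)")
plt.title("Comet chart: forecast error vs. demand volatility")
plt.show()
```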

  • Map out your forecasting process, and review the last year of forecasts at each step (e.g. statistical forecast, analyst override, consensus override, executive-approved forecast).

If, like many companies, you aren't recording the data at each step, then start doing so. Use FVA to determine which steps and participants in the forecasting process are tending to make the forecast better, and eliminate those process steps that are just making it worse. (For more information, view the Foresight/SAS Webinar "FVA: A Reality Check on Forecasting Practices.")

  • Replicate Steve Morlidge's analyses of forecast quality.

In a series of articles published in Foresight, Morlidge defined the "avoidability" of forecast error, and illustrated the value of a RAE (relative absolute error) metric for evaluating performance. Read the Foresight articles, find discussion of Morlidge's methodology in several earlier BFD posts (such as this one), and view his recording from the Foresight/SAS Webinar series, "Avoidability of Forecast Error".
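As I read Morlidge's articles, RAE is simply the MAE of your process forecast divided by the MAE of the naïve (random walk) forecast, so an RAE below 1 means you are adding value. A minimal sketch with invented numbers:

```python
# Relative Absolute Error (my reading of Morlidge's metric):
# RAE = MAE of the process forecast / MAE of the naive (random walk) forecast.
# RAE < 1 means the process beat the naive model. Numbers are invented.

actuals = [100, 110,  95, 120, 105, 115]
process = [ 98, 115,  92, 112, 110, 118]
naive   = [105, 100, 110,  95, 120, 105]   # prior period's actual

mae = lambda a, f: sum(abs(x - y) for x, y in zip(a, f)) / len(a)
rae = mae(actuals, process) / mae(actuals, naive)

print(f"RAE = {rae:.2f}  ({'adds value' if rae < 1 else 'worse than naive'})")
```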

Doing these will give you a good foundation on which to do further research...


Do you have a forecasting research project?

The Lancaster Centre for Forecasting is seeking Master Student Projects in Forecasting, Data Mining, or Analytics for summer of 2015.

Projects normally run from mid-May to mid-August, with reports issued a few weeks after. These projects are a cost-efficient way for a company to carry out analytical work by Master of Science candidates who are formally trained in forecasting. Many of the students have additional skills in areas like marketing analytics, logistics and supply chain, operations research and optimization, and simulation.

Costs are GBP 2,900 (about $4,525 USD) for a single MSc student for four months, plus travel expenses. Discounts are available for additional students, non-profits, and SMEs.

Students will normally work on-site at your organization, under the joint supervision of a project leader at your company and a forecasting expert from the Forecasting Centre. For a well-structured problem, the student may remain in Lancaster with no or only occasional site visits.

For more information, and to discuss potential topics, contact:

Dr. Sven F. Crone

Assistant Professor in Management Science (Lecturer) & Director, Lancaster Research Centre for Forecasting

Lancaster University Management School, Room A53a, Lancaster, LA1 4YX  | T: +44 (0)1524 5-92991 | F: +44 (0)1524 844885 | M: +49 (0)171 4910100 | W: www.lums.lancs.ac.uk/forecasting | E: s.crone@lancaster.ac.uk
