New! "Demand-Driven Forecasting" course from SAS

Charlie Chase

My colleague Charlie Chase, Advisory Industry Consultant and author of the book Demand-Driven Forecasting, has developed a new course for the SAS Business Knowledge Series (BKS): Best Practices in Demand-Driven Forecasting.

The 2-day course will be offered for the first time April 20-21 in Atlanta (and then again September 24-25 in Chicago). From the description:

Demand forecasting continues to be one of the most sought-after capabilities for improving supply-chain management across all industries around the world. Over the past two decades, companies have neglected demand forecasting as they have chosen to improve upstream efficiencies related to supply-driven planning activities. Due to recent economic conditions, globalization, and market dynamics, companies are realizing that demand forecasting and planning affects almost every aspect of the supply chain. Companies have also learned that focusing exclusively on supply is a recipe for failure. As a result, there is a renewed focus on improving the accuracy of their demand response. This course provides a structured framework to transition companies from being supply-driven to becoming demand-driven, with an emphasis on customer excellence rather than operational excellence.

Attendees receive a complimentary copy of Demand-Driven Forecasting.

Stick around after the Atlanta course to attend the IBF conference on Predictive Business Analytics, where Charlie will be co-presenting with Chad Schumacher of Kellogg's on the topic of Using Multitiered Causal Analysis. (In the meantime, download the whitepaper Using Multitiered Causal Analysis to Improve Demand Forecasts and Optimize Marketing Strategy.)

Also, be on the lookout for Charlie's new blog, which will be starting soon. (A link will be provided when available.)

Find more information about the course in Maggie Miller's interview, "6 Questions with Forecasting Expert Charlie Chase."

More Learning Options from the SAS Business Knowledge Series

A recent BFD post characterized "Offensive vs. Defensive Forecasting," and Charlie's course covers the offensive side. But the BKS also covers the defensive side of forecasting in the course Forecast Value Added Analysis, which will be offered over the web on the afternoons of May 7-8, and again September 28-29.

My colleague Chip Wells (co-author of Applied Data Mining for Forecasting Using SAS) has expanded my 3-hour FVA workshop into two half-day sessions, providing additional content, exercises, and examples. Chip brings a strong statistical and data mining background to the topic of FVA, as well as considerable experience teaching and consulting with SAS forecasting customers. If you want to expand your knowledge of FVA analysis, this is a great way to do so from the comfort of your own office. (Get started by downloading the whitepaper Forecast Value Added Analysis: Step-by-Step.)


SAS analytics and forecasting news

♦We learned this week that SAS is ranked #4 on Fortune's 100 Best Companies to Work For in 2015. This makes six straight years ranking in the top four (including twice at #1).

♦The March/April 2015 issue of Analytics Magazine includes a SAS company profile by my colleague Kathy Lange. Since Analytics is a publication of INFORMS (the Institute for Operations Research and the Management Sciences), the profile focuses on SAS offerings in advanced analytics and operations research (including optimization and discrete event simulation).

♦SAS exhibited at last month's Institute of Business Forecasting Supply Chain Forecasting Conference.

♦In April, my colleague Charlie Chase will be co-presenting (with Chad Schumacher of Kellogg's) at IBF's first Predictive Business Analytics Forecasting & Planning Conference (Atlanta, April 22-24). Their topic is "Using Multi-Tiered Causal Analysis to Synchronize Demand and Supply." Charlie has written extensively about MTCA in his book Demand-Driven Forecasting, and you can download the SAS whitepaper "Using MTCA to Improve Demand Forecasts and Optimize Marketing Strategy."

♦SAS Research & Development has been busy as usual, and in 2014 was awarded five forecasting-related patents:

  • Computer-implemented systems and methods for flexible definition of time intervals

Inventors: Tammy Jackson, Michael Leonard, Keith Crowe

  • Attribute based hierarchy management for estimation and forecasting

Inventors: Burak Meric, Alex Chien, Thomas Burkhardt

  • Systems and methods for propagating changes in a demand planning hierarchy

Inventor: Vic Richard

  • System and methods for retail forecasting utilizing forecast model accuracy criteria, holdout samples and marketing mix data

Inventors: Alex Chien and Yongqiao Xiao

  • Computer-implemented systems and methods for forecasting and estimation using grid regression

Inventor: Vijay S. Desai


Don't fine-tune your forecast!

Does your forecast look like a radio? No? Then don't treat it like one.

A radio's tuning knob serves a valid purpose. It lets you make fine adjustments, improving reception of the incoming signal and resulting in a clearer, more enjoyable listening experience.

But just because you can make fine adjustments to your forecast doesn't mean you should. In fact, you shouldn't.

Two Things Can Happen -- And One of Them is Bad

Famed college football coach Woody Hayes (fired unceremoniously in 1978 for punching an opposing player) was known for powerful teams that ran the ball, eschewing the forward pass. Of the latter, he is credited with saying "When you pass the ball three things can happen, and two of them are bad." [For those unfamiliar with American football, the good thing is a pass completion, and the bad things are an incompletion or an interception by the opposing team.]

Whenever you adjust a forecast two things can happen -- you can improve the accuracy of the forecast, or make it worse.

Obviously, if you make the adjustment in the wrong direction (e.g., lowering the forecast when actuals turn out to be higher), a bad thing has happened -- you've made the forecast worse. But you can also make overly aggressive adjustments in the right direction and overshoot, making the forecast worse. (For example, initial forecast of 100, adjusted forecast of 110, actual turns out to be 104.)
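Here is a minimal sketch of that logic, using made-up numbers (an illustration only, not anyone's production scoring code): compare the absolute error before and after the adjustment to see whether the adjustment actually helped.

```python
def evaluate_adjustment(original, adjusted, actual):
    """Classify a forecast adjustment by comparing absolute errors.

    Returns 'improved' if the adjusted forecast is closer to the actual,
    'made worse' if it is farther away (wrong direction or overshoot),
    and 'no change' if the errors are equal.
    """
    error_before = abs(actual - original)
    error_after = abs(actual - adjusted)
    if error_after < error_before:
        return "improved"
    if error_after > error_before:
        return "made worse"
    return "no change"

# An overly aggressive adjustment in the right direction can still overshoot:
print(evaluate_adjustment(original=100, adjusted=110, actual=104))  # made worse
# A modest, directionally correct adjustment improves the forecast:
print(evaluate_adjustment(original=100, adjusted=101, actual=102))  # improved
```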

When you make just a small adjustment, there is little chance of overshooting. So as long as you are directionally correct, you have improved the forecast. But even if we assume every small adjustment is directionally correct, is that reason enough to spend time making small adjustments?

No. And here's why not:

First, recognize that "small adjustment" means small as a percentage of the original forecast. So changing a forecast from 100 to 101 is a "small" adjustment, just 1%. Likewise, changing 1,256,315 to 1,250,000 would be considered a small adjustment (0.5%), even though the change is more than 6,300 units.

Another way to characterize adjustments is their relevance -- whether they are significant enough to cause changes in decisions and plans.

On this criterion, small adjustments are mostly irrelevant. An organization is probably not going to grind to a halt, scuttle existing plans, and suddenly change direction just because of a 1% adjustment in a forecast.

[Note that even "large" forecast adjustments may be irrelevant, when they don't require any change in plans. This could happen for very low value items, such as 1/4" galvanized washers sold at a hardware store. Such items are usually managed via simple replenishment rules, like a two-bin inventory control system. Unless the forecast change is so large that current bin sizes are deemed inappropriate, no action will be taken.]

Can't Small Adjustments Make a Big Improvement in Accuracy?

It's true that even a small adjustment can make a big improvement in forecast accuracy. Changing the forecast from 100 to 101, when actuals turn out to be 102, means you cut the error in half! (On the other hand, if actuals turned out to be 200, then you only reduced forecast error by 1%.)

But the purpose of forecasting is to help managers make better decisions, devise better plans, and run a more effective and profitable organization. Improved forecast accuracy, in itself, has no value unless it results in improved organizational performance.

So if a small forecast adjustment does not change any of the behavior (or resulting outcomes) of the organization, why bother? Making small adjustments takes effort and resources, yet delivers nothing in return -- it is simply a waste of time.


Offensive vs. defensive forecasting

Sports provide us with many familiar clichés about playing defense, such as:

  • Defense wins championships.
  • The best defense is a good offense.

Or my favorite:

  • The best defense is the one that ranks first statistically in overall defensive performance, after controlling for the quality of the offenses it has faced.

Perhaps not the sort of thing you hear from noted scholars of the game like Charles Barkley, Dickie V, or the multiply-concussed crew of Fox NFL announcers. But it captures the essential fact that performance evaluation, when done in isolation, may lead to improper conclusions. (A team that plays a weak schedule should have better defensive statistics than one that plays only against championship caliber teams.)

Likewise, when we evaluate forecasting performance, we can't look simply at the MAPE (or other traditional metric) that is being used. We have to look at the difficulty of the forecasting task, and judge performance relative to the difficulty.

Offensive Forecasting

It is possible to characterize forecasting efforts as either offensive or defensive.

Offensive efforts are the things we do to extract every last bit of accuracy we can hope to achieve. This includes gathering more data, building more sophisticated models, and incorporating more human inputs into the process.

Doing these things will certainly add cost to the forecasting process. The hope is that they will make the forecast more accurate and less biased. (Just be aware that, by a curious quirk of nature, complexity can work against accuracy, as the forthcoming Green & Armstrong article "Simple versus complex forecasting: The evidence" discusses.)

Heroic efforts may be justified for important, high-value forecasts that have a significant impact on overall company success. But for most things we forecast, it is sufficient to come up with a number that is "good enough" to make a planning decision. An extra percentage point or two of forecast accuracy -- even if it could be achieved -- just isn't worth the effort.

Defensive Forecasting

A defensive forecaster is not so much concerned with how good a forecast can be, but rather, with avoiding how bad a forecast can be.

Defensive forecasters recognize that most organizations fail to achieve the best possible forecasts. Many organizations actually forecast worse than doing nothing -- that is, worse than simply using the latest observation as the forecast. As Steve Morlidge reported in Foresight, 52% of the forecasts in his study sample failed to improve upon the naïve model. So more than half the time, these organizations were spending resources just to make the forecast worse.

The defensive forecaster can use FVA analysis to identify those forecast process steps that are failing to improve the forecast. The primary objective is to weed out wasted efforts, to stop making the forecast worse, and to forecast at least as well as the naïve model.
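A minimal sketch of what that comparison might look like appears below. The data and process steps are invented for illustration, and MAPE is used only because it is familiar -- any consistent error metric works.

```python
# Minimal FVA-style comparison: measure each step of the forecasting
# process against the naïve (random-walk) model. Numbers are made up.

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals     = [105, 98, 110, 120, 102, 95]
naive       = [100, 105, 98, 110, 120, 102]   # last period's actual used as the forecast
statistical = [102, 100, 104, 112, 108, 100]  # software-generated forecast
override    = [110, 108, 115, 118, 112, 105]  # after analyst adjustments

mape_naive = mape(actuals, naive)
print(f"naive model: MAPE = {mape_naive:.1f}%")
for name, fcst in [("statistical forecast", statistical), ("analyst override", override)]:
    fva = mape_naive - mape(actuals, fcst)  # positive = value added vs. the naive model
    print(f"{name}: MAPE = {mape(actuals, fcst):.1f}%, FVA = {fva:+.1f} points")
```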

Once the organization is forecasting at least as well as the naïve model, then it is time to hand matters back over to the offensive forecasters -- to extract every last percent of accuracy that is possible.


FVA interview with Shaun Snapp

Shaun Snapp

The Institute of Business Forecasting's FVA blog series continued in January, with my interview of Shaun Snapp, founder and editor of SCM Focus.

Some of Shaun's answers surprised me -- for example, that he doesn't compare performance to a naïve model (which I see as the most fundamental FVA comparison). But he went on to explain that his consulting mainly involves software implementation and tuning (often with SAP). His work stops once the system is up and generating a forecast, so he is generally not involved directly with the planners or the forecasting process.

Shaun notes that most of the companies he works with don't rely on the statistical forecast generated by their software -- planners have free rein to adjust the forecasts. And yet, because it takes effort to track the value of those adjustments, it doesn't get done (planners are too busy making adjustments to step back and measure their own impact!).

He also notes that most of his work is focused on technical system issues -- he's found little demand for forecast input testing or other FVA-related services he can provide.

Discouragingly, he states he's never found a forecasting group that based its design on FVA. While clients may be receptive to the basic idea -- of applying a scientific approach to evaluating forecasting performance -- there are actually some groups where FVA is contrary to their interests. He gives the example of a sales group whose main interest is for the company to maintain an in-stock position on all their items. They have a lot of power within the company, and can achieve their objective by biasing the forecast high, thus forcing the supply chain to maintain excess inventory.

Read the full Shaun Snapp interview, and others in the FVA series, at www.demand-planning.com.

Coming in February: Interview with Steve Morlidge of CatchBull

Steve has been the subject of several previous BFD blog posts, exploring his groundbreaking work on the "avoidability" of forecast error (Part 1 of 4), forecast quality in the supply chain (Part 1 of 2), and a Q&A on his research (Part 1 of 4). He also delivered a Foresight/SAS webinar, "Avoidability of Forecast Error," that is available for on-demand viewing. Check the IBF blog site later this month for Steve's FVA interview.


Brilliant forecasting article from 1957!!! (Part 3)

This article isn't brilliant because we learn something new from it -- we really don't. What is amazing is finding, from someone writing in 1957, such a clear discussion of forecasting issues that still plague us today. If you can get past some of the Mad Men era words and phrasing, the article is wonderfully written and a fun read -- full of sarcastic digs at forecasting practice.

In this final installment we'll look at Lorie's handling of forecasting performance evaluation.

Problem 2: The Evaluation of Forecasts

Lorie states there are two main problems in evaluating forecasts:

  1. Determining accuracy.
  2. Determining economic usefulness.

To solve these, he suggests three principles:

A. The Superiority of Written Forecasts

When forecasts are not recorded, the usual consequence is that they "seem to become more and more accurate as they recede into the past where memory is inexact and usually comforting." But even when written down, there is danger of ambiguity.

Lorie takes special aim at financial analysts and economic forecasters, who find it "distressingly easy" to use broad designations like "markets" or "business activity" or "sales." Of course, without a rigorous operational definition of such terms, the accuracy of the forecasts cannot be judged. "Their usefulness, however, can; their usefulness is negligible."

Lorie's position is largely in line with Nate Silver's recent critique of economic forecasting as an "almost complete failure."

In addition to recording forecasts in a way specific enough to be measured (typically product, location, time period, units), Lorie argues for recording the method used to generate the forecast:

The absence of a record of the forecasting method makes it extremely difficult to judge what has been successful and what unsuccessful among the techniques for peering into the future.

By method, I will interpret this to mean, at a high level, what forecasting process was used. For example,

STATISTICAL FORECAST ==> ANALYST OVERRIDE ==> CONSENSUS OVERRIDE

Over time we can determine whether these individual steps are making the forecast any better (or worse) than using a simple naïve model.
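As one hedged illustration of what such a record might contain (the field names here are mine, not Lorie's), each forecast could be logged with enough specificity to score it later, plus a tag for the process step that produced it:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ForecastRecord:
    """One logged forecast: specific enough to be scored later, and tagged
    with the process step (method) that produced it."""
    product: str
    location: str
    period: date           # the time bucket being forecast
    units: float           # forecast quantity
    method: str            # e.g. "statistical", "analyst override", "consensus override"
    actual: float = None   # filled in once the period closes

records = [
    ForecastRecord("SKU-123", "Atlanta DC", date(2015, 4, 1), 1200, "statistical"),
    ForecastRecord("SKU-123", "Atlanta DC", date(2015, 4, 1), 1350, "analyst override"),
]
# Once actuals arrive, a log like this supports exactly the comparison Lorie
# asks for: which steps made the forecast better or worse than the naïve model.
```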

B. The Statistical Evaluation of Forecasting Techniques

Today there is growing recognition that relative metrics of forecasting performance are much more relevant and useful than the traditional accuracy or error metric by itself.

For example, to be told "MAPE=30%" is only mildly interesting. By itself, MAPE gives no indication of how easy or difficult a series is to forecast. It doesn't tell us what error would be reasonable to expect for the given series, and consequently, does not tell us whether our forecasting efforts were good or bad.

It is only by viewing the MAPE in comparison to some baseline of performance (e.g., the MAPE of a naïve forecast), that we can determine the "value added" by our forecasting efforts. This is what relative metrics such as FVA let you do.

Lorie gives an example: each day, the weather forecaster in St. Petersburg, Florida, can forecast the following day's weather to be clear and sunny, and by doing nothing will be correct 95% of the time. The forecaster in Chicago, even using the latest technology and most sophisticated methods, will get the following day's forecast right only 80% of the time. So does this mean the St. Petersburg forecaster is more skilled at his profession than the Chicago forecaster? Of course not!

If there is a point to the preceding example, it is that the statistical evaluation of forecasting techniques must take account of the variability of the series being forecast...the forecasting task in Chicago is much more difficult.

What is desired is measurement of the "marginal" contribution of the forecasting technique. What is desired is an indication of the extent to which one can forecast better because of the use of the forecasting technique than would be possible by sole reliance on some simple, cheap, and objective forecasting device.

Lorie has provided an almost perfect description of FVA analysis. In essence, it is nothing more than the application of basic scientific method to the evaluation of a forecasting process.

C. The Economic Evaluation of Forecasts

There can be asymmetry in the costs of our business decisions -- that is clearly true. For example, it makes sense to carry excess inventory on an item that costs us little to make and hold, yet yields huge revenue when sold. (Carrying too little inventory might save us a little on cost, yet we'd lose a lot of revenue to missed sales.)

Lorie asserts:

A forecasting technique is judged to be superior to alternatives according to an economic evaluation if the consequences of decisions based upon it are more profitable than decisions based upon the alternatives.

This seems to be saying that it is ok to bias your forecasts in a direction that is more economically favorable, but I disagree. While it is appropriate to bias your plans and actions in a way that will provide a more favorable economic outcome (as in the example above), I would contend that the forecast should remain an "unbiased best guess" at what is really going to happen in the future.
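To make the distinction concrete, here is a minimal newsvendor-style sketch -- standard textbook logic, not anything Lorie proposes, and the numbers are invented. The forecast remains an unbiased best guess, while the stocking decision is deliberately set above it because the cost of understocking exceeds the cost of overstocking:

```python
from statistics import NormalDist

# Unbiased forecast of demand: mean 100 units, standard deviation 20 (illustrative).
forecast_mean, forecast_sd = 100, 20

# Economics of the decision (illustrative numbers):
underage_cost = 8.0   # profit lost per unit of unmet demand
overage_cost  = 2.0   # cost per unit of leftover inventory

# Classic newsvendor critical ratio gives the service level to plan for.
critical_ratio = underage_cost / (underage_cost + overage_cost)   # 0.8

# The plan (order quantity) is biased upward; the forecast itself is not.
order_qty = NormalDist(forecast_mean, forecast_sd).inv_cdf(critical_ratio)
print(f"Unbiased forecast: {forecast_mean} units")
print(f"Economically optimal order quantity: {order_qty:.0f} units")
```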

I'm not convinced there can be an economic evaluation of the forecast. (Evaluating a forecast solely on accuracy, bias, and FVA may be sufficient). However, there should be an economic evaluation of the decision that was made.


Brilliant forecasting article from 1957!!! (Part 2)

Combining Statistical Analysis with Subjective Judgment (continued)

After summarily dismissing regression analysis and correlation analysis as panaceas for the business forecasting problem, Lorie turns next to "salesmen's forecasts."* He first echoes the assumption that we still hear today:

This technique of sales forecasting has much to commend it. It is based upon a systematic collection and analysis of the opinions of men who, among all the company's employees, are in closest contact with dealers and ultimate consumers.

But Lorie points out the "inherent deficiencies" of relying solely on sales force input, "for which it may be impossible to devise effective remedies." These are:

  • Unreasonably assumes that sales people have the "breadth and depth of understanding" of the pervasive influences on demand. (Do they have any skill at forecasting?)
  • Sales jobs turn over frequently, so sales people providing forecasts are often inexperienced, and we don't have enough data to determine their biases. (Will they give us an honest answer?)
  • Does not incorporate "competent statistical analysis" of historical sales data which could be combined with the sales force inputs.

Lorie also disses the use of consumer surveys as costly, impractical, and unproven to be of value except in limited circumstances.

Two Solutions

The message is not all negative. Lorie provides two solutions for combining statistics with judgment: the filter technique and the skeptic's technique. I'm not as interested in the specific techniques as in his overall approach to the problem -- which, in the filter technique, is to focus on economy of process. Start "with an extremely simple and cheap process to which additional time and money are devoted only up to the point at which the process becomes satisfactory."

...the process provides an objective record of both sales forecasts and the methods by which they are made so that study of this record can be a means for continual improvement in the forecasting process.

(You can find details about the filter technique in the article.)

The skeptic's technique applies process control ideas, akin to Joseph & Finney's "Using Process Behaviour Charts to Improve Forecasting and Decision-Making" (Foresight 31 (Fall 2013), pp. 41-48). Starting with "limited faith" in the persistence of historical forces that affect sales:

  • Project future sales with a simple trend line.
  • Compute two standard deviations on each side of the line to create a range within which future sales should fall the vast majority of time (if historical forces continue to work in the same way).

Lorie points out that this work could be done by statistical clerks "whose rate of pay is substantially less than that of barbers or plumbers."

  • The forecaster then solicits forecasts from company experts (who, "incidentally, usually receive substantially more than barbers or even plumbers").
  • If the expert's forecast falls within the range limits of the statistical forecast, it is accepted. If it falls outside the limits, even after reconsideration (asking "the gods for another omen"), the forecaster has to decide what to do. (A rough sketch of the whole procedure appears after the excerpt below.)

Lorie wryly points out that making a decision is something the forecaster has avoided up to this point.

For expert forecasts outside the statistical forecast limits, Lorie states:

...experience has indicated that the forecast in a vast majority of cases would have been more accurate if the experts' forecast had arbitrarily been moved to the nearest control limit provided by the statistical clerk rather than being accepted as it was.
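For the curious, here is the promised rough sketch of the skeptic's technique in modern terms. It is my simplification of Lorie's procedure, not his exact method, and the history and expert forecast are made up: fit a simple trend to history, build a ±2 standard deviation band around its projection, and pull any expert forecast outside the band back to the nearest limit.

```python
from statistics import mean, stdev

def skeptics_forecast(history, expert_forecast):
    """Sketch of the skeptic's technique: OLS trend + 2-sigma band, then clamp."""
    n = len(history)
    x = list(range(n))
    # Ordinary least-squares trend line through the history.
    x_bar, y_bar = mean(x), mean(history)
    slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, history)) / \
            sum((xi - x_bar) ** 2 for xi in x)
    intercept = y_bar - slope * x_bar
    # Residual spread around the trend, then a +/- 2 SD band for the next period.
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, history)]
    projection = intercept + slope * n
    lower = projection - 2 * stdev(residuals)
    upper = projection + 2 * stdev(residuals)
    # Accept the expert's number inside the band; otherwise move it to the nearest limit.
    return min(max(expert_forecast, lower), upper)

history = [100, 104, 103, 108, 112, 111, 115, 118]
print(skeptics_forecast(history, expert_forecast=160))  # clamped to the upper limit
```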

In Part 3 we'll look at Lorie's remarks on the evaluation of forecasts -- and his 1957 precursor to what we now call FVA!

---------------

*The role of the sales force in forecasting is the subject of my recent Foresight article (Fall 2014), and of a forthcoming presentation at the International Symposium on Forecasting (Riverside, CA, June 24-27).


Guest blogger: Len Tashman previews Winter 2015 issue of Foresight

*** We interrupt discussion of James H. Lorie's 1957 article with this important announcement ***

Len Tashman

Hot off the wire, here is editor Len Tashman's preview of the Winter 2015 issue of Foresight:

Foresight kicks off its 10th year with the publication of a new survey of business forecasters: Improving Forecast Quality in Practice. This ongoing survey, designed at the Lancaster Centre for Forecasting in the UK, seeks to gain insights on where the emphasis should be put to further upgrade the quality of our forecasting practices. Initial survey results, presented by Robert Fildes, Director of the Lancaster Centre, and Fotios Petropoulos, former member of the Centre, examine these key aspects of forecasting practice: organizational constraints, the flow of information, forecasting software, organizational resources, forecasting techniques employed, and the monitoring and evaluation of forecast accuracy.

The survey is an important update to that conducted more than a decade ago by Mark Moon, Tom Mentzer, and Carlo Smith of the University of Tennessee. In his Commentary on the Lancaster survey, Mark Moon applauds the broad focus of the survey but raises the issue of whether the “practicing forecasters” surveyed are “developers” or “customers” of the forecasts.

We often find a significant difference in perception between those who are responsible for creating a forecast and those who use the forecast to create business plans.

In our section on Collaborative Forecasting and Planning, Foresight S&OP Editor John Mello writes that S&OP can not only improve collaboration within an organization, but also “change the company’s operational culture from one that is internally focused to one that better understands the potential benefits of working with other companies in the supply chain.” His article, Internal and External Collaboration: The Keys to Demand-Supply Integration, identifies and compares several promising avenues of external collaboration, including vendor-managed inventory (VMI); collaborative planning, forecasting, and replenishment (CPFR); retail-event collaboration; and various stock-replenishment methods currently in use by major manufacturers and retailers. The critical factor, John finds, is trust:

These processes all require the sharing of information between companies, joint agreement on the responsibilities of the individual companies, and a good deal of trust between the parties, since the responsibility for integrating supply and demand is often delegated to the supplier.

In a Commentary on the Mello article, Ram Ganeshan and Tonya Boone point out that the challenges of external collaboration arrangements are much greater when we consider their Extension Beyond Fast-Moving Consumer Goods, especially those goods with short life cycles. For these products, they argue, a different mind-set is required to achieve demand-supply integration.

Financial Forecasting Editor Roy Batchelor distills the lessons forecasters should learn from the failures to predict and control our recent global financial meltdown. A 2014 International Monetary Fund (IMF) report, Financial Crises: Causes, Consequences, and Policy Responses, examined the world economies’ 2007-09 financial crises to establish their causes and impacts, as well as the initiatives governments and central banks undertook to deal with them. The overall impression from this report, Roy writes in his review, entitled Financial Crises and Forecasting Failures, is that the authorities could have been speedier and more imaginative in their interventions in the financial sector. However, it is important to note that our forecasting models could have given a clearer picture of how economies might emerge from these crises. Roy probes into why the models didn’t see the crisis coming, and what upgrades to the models’ financial sectors might improve predictive performance in the future.

Jeffrey Mishlove’s Commentary on Roy’s review article argues that the real problem did not emanate from predictive failures, but rather from the inclination toward austerity that pervaded economic thinking, especially in Western Europe. Jeff says that, while he can’t argue with Roy’s conclusions that refinements in the scientific method and the gathering of empirical data are appropriate responses to financial crises, forecasts will always be vulnerable to confounding influences from unanticipated variables – no matter how much we refine and improve our methodologies.

Seasonality – intra-year patterns that repeat year after year – is a dominant and pervasive contributor to variations in our economy. But, as Roy Pearson writes in Giving Due Respect to Seasonality in Monthly Forecasting, the seasonal adjustments we make to economic data are poorly understood and lead to confusion in interpreting sales changes. Improved accounting for seasonality in monthly forecasts over 12-24 months can lead to a better understanding of the forces behind sales forecasts, and very likely to some reduction in forecast errors.


Brilliant forecasting article from 1957!!!

Brilliant, humorous, and obscure. Those words could describe two of my favorite comedians, Emo Philips* and the late Dennis Wolfberg.

They could also describe, with the addition of "exceedingly" brilliant, "scathingly" humorous, and "apparently totally" obscure, a 1957 article, "Two Important Problems in Sales Forecasting" by James H. Lorie (The Journal of Business Vol. 30, No. 3 (July 1957), pp. 172-179).

Lorie is not an unknown. When the article appeared, he was Associate Dean of the University of Chicago School of Business. He is credited with creating the first database of stock exchange prices, allowing the type of stock analysis we take for granted today.

Yet according to Google Scholar, the article has been cited just 11 times (none since 1991), and never in any of the familiar forecasting journals or texts. I didn't find it last year while researching an article on the role of the sales force in forecasting (Foresight 35 (Fall 2014), pp. 8-13), and only came across it last week cited as a reference -- within a reference -- to Igor Gusakov's "Data-Cube Forecasting for the Forecasting Support System" (pp. 25-32 in the same issue of Foresight).

Problem 1: Combining Statistical Analysis with Subjective Judgment

Lorie first addresses the (still unresolved) challenge of "combining the wisdom of experienced businessmen with statistical analysis...in order to achieve better forecasts." (There is no mention of businesswomen, who apparently didn't exist until Peggy Olson on Mad Men.)

Lorie reviews and critiques the common statistical forecasting methods of the time: regression and correlation. (Recall that R.G. Brown's Exponential Smoothing for Predicting Demand had been published just the year before.) Of the former,

Perhaps a more fundamental objection to regression analysis as a means for forecasting is that it merely transforms the forecasting problem from the dependent variable to the independent variables. It requires that the analyst forecast the levels of the independent variables such as national income or industry sales rather than the level of the dependent variable, sales of a particular company's product. There is certainly very little reason to believe that forecasters have been markedly more successful in forecasting the kinds of variables which are typically considered to be independent in forecasting equations than they have been in forecasting the variables which are considered dependent.

And of the latter,

In spite of the grave limitations of correlation analysis, it will undoubtedly continue to be widely used. One of the reasons is that it is one of the very few techniques which can be readily learned by people receiving low wages and which has the comforting -- albeit superficial -- appearance of "scientific" precision.

Lorie also notes, as is now accepted in many quarters, that

...it is unreasonable to expect that more complicated massaging of numbers according to conventional statistical techniques is likely to produce very much more successful results in the future.

A similar sentiment appeared in my favorite forecasting article of the 21st century (Makridakis & Taleb, "Living in a World of Low Levels of Predictability," International Journal of Forecasting Vol. 25, No. 4 (Oct-Dec 2009), pp. 840-844):

  • Statistically sophisticated, or complex, models fit past data well, but do not necessarily predict the future accurately...
  • "Simple" models do not necessarily fit past data well, but predict the future better than complex or sophisticated statistical models.

We'll continue the Lorie synopsis in the next post...

------------

*Philips is not so obscure among learned forecasters, as he was quoted in a 2013 Foresight article by Roy Batchelor: "A computer once beat me at chess. But it was no match for me at kickboxing." However, I have yet to find academic citations for Wolfberg's "The Bris" or "The Rigid Sigmoidoscopy."


ATM Replenishment: Forecasting + Optimization

Why do people steal ATMs? Because that's where the money is!!!

While the old "smash-n-grab" remains a favorite modus operandi of would-be ATM thieves, the biggest brains on the planet typically aren't engaged in such endeavors (see Thieves Steal Empty ATM, Chain Breaks Dragging Stolen ATM, An A for Effort).

And of course, as we learned in Breaking Bad, successfully stealing an ATM (but then insulting your crime partner) can have unfortunate, mind-numbing consequences.

The ATM Replenishment Problem

Suppose you operate hundreds of ATMs, processing millions of customer transactions a month. You want to keep your customers happy (no out-of-cash or other down time situations), yet minimize the cost of restocking the machines.

It turns out that managing ATMs is even more difficult than stealing one, and this was the challenge faced by DBS Bank in Singapore. With a network of 1,100 ATMs, there is an ever-present threat of inconveniencing customers any time an ATM runs out of cash or is otherwise out of service. Replenishment trips are costly (can you imagine the gas mileage on those armored trucks, even with oil under $50/barrel?). And when you reload an ATM that isn't running low on cash, you lose in two ways: you waste resources on an unnecessary trip, and you temporarily make the ATM unavailable to customers while it is being reloaded.

Fortunately there are bigger brains than the criminals thinking about the ATM replenishment problem. With the help of my colleagues from SAS Advanced Analytics R&D, DBS solved their problem and received top honors from the Singapore government for Most Innovative Use of Infocomm Technology. (See this write-up from Analytics magazine.)

Forecasting + Optimization

ATM replenishment is a perfect example of combining two areas of advanced analytics, forecasting and optimization. For DBS Bank, the first step was to understand withdrawal activity. Withdrawal rate is impacted by many factors, such as location, day of week, day of month, and time of day, and can be dramatically impacted by holidays or other special events.

Once you have a reasonably reliable forecast of customer activity at each ATM location, the next step (which helped DBS win the honors) is to convert the forecast into a daily execution plan for optimal reloading at just the right time. Since implementing the solution, DBS has been able to reduce cash-outs by 90%, reduce the number of customers impacted by the reloading process by 350,000 versus the prior year, reduce the amount of returned cash (left over in the ATM when it was reloaded) by 30%, and reduce the number of costly replenishment trips by 10%!
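As a toy illustration of the general pattern -- and emphatically not DBS's actual models -- the sketch below forecasts daily withdrawals with a simple day-of-week average, then finds the day on which the projected balance would fall below a safety threshold, which is when the next reload should be scheduled. The real solution optimizes reload timing and routing across the whole network; this only shows how a forecast feeds a replenishment decision for a single machine, with invented numbers.

```python
from statistics import mean

def forecast_daily_withdrawals(history_by_weekday, horizon_weekdays):
    """Very simple day-of-week forecast: average past withdrawals per weekday.
    history_by_weekday: dict weekday (0=Mon..6=Sun) -> list of past daily totals.
    horizon_weekdays: weekdays for the coming days, in order."""
    return [mean(history_by_weekday[d]) for d in horizon_weekdays]

def days_until_reload(current_cash, daily_forecast, safety_threshold):
    """How many days the ATM can run before projected cash drops below the
    safety threshold -- i.e. when to schedule the next reload."""
    balance = current_cash
    for day, withdrawal in enumerate(daily_forecast, start=1):
        balance -= withdrawal
        if balance < safety_threshold:
            return day
    return len(daily_forecast) + 1  # no reload needed within the horizon

# Illustrative numbers only.
history = {0: [38000, 41000], 1: [35000, 36000], 2: [34000, 33000],
           3: [36000, 37000], 4: [52000, 55000], 5: [60000, 64000], 6: [30000, 29000]}
forecast = forecast_daily_withdrawals(history, horizon_weekdays=[0, 1, 2, 3, 4, 5, 6])
print(days_until_reload(current_cash=150000, daily_forecast=forecast, safety_threshold=20000))
```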

There are plenty of applications of forecasting + optimization outside ATM replenishment. For example, any company operating multiple production or distribution sites (or considering opening new ones) could benefit from a similar approach. First, get a good understanding of the timing and geographical location of customer demand. Then, optimize the placement of facilities or production lines. Revenue management, used by airlines and hotels to dynamically adjust pricing, is another example.

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.