Guest blogger: Len Tashman previews Spring issue of Foresight

Editor Len Tashman's Preview of the Spring issue of Foresight

The Special Feature article of this 37th issue of Foresight, From Sales and Operations Planning to Business Integration, comes about through a rare but effective collaboration between an academic and a practitioner. The coauthors are Mark Moon, head of the Department of Marketing and Supply Chain Management at the University of Tennessee, Knoxville, and author of Demand and Supply Integration: The Key to World-Class Demand Forecasting, and Pete Alle, Vice President of Supply Chain for Oberweis Dairy, who oversees global process improvement and innovation across that company’s supply chain.

The article identifies the failings of organizational implementations of S&OP, including (a) planning horizons that are too short-term, (b) a process driven by the supply chain, and (c) a retrospective rather than forward-looking agenda in the monthly planning meetings.

To effectively drive business integration, the authors argue, changes may be required in organizational structure, processes, and culture, with leadership commitments, employee incentive structures, and training comprising critical components.

Next up is our section on Strategic Forecasting, which looks outside our immediate planning fences. Businesses too often have only a short-term focus in their planning and operations, taking a reactive posture to crises in their supply chains while failing to consider and manage events that present important risks and opportunities. So argue Tom Goldsby, Chad Autry, and John Bell, the authors of Thinking Big! Incorporating Macrotrends into Supply Chain Planning and Execution. They examine four key macrotrends and describe their potentially profound effects on corporate supply chains. These macrotrends are (1) global population growth and shifts, (2) interconnectedness and social leveling, (3) climate change and sustainability, and (4) resource scarcity and conflicts.

Forecasts of global population growth are a lynchpin for strategic planning, but it’s a questionable assumption that even official forecasts can be considered reliable. In The United Nations Probabilistic Population Projections: An Introduction to Demographic Forecasting with Uncertainty, noted population-forecasting experts Leontine Alkema, Patrick Gerland, Adrian E. Raftery, and John R. Wilmoth describe the state of the art here in providing margins of error around the best point forecasts. Their examples show just how wide the area of uncertainty is, providing a cautionary note to long-range forecasters and strategic planners.

Have Corporate Prediction Markets Had Their Heyday? Despite great acclaim for the potential of prediction markets as an efficient tool for aggregating individual judgments, corporate adoption has been limited. To find out why, Thomas Wolfram examined existing research and interviewed several dozen key business executives. He reports that problems with overall acceptance of the corporate prediction market (CPM) often stem from a lack of trust by management as well as a greater business focus these days on big data and social media content. Hope for the CPM persists, though – if certain obstacles can be finessed.

Intermittent demands have been a bugaboo for corporate forecasters, and numerous articles have been published – several in Foresight – that show how to make the best of a very difficult methodological challenge. An important aspect of the problem is the basis upon which we choose a “best” forecasting method. In his new article Measuring the Quality of Intermittent-Demand Forecasts: It’s Worse than We’ve Thought, Steve Morlidge clearly demonstrates that we must rethink the use of our most common accuracy metrics for selecting that optimal forecast method. The problem is acute, he writes, because many software applications use these metrics for performance evaluation and method selection; in doing so, they potentially provide us with poor feedback and inferior models, resulting in harmful consequences for inventory management.

Steve evaluates various metrics that have been offered to assess the performance of intermittent-demand models and proposes a bias-adjusted error metric that may be superior to its predecessors.

Our book review in this issue examines a new text that seeks to integrate demand forecasting with inventory management: Demand Forecasting for Inventory Control by Nick T. Thomopoulos, Professor Emeritus of the Illinois Institute of Technology. Reviewer Stephan Kolassa, Foresight’s Associate Editor, presents the high hopes he had for the book and goes on to frankly discuss his likes and misgivings.

This Spring 2015 issue concludes with our latest Forecaster in the Field interview, introducing Fotios Petropoulos, Foresight’s new Editor for Forecasting Support Systems. Fotios will be speaking at the upcoming International Symposium on Forecasting (ISF) on ways forward for our forecasting support systems, as part of Foresight’s Forecasting in Practice track.

Forecasting in Practice Track

The 35th annual ISF will be held in Riverside, California from June 21-24. Riverside was rated America’s eighth coolest city in a Forbes survey last year.

For the past three and a half decades, the ISF has been the principal forum for presentation of new research on forecasting methods and practices, bringing together researchers and practitioners from more than 30 countries around the world. View the 2015 program and its speakers at www.forecasters.org/isf/.

An integral part of this year’s ISF is the Forecasting in Practice track. The nine speakers in this track will examine forecasting and planning processes, and offer proposals for upgrading our forecasting performance:

  • Sustaining a Collaborative Forecasting Process (Simon Clarke, Coca-Cola Refreshments)
  • Improving Forecast Quality in Practice (Robert Fildes, Lancaster University)
  • Managing Supply Chains in a Digital Interconnected Economy (Ram Ganeshan, College of William and Mary)
  • Role of the Sales Force in Forecasting (Michael Gilliland, SAS Institute, Inc.)
  • Macro Trends and Their Implications for Supply Chain Management (Thomas Goldsby, The Ohio State University)
  • Harnessing your Judgment to Achieve Greater Forecast Accuracy (Paul Goodwin, University of Bath)
  • Strategies for Demand and Supply Integration (John Mello, Arkansas State University)
  • Managing the Performance of a Forecasting Process (Steve Morlidge, Satori Partners)
  • Forecasting Support Systems: Ways Forward (Fotios Petropoulos, Cardiff University)

Guest blogger: Dr. Chaman Jain previews Winter issue of Journal of Business Forecasting

JBF Special Issue on Predictive Business Analytics


Dr. Chaman Jain, professor at St. John’s University and editor-in-chief of the Journal of Business Forecasting, provides his preview of the Winter 2014-15 issue:

Predictive Business Analytics, the practice of extracting information from existing data to determine patterns, relationships, and future outcomes, is not new; it has been used in practice for many years. What is new, however, is the massive amount of data now available, the so-called Big Data, which can better support decision making and improve planning and forecasting performance. Additionally, we now have access to technology that can store, process, and analyze large amounts of data. The analysis modules found in these tools have algorithms that can handle even the most complex of problems. Above all, there is a growing awareness among businesses, data scientists, and those with data responsibilities that there is a wealth of hidden information that should be tapped for better decision making.

To make the most of Predictive Business Analytics, we need to prioritize which problems we’re trying to solve, which data and signals we should collect, where they are located, and how they can be retrieved and compiled. Once these steps have been taken, the data must be filtered and cleansed in preparation for analysis with a predictive analytics software tool, which can be MS-Excel, an open-source application, or any of the many other applications available in the marketplace.

Predictive Analytics and finding solutions from Big Data have no boundaries; they are applicable to nearly every industry and discipline. Ying Liu demonstrates how they are currently being used in retail, as well as in the education, healthcare, banking, and entertainment industries. Macy’s, for example, checks competitors’ prices on 10,000 articles daily to develop its counter-pricing strategy. The other authors in this issue focus mostly on applications in Demand Planning and Forecasting. Charles Chase illustrates how POS data can be used in sensing and shaping demand to improve sales and profits, while John Gallucci demonstrates how it can be used in managing new product launches.


Gregory L. Schlegel discusses not only how Predictive Analytics can be used to optimize the supply chain, but also how it can help manage risks. He describes how credit card and insurance companies have reduced their overall risk by using predictive analytics, and provides case studies of companies in the CPG/Grocery, High-Tech, and Automobile industries that are using it to drastically improve their bottom lines. He also explains how Volvo Car Corporation is targeting zero accidents and zero deaths or injuries by gaining insight from data downloaded each time its automobiles come to dealers for servicing. Using this data, Volvo wants to determine whether there are defective parts that require correction or issues with driver behavior that need to be addressed.

Eric Wilson and Mark Demers explain that with Predictive Analytics, businesses are looking to apply a more inductive analysis approach, not in lieu of, but in addition to, a deductive analysis. In inductive analysis, we set hypotheses and theories and then look for strong evidence of their truth. In other words, we search for what the data would tell us if they could talk. This contrasts with deductive reasoning, where we draw conclusions from the past.

Allan Milliken outlines the processes Demand Planners should follow to take full advantage of the power of Predictive Analytics. It can help classify products as forecastable or not, enabling planners to arrive at a consistent policy on which products should be built to order and which built to stock. It can also flag exceptions, allowing established corrective actions to be applied before it is too late. Larry Lapide demonstrates how Demand Planners can extract demand signals from the myriad data gathered from various sources, including a vast amount of electronic data drawn from the Internet. Mark Lawless discusses Predictive Analytics as a tool to peek into the minds of consumers to determine what types of products and services they are likely to purchase.

It is true that Predictive Analytics is not revolutionary, but its applications can certainly be considered innovative. The significant impact it can have will continue to change the way businesses manage demand, production, procurement, logistics, and more. There is an overwhelming belief that companies that can identify the right data and leverage it into actionable information are more likely to improve decisions, act in a timely manner, and gain a true competitive advantage. Advancing your organization’s analytics capability can be the great difference maker between companies that are performing well and those that are not.

Your comments on this special issue on Predictive Business Analytics & Big Data are welcome. I look forward to hearing from you.

Happy Forecasting!

Chaman L. Jain, Editor

St. John’s University | Email: Jainc@Stjohns.edu

IBF's Predictive Business Analytics Conference

Just a reminder that the Institute of Business Forecasting's first Predictive Business Analytics Forecasting & Planning Conference will be held in Atlanta, April 22-24. Speakers include my colleague Charlie Chase, as well as Greg Schlegel; both also contributed articles to the special issue of JBF.


New! "Demand-Driven Forecasting" course from SAS


My colleague Charlie Chase, Advisory Industry Consultant and author of the book Demand-Driven Forecasting, has developed a new course for the SAS Business Knowledge Series (BKS): Best Practices in Demand-Driven Forecasting.

The 2-day course will be offered for the first time April 20-21 in Atlanta (and then again September 24-25 in Chicago). From the description:

Demand Forecasting continues to be one of the most sought-after objectives for improving supply-chain management across all industries around the world. Over the past two decades, companies have ignored demand forecasting as they have chosen to improve upstream efficiencies related to supply-driven planning activities. Due to recent economic conditions, globalization, and market dynamics, companies are realizing that demand forecasting and planning affects almost every aspect of the supply chain. Also, companies have learned that focusing exclusively on supply is a recipe for failure. As a result, there is a renewed focus on improving the accuracy of their demand response. This course provides a structured framework to transition companies from being supply-driven to becoming demand-driven, with an emphasis on customer excellence rather than operational excellence.

Attendees receive a complimentary copy of Demand-Driven Forecasting.

Stick around after the Atlanta course to attend the IBF conference on Predictive Business Analytics, where Charlie will be co-presenting with Chad Schumacher of Kellogg's on the topic of Using Multitiered Causal Analysis. (Meantime, download the whitepaper Using Multitiered Causal Analysis to Improve Demand Forecasts and Optimize Marketing Strategy.)

Also, be on the lookout for Charlie's new blog, which will be starting soon. (A link will be provided when available.)

Find more information about the course in Maggie Miller's interview, "6 Questions with Forecasting Expert Charlie Chase."

More Learning Options from the SAS Business Knowledge Series

A recent BFD post characterized "Offensive vs. Defensive Forecasting," and Charlie's course covers the offensive side. But the BKS also covers the defensive side of forecasting in the course Forecast Value Added Analysis, which will be offered over the web on the afternoons of May 7-8, and again September 28-29.

My colleague Chip Wells (co-author of Applied Data Mining for Forecasting Using SAS) has expanded my 3-hour FVA workshop into two half-day sessions, providing additional content, exercises, and examples. Chip brings a strong statistical and data mining background to the topic of FVA, as well as considerable experience teaching and consulting with SAS forecasting customers. For anyone seeking to expand their knowledge of FVA analysis, this is a great way to do so from the comfort of your office. (Get started by downloading the whitepaper Forecast Value Added Analysis: Step-by-Step.)


SAS analytics and forecasting news

♦We learned this week that SAS is ranked #4 on Fortune's 100 Best Companies to Work For in 2015. This makes six straight years ranking in the top four (including twice at #1).

♦The March/April 2015 issue of Analytics Magazine includes a SAS company profile by my colleague Kathy Lange. Since the magazine is a publication of INFORMS (the Institute for Operations Research and the Management Sciences), the profile focuses on SAS offerings in advanced analytics and operations research (including optimization and discrete event simulation).

♦SAS exhibited at last month's Institute of Business Forecasting Supply Chain Forecasting Conference.

♦In April, my colleague Charlie Chase will be co-presenting (with Chad Schumacher of Kellogg's) at IBF's first Predictive Business Analytics Forecasting & Planning Conference (Atlanta, April 22-24). Their topic is "Using Multi-Tiered Causal Analysis to Synchronize Demand and Supply." Charlie has written extensively about MTCA in his book Demand-Driven Forecasting, and you can download the SAS whitepaper "Using MTCA to Improve Demand Forecasts and Optimize Marketing Strategy."

♦SAS Research & Development has been busy as usual, and in 2014 was awarded five forecasting-related patents:

  • Computer-implemented systems and methods for flexible definition of time intervals (Inventors: Tammy Jackson, Michael Leonard, Keith Crowe)
  • Attribute based hierarchy management for estimation and forecasting (Inventors: Burak Meric, Alex Chien, Thomas Burkhardt)
  • Systems and methods for propagating changes in a demand planning hierarchy (Inventor: Vic Richard)
  • System and methods for retail forecasting utilizing forecast model accuracy criteria, holdout samples and marketing mix data (Inventors: Alex Chien, Yongqiao Xiao)
  • Computer-implemented systems and methods for forecasting and estimation using grid regression (Inventor: Vijay S. Desai)


Don't fine-tune your forecast!

Does your forecast look like a radio? No? Then don't treat it like one.

A radio's tuning knob serves a valid purpose. It lets you make fine adjustments, improving reception of the incoming signal, resulting in a clearer and more enjoyable listening experience.

But just because you can make fine adjustments to your forecast doesn't mean you should. In fact, you shouldn't.

Two Things Can Happen -- And One of Them is Bad

Famed college football coach Woody Hayes (fired unceremoniously in 1978 for punching an opposing player) was known for powerful teams that ran the ball, eschewing the forward pass. Of the latter, he is credited with saying "When you pass the ball three things can happen, and two of them are bad." [For those unfamiliar with American football, the good thing is a pass completion, and the bad things are an incompletion or an interception by the opposing team.]

Whenever you adjust a forecast two things can happen -- you can improve the accuracy of the forecast, or make it worse.

Obviously, if you make the adjustment in the wrong direction (e.g., lowering the forecast when actuals turn out to be higher), a bad thing has happened -- you've made the forecast worse. But you can also make overly aggressive adjustments in the right direction and overshoot, making the forecast worse. (For example, initial forecast of 100, adjusted forecast of 110, actual turns out to be 104.)
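
To make this concrete, here is a minimal sketch (in Python, with made-up numbers; not from the original post) of scoring an override once the actual is known:

```python
def score_adjustment(original, adjusted, actual):
    """Classify a forecast override after the actual is observed."""
    if adjusted == original:
        return "no adjustment"
    orig_error = abs(actual - original)
    adj_error = abs(actual - adjusted)
    if adj_error < orig_error:
        return "improved"          # right direction, and did not overshoot too far
    elif adj_error > orig_error:
        return "made worse"        # wrong direction, or overshot the actual
    return "no change in error"

# The overshoot example from the text: 100 adjusted to 110, actual 104
print(score_adjustment(100, 110, 104))   # made worse (overshot by 6 vs. under by 4)
print(score_adjustment(100, 101, 102))   # improved (error cut from 2 to 1)
```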

When you make just a small adjustment, there is little chance of overshooting. So as long as you are directionally correct, you have improved the forecast. But even if we assume every small adjustment is directionally correct, is that reason enough to spend time making small adjustments?

No. And here's why not:

First, recognize that "small adjustment" means small as a percentage of the original forecast. So changing a forecast from 100 to 101 is a "small" adjustment, just 1%. Likewise, changing 1,256,315 to 1,250,000 would be considered a small adjustment (0.5%) even though the change is over 6,300 units.

Another way to characterize adjustments is their relevance -- whether they are significant enough to cause changes in decisions and plans.

On this criterion, small adjustments are mostly irrelevant. An organization is probably not going to grind to a halt, scuttle existing plans, and suddenly change direction just because of a 1% adjustment in a forecast.

[Note that even "large" forecast adjustments may be irrelevant, when they don't require any change in plans. This could happen for very low value items, such as 1/4" galvanized washers sold at a hardware store. Such items are usually managed via simple replenishment rules, like a two-bin inventory control system. Unless the forecast change is so large that current bin sizes are deemed inappropriate, no action will be taken.]

Can't Small Adjustments Make a Big Improvement in Accuracy?

It's true that even a small adjustment can make a big improvement in forecast accuracy. Changing the forecast from 100 to 101, when actuals turn out to be 102, means you cut the error in half! (On the other hand, if actuals turned out to be 200, then you only reduced forecast error by 1%.)

But the purpose of forecasting is to help managers make better decisions, devise better plans, and run a more effective and profitable organization. Improved forecast accuracy, in itself, has no value unless it results in improved organizational performance.

So if a small forecast adjustment does not change any of the behavior (or resulting outcomes) of the organization -- why bother? Making small adjustments takes effort and resources, and is simply a waste of time.


Offensive vs. defensive forecasting

Sports provide us with many familiar clichés about playing defense, such as:

  • Defense wins championships.
  • The best defense is a good offense.

Or my favorite:

  • The best defense is the one that ranks first statistically in overall defensive performance, after controlling for the quality of the offenses it has faced.

Perhaps not the sort of thing you hear from noted scholars of the game like Charles Barkley, Dickie V, or the multiply-concussed crew of Fox NFL announcers. But it captures the essential fact that performance evaluation, when done in isolation, may lead to improper conclusions. (A team that plays a weak schedule should have better defensive statistics than one that plays only against championship caliber teams.)

Likewise, when we evaluate forecasting performance, we can't look simply at the MAPE (or other traditional metric) that is being used. We have to look at the difficulty of the forecasting task, and judge performance relative to the difficulty.

Offensive Forecasting

It is possible to characterize forecasting efforts as either offensive or defensive.

Offensive efforts are the things we do to extract every last bit of accuracy we can hope to achieve. This includes gathering more data, building more sophisticated models, and incorporating more human inputs into the process.

Doing these things will certainly add cost to the forecasting process. The hope is that they will make the forecast more accurate and less biased. (Just be aware, by a curious quirk of nature, that complexity may be contrary to improved accuracy, as the forthcoming Green & Armstrong article "Simple versus complex forecasting: The evidence" discusses.)

Heroic efforts may be justified for important, high-value forecasts that have a significant impact on overall company success. But for most things we forecast, it is sufficient to come up with a number that is "good enough" to make a planning decision. An extra percentage point or two of forecast accuracy -- even if it could be achieved -- just isn't worth the effort.

Defensive Forecasting

A defensive forecaster is not so much concerned with how good a forecast can be, but rather, with avoiding how bad a forecast can be.

Defensive forecasters recognize that most organizations fail to achieve the best possible forecasts. And many organizations actually forecast worse than doing nothing (that is, just using the latest observation as the forecast). As Steve Morlidge reported in Foresight, 52% of the forecasts in his study sample failed to improve upon the naïve model. So more than half the time, these organizations were spending resources just to make the forecast worse.

The defensive forecaster can use FVA analysis to identify those forecast process steps that are failing to improve the forecast. The primary objective is to weed out wasted efforts, to stop making the forecast worse, and to forecast at least as well as the naïve model.
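
One way to run this defensive check is to ask, item by item, whether the official forecast beat the lag-1 naïve forecast -- the comparison behind Steve's 52% figure. Here is a minimal sketch in Python, with hypothetical data:

```python
import numpy as np

def beats_naive(actuals, process_forecasts):
    """True if the official forecast has lower total absolute error than
    the lag-1 naive forecast (last period's actual) over the same window."""
    actuals = np.asarray(actuals, float)
    process = np.asarray(process_forecasts, float)
    naive_err   = np.abs(actuals[1:] - actuals[:-1]).sum()
    process_err = np.abs(actuals[1:] - process[1:]).sum()
    return process_err < naive_err

# Hypothetical item: monthly actuals and the forecasts the process produced
actuals = [120, 115, 130, 125, 118, 135, 128, 122]
process = [118, 125, 118, 133, 128, 120, 138, 130]

print(beats_naive(actuals, process))
# Run this item by item across the portfolio; the share returning False is
# the share of items where the process is failing to add value.
```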

Once the organization is forecasting at least as well as the naïve model, then it is time to hand matters back over to the offensive forecasters -- to extract every last percent of accuracy that is possible.


FVA interview with Shaun Snapp


The Institute of Business Forecasting's FVA blog series continued in January, with my interview of Shaun Snapp, founder and editor of SCM Focus.

Some of Shaun's answers surprised me -- for example, that he doesn't compare performance to a naïve model (which I see as the most fundamental FVA comparison). But he went on to explain that his consulting mainly involves software implementation and tuning (often with SAP). His work stops once the system is up and generating a forecast, so he is generally not involved directly with the planners or the forecasting process.

Shaun notes that most of the companies he works with don't rely on the statistical forecast generated by their software -- planners have free rein to adjust the forecasts. And yet, because it takes effort to track the value of these forecast adjustments, it doesn't get done (planners are too busy making all their adjustments to step back and measure their own impact!).

He also notes that most of his work is focused on technical system issues -- he's found little demand for forecast input testing or other FVA-related services he can provide.

Discouragingly, he states he's never found a forecasting group that based its design on FVA. While clients may be receptive to the basic idea -- of applying a scientific approach to evaluating forecasting performance -- there are actually some groups where FVA is contrary to their interests. He gives the example of a sales group whose main interest is for the company to maintain an in-stock position on all their items. They have a lot of power within the company, and can achieve their objective by biasing the forecast high, thus forcing the supply chain to maintain excess inventory.

Read the full Shaun Snapp interview, and others in the FVA series, at www.demand-planning.com.

Coming in February: Interview with Steve Morlidge of CatchBull

Steve has been the subject of several previous BFD blog posts, exploring his groundbreaking work on the "avoidability" of forecast error (Part 1 of 4), forecast quality in the supply chain (Part 1 of 2), and a Q&A on his research (Part 1 of 4). He also delivered a Foresight/SAS webinar, "Avoidability of Forecast Error," which is available for on-demand viewing. Check the IBF blog site later this month for Steve's FVA interview.


Brilliant forecasting article from 1957!!! (Part 3)

This isn't such a brilliant article because we learn something new from it -- we really don't. But it is amazing to find, from someone in 1957, such a clear discussion of forecasting issues that still plague us today. If you can get past some of the Mad Men-era words and phrasing, the article is wonderfully written and a fun read -- full of sarcastic digs at forecasting practice.

In this final installment we'll look at Lorie's handling of forecasting performance evaluation.

Problem 2: The Evaluation of Forecasts

Lorie states there are two main problems in evaluating forecasts:

  1. Determining accuracy.
  2. Determining economic usefulness.

To solve these, he suggests three principles:

A. The Superiority of Written Forecasts

When forecasts are not recorded, the usual consequence is that they "seem to become more and more accurate as they recede into the past where memory is inexact and usually comforting." But even when written down, there is danger of ambiguity.

Lorie takes special aim at financial analysts and economic forecasters, who find it "distressingly easy" to use broad designations like "markets" or "business activity" or "sales." Of course, without a rigorous operational definition of such terms, the accuracy of the forecasts cannot be judged. "Their usefulness, however, can; their usefulness is negligible."

Lorie's position is largely in line with Nate Silver's recent critique of economic forecasting as an "almost complete failure."

In addition to recording forecasts in a way specific enough to be measured (typically product, location, time period, units), Lorie argues for recording the method used to generate the forecast:

The absence of a record of the forecasting method makes it extremely difficult to judge what has been successful and what unsuccessful among the techniques for peering into the future.

By method, I will interpret this to mean, at a high level, what forecasting process was used. For example,

STATISTICAL FORECAST ==> ANALYST OVERRIDE ==> CONSENSUS OVERRIDE

Over time we can determine whether these individual steps are making the forecast any better (or worse) than using a simple naïve model.
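
As a rough illustration (my own, not Lorie's, and with hypothetical numbers), a stairstep comparison of those recorded steps against the naïve model might look like this in Python:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / np.abs(actual))

actuals = np.array([120, 115, 130, 125, 118, 135])

# Forecasts recorded at each step of the process (hypothetical values)
steps = {
    "Naive (lag-1)":      np.array([118, 120, 115, 130, 125, 118]),
    "Statistical":        np.array([119, 117, 126, 124, 120, 131]),
    "Analyst override":   np.array([121, 116, 128, 126, 119, 133]),
    "Consensus override": np.array([125, 112, 133, 130, 115, 138]),
}

baseline = mape(actuals, steps["Naive (lag-1)"])
for name, fcst in steps.items():
    m = mape(actuals, fcst)
    # Positive FVA: the step beats the naive model; negative: it subtracts value
    print(f"{name:<20} MAPE {m:5.1f}%   FVA vs naive {baseline - m:+5.1f}")
```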

B. The Statistical Evaluation of Forecasting Techniques

Today there is growing recognition that relative metrics of forecasting performance are much more relevant and useful than the traditional accuracy or error metric by itself.

For example, to be told "MAPE=30%" is only mildly interesting. By itself, MAPE gives no indication of how easy or difficult a series is to forecast. It doesn't tell us what error would be reasonable to expect for the given series, and consequently, does not tell us whether our forecasting efforts were good or bad.

It is only by viewing the MAPE in comparison to some baseline of performance (e.g., the MAPE of a naïve forecast), that we can determine the "value added" by our forecasting efforts. This is what relative metrics such as FVA let you do.

Lorie gives an example: Each day, the weather forecaster in St. Petersburg, Florida, can forecast the following day's weather to be clear and sunny, and by doing nothing will be correct 95% of the time. The forecaster in Chicago, even using the latest technology and most sophisticated methods, will only get the following day's forecast right 80% of the time. So does this mean the St. Petersburg forecaster is more skilled at his profession than the Chicago forecaster? Of course not!

If there is a point to the preceding example, it is that the statistical evaluation of forecasting techniques must take account of the variability of the series being forecast...the forecasting task in Chicago is much more difficult.

What is desired is measurement of the "marginal" contribution of the forecasting technique. What is desired is an indication of the extent to which one can forecast better because of the use of the forecasting technique than would be possible by sole reliance on some simple, cheap, and objective forecasting device.

Lorie has provided an almost perfect description of FVA analysis. In essence, it is nothing more than the application of basic scientific method to the evaluation of a forecasting process.

C. The Economic Evaluation of Forecasts

There can be asymmetry in the costs of our business decisions; that is clearly true. For example, it makes sense to carry excess inventory on an item that costs us little to make and hold, yet yields huge revenue when sold. (Carrying too little inventory might save us a little on cost, but we'd miss a lot of revenue on lost sales.)

Lorie asserts:

A forecasting technique is to be judged superior to alternatives according to an economic evaluation if the consequences of decisions based upon it are more profitable than decisions based upon the alternatives.

This seems to be saying that it is ok to bias your forecasts in a direction that is more economically favorable, but I disagree. While it is appropriate to bias your plans and actions in a way that will provide a more favorable economic outcome (as in the example above), I would contend that the forecast should remain an "unbiased best guess" at what is really going to happen in the future.

I'm not convinced there can be an economic evaluation of the forecast. (Evaluating a forecast solely on accuracy, bias, and FVA may be sufficient). However, there should be an economic evaluation of the decision that was made.


Brilliant forecasting article from 1957!!! (Part 2)

Combining Statistical Analysis with Subjective Judgment (continued)

After summarily dismissing regression analysis and correlation analysis as panaceas for the business forecasting problem, Lorie turns next to "salesmen's forecasts."* He first echoes the assumption that we still hear today:

This technique of sales forecasting has much to commend it. It is based upon a systematic collection and analysis of the opinions of men who, among all the company's employees, are in closest contact with dealers and ultimate consumers.

But Lorie points out the "inherent deficiencies" of relying solely on sales force input, "for which it may be impossible to devise effective remedies." These are:

  • It unreasonably assumes that salespeople have the "breadth and depth of understanding" of the pervasive influences on demand. (Do they have any skill at forecasting?)
  • Sales jobs turn over frequently, so the salespeople providing forecasts are often inexperienced, and we don't have enough data to determine their biases. (Will they give us an honest answer?)
  • It does not incorporate "competent statistical analysis" of historical sales data, which could be combined with the sales force inputs.

Lorie also disses the use of consumer surveys as costly, impractical, and unproven to be of value except in limited circumstances.

Two Solutions

The message is not all negative. Lorie provides two solutions for combining statistics with judgment: the filter technique and the skeptic's technique. I'm not as much interested in the specific techniques as in his overall approach to the problem -- which in the filter technique is to focus on economy of process. Start "with an extremely simple and cheap process to which additional time and money are devoted only up to the point at which the process becomes satisfactory."

...the process provides an objective record of both sales forecasts and the methods by which they are made so that study of this record can be a means for continual improvement in the forecasting process.

(You can find details about the filter technique in the article.)

The skeptic's technique applies process control ideas, akin to Joseph & Finney's "Using Process Behaviour Charts to Improve Forecasting and Decision-Making" (Foresight 31 (Fall 2013), pp. 41-48). Starting with "limited faith" in the persistence of historical forces that affect sales:

  • Project future sales with a simple trend line.
  • Compute two standard deviations on each side of the line to create a range within which future sales should fall the vast majority of time (if historical forces continue to work in the same way).

Lorie points out that this work could be done by statistical clerks "whose rate of pay is substantially less than that of barbers or plumbers."

  • The forecaster then solicits forecasts from company experts (who, "incidentally, usually receive substantially more than barbers or even plumbers").
  • If the expert's forecast falls within the range limits of the statistical forecast, it is accepted. If it falls outside the limits, even after reconsideration (asking "the gods for another omen"), the forecaster has to decide what to do.

Lorie wryly points out that making a decision is something the forecaster has avoided up to this point.

For expert forecasts outside the statistical forecast limits, Lorie states:

...experience has indicated that the forecast in a vast majority of cases would have been more accurate if the experts' forecast had arbitrarily been moved to the nearest control limit provided by the statistical clerk rather than being accepted as it was.
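
The mechanics are simple enough to sketch in a few lines of Python; this is a rough illustration with made-up sales history, not Lorie's exact procedure:

```python
import numpy as np

sales = np.array([100, 104, 103, 108, 112, 110, 115, 119, 118, 123, 127, 125])  # made-up history

# 1. Project next period's sales with a simple trend line
t = np.arange(len(sales))
slope, intercept = np.polyfit(t, sales, 1)
trend_forecast = slope * len(sales) + intercept

# 2. Two standard deviations of the residuals on each side of the line
residual_sd = np.std(sales - (slope * t + intercept))
lower, upper = trend_forecast - 2 * residual_sd, trend_forecast + 2 * residual_sd

# 3. Accept the expert's forecast if it falls inside the band;
#    otherwise move it to the nearest control limit
expert_forecast = 150.0
final_forecast = min(max(expert_forecast, lower), upper)

print(f"Trend forecast {trend_forecast:.1f}, band [{lower:.1f}, {upper:.1f}]")
print(f"Expert said {expert_forecast:.1f}; accepted forecast {final_forecast:.1f}")
```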

In Part 3 we'll look at Lorie's remarks on the evaluation of forecasts -- and his 1957 precursor to what we now call FVA!

---------------

*The role of the sales force in forecasting is the subject of my recent Foresight article (Fall 2014), and of a forthcoming presentation at the International Symposium on Forecasting (Riverside, CA, June 21-24).


Guest blogger: Len Tashman previews Winter 2015 issue of Foresight

*** We interrupt discussion of James H. Lorie's 1957 article with this important announcement ***


Hot off the wire, here is editor Len Tashman's preview of the Winter 2015 issue of Foresight:

Foresight kicks off its 10th year with the publication of a new survey of business forecasters: Improving Forecast Quality in Practice. This ongoing survey, designed at the Lancaster Centre for Forecasting in the UK, seeks to gain insights on where the emphasis should be put to further upgrade the quality of our forecasting practices. Initial survey results, presented by Robert Fildes, Director of the Lancaster Centre, and Fotios Petropoulos, former member of the Centre, examine these key aspects of forecasting practice: organizational constraints, the flow of information, forecasting software, organizational resources, forecasting techniques employed, and the monitoring and evaluation of forecast accuracy.

The survey is an important update to that conducted more than a decade ago by Mark Moon, Tom Mentzer, and Carlo Smith of the University of Tennessee. In his Commentary on the Lancaster survey, Mark Moon applauds the broad focus of the survey but raises the issue of whether the “practicing forecasters” surveyed are “developers” or “customers” of the forecasts.

We often find a significant difference in perception between those who are responsible for creating a forecast and those who use the forecast to create business plans.

In our section on Collaborative Forecasting and Planning, Foresight S&OP Editor John Mello writes that S&OP can not only improve collaboration within an organization, but also “change the company’s operational culture from one that is internally focused to one that better understands the potential benefits of working with other companies in the supply chain.” His article, Internal and External Collaboration: The Keys to Demand-Supply Integration, identifies and compares several promising avenues of external collaboration, including vendor-managed inventory (VMI); collaborative planning, forecasting, and replenishment (CPFR); retail-event collaboration; and various stock-replenishment methods currently in use by major manufacturers and retailers. The critical factor, John finds, is trust:

These processes all require the sharing of information between companies, joint agreement on the responsibilities of the individual companies, and a good deal of trust between the parties, since the responsibility for integrating supply and demand is often delegated to the supplier.

In a Commentary on the Mello article, Ram Ganeshan and Tonya Boone point out that the challenges of external collaboration arrangements are much greater when we consider their Extension Beyond Fast-Moving Consumer Goods, especially those goods with short life cycles. For these products, they argue, a different mind-set is required to achieve demand-supply integration.

Financial Forecasting Editor Roy Batchelor distills the lessons forecasters should learn from the failures to predict and control our recent global financial meltdown. A 2014 International Monetary Fund (IMF) report, Financial Crises: Causes, Consequences, and Policy Responses, examined the world economies’ 2007-09 financial crises to establish their causes and impacts, as well as the initiatives governments and central banks undertook to deal with them. The overall impression from this report, Roy writes in his review, entitled Financial Crises and Forecasting Failures, is that the authorities could have been speedier and more imaginative in their interventions in the financial sector. However, it is important to note that our forecasting models could have given a clearer picture of how economies might emerge from these crises. Roy probes into why the models didn’t see the crisis coming, and what upgrades to the models’ financial sectors might improve predictive performance in the future.

Jeffrey Mishlove’s Commentary on Roy’s review article argues that the real problem did not emanate from predictive failures, but rather from the inclination toward austerity that pervaded economic thinking, especially in Western Europe. Jeff says that, while he can’t argue with Roy’s conclusions that refinements in the scientific method and the gathering of empirical data are appropriate responses to financial crises, forecasts will always be vulnerable to confounding influences from unanticipated variables – no matter how much we refine and improve our methodologies.

Seasonality – intra-year patterns that repeat year after year – is a dominant and pervasive contributor to variations in our economy. But, as Roy Pearson writes in Giving Due Respect to Seasonality in Monthly Forecasting, the seasonal adjustments we make to economic data are poorly understood and lead to confusion in interpreting sales changes. Improved accounting for seasonality for monthly forecasts over 12-24 months can lead to better understanding of the forces behind sales forecasts, and very likely to some reduction in forecast errors.
