Changing the paradigm for business forecasting (Part 12 of 12)

Aphorism 6: The Surest Way to Get a Better Forecast is to Make the Demand Forecastable

Forecast accuracy is largely dependent on volatility of demand, and demand variation is affected by our own organizational policies and practices. So an underused yet highly effective solution to the forecasting problem can be to just make the demand more forecastable!

Most (if not all) companies have some power to shape customer demand by adjusting pricing, promotions, supply, and distribution practices. Rather than passively accepting demand patterns as given, a more proactive approach would be to shape the demand into more favorable (i.e., less volatile, more predictable, and less costly to fulfill) patterns.

It is helpful to distinguish inherent volatility (the natural variation in the consumption of your products by consumers) from artificial volatility (the additional variation in sales or shipments due to organizational policies and practices).

It is not unusual to find the consumption of a consumer product (as indicated by the point-of-sale data at retail) to be fairly stable and predictable, yet shipments of the product (from manufacturer to retail store) to be highly erratic.

Shipments vs Consumption chart

In this chart, weekly consumer purchases at retail are shown by the thick line, which shows very little inherent volatility and could be forecast quite accurately with a simple model. However, the thin line shows shipments from the manufacturer to the retailer, which are three times more volatile.

The highly erratic and hockey stick patterns are indicative of artificial volatility – excess variation which is caused by sales, marketing, and financial practices, like creating incentives to spike demand to hit quarterly sales targets.

  • Corollary: Any knucklehead can forecast a straight line.

By identifying and eliminating the practices that encourage volatility, you can smooth your sales patterns. The immediate outcomes: reduced inventory and capacity requirements, better customer service, lower costs, and better forecasts.

For the final aphorism, I want to credit a wise man, Cecil Moore, for whom I worked in my last industry job as Director of Forecasting at Sara Lee Intimate Apparel. Cecil's guidance to us was:

Aphorism 7: Just Stop Doing the Stupid $#!+

This is the essence of the Defensive paradigm. Rather than make heroic efforts trying to eke out a fraction of a percent of accuracy through more elaborate and costly modeling and Offensive approaches -- simply stop doing the things that are just making the forecast worse.

With tools like FVA analysis, and the additional guidance and direction provided by the Morlidge articles in Foresight, it is quite possible to achieve more accurate forecasts with less effort and less cost.

What responsible business manager or executive could complain about something like that?

[See all 12 posts in the business forecasting paradigms series.]


Changing the paradigm for business forecasting (Part 11 of 12)

Aphorism 3: Organizational Policies and Politics Can Have a Significant Impact on Forecasting Effectiveness

We just saw how demand volatility reduces forecastability. Yet our sales, marketing, and financial incentives are usually designed to add volatility. We reward sales spikes and record weeks, rather than smooth, stable, predictable growth.

The forecast should be an unbiased best guess of what is going to happen in the future. It should be based on a rational, objective, and dispassionate evaluation of the historical facts (what has been sold and under what conditions) and future plans and expectations (about pricing, promotional activities, the competitive environment, supply considerations, and the like).

Unfortunately, real-life business forecasting is often contaminated by the wishes, wants, and personal agendas of the forecasting process participants. Although the managers, executives, and sales force of your firm may be stellar citizens of the highest moral fiber (unless you work for Wells Fargo), you cannot assume they are trustworthy when it comes to forecasting.

  • Will your sales people forecast low during quota setting time, to make it easier to achieve their bonuses?
  • Will a product manager forecast high for a proposed new product, to make sure it meets the hurdles for approval?

You need to consider the ultimate motive of anyone providing or approving forecasts. And then track their performance using tools like FVA.

Aphorism 4: You May Not Control the Accuracy Achieved, But You Can Control the Process Used and the Resources You Invest

You can’t just buy a better forecast. There is no guarantee—no matter how much you spend on people, process, technology, and analytics—that you will achieve the accuracy your organization desires. So focus on what you can do, such as:

  • Determine what level of accuracy is reasonable to expect given the nature of your demand patterns.

Morlidge’s articles have good ideas on how to do this.

  • Direct all efforts toward achieving that level of accuracy with the least cost in time and company resources.

This is where FVA analysis, and Morlidge’s considerable enhancements to the FVA approach, can direct your efforts to the best opportunities for streamlining your forecasting process, and for accuracy improvement.

  • Automate, automate, automate wherever possible.

Large-scale automatic forecasting software (such as SAS Forecast Server) is available, and can deliver forecasts about as accurate as can reasonably be expected, with minimal analyst involvement. Automation can minimize the human touch points, reducing the chance for bias, politics, and personal agendas to contaminate the forecast. And as a Corollary:

  • Do not squander organizational resources in pursuit of unrealistic accuracy goals.

When the forecast is good enough for decision making purposes, or has reached the limit of achievable accuracy, no need to waste any more time trying to improve it…move on to something else.

Aphorism 5: Minimize the Organization's Reliance on Forecasting

We forecast not because we want to, but because we have to. Yet it may be possible to reduce an organization’s need for highly accurate forecasts. This can be achieved, for example, by improving the speed and flexibility of the supply chain.

Things like reducing lead times. Going from make-to-stock to make-to-order. Or at least postponing final product configuration until an order comes in.

So you can react to demand rather than have to anticipate it.

It's about getting yourself out of the business of forecasting.


Changing the Paradigm for Business Forecasting (Part 10 of 12)

The Aphorisms of the New Defensive Paradigm

I want to finish this blog series with a set of 7 aphorisms – concise statements of principle – that characterize the new Defensive paradigm for business forecasting. The first is that:

Aphorism 1: Forecasting is a Huge Waste of Management Time

This doesn’t mean that forecasting is pointless and irrelevant. It doesn’t mean that forecasting isn’t useful and necessary to run our organizations. And it doesn’t mean that business managers and executives should stop caring about their forecasting issues or stop trying to resolve them.

It just means that the amount of time, effort, and resources spent on forecasting is not commensurate with the benefit achieved – the improvement in accuracy.

We spend far too many resources generating, reviewing, adjusting, and approving our forecasts, while almost invariably failing to achieve the level of accuracy desired. The evidence now shows that a large proportion of typical business forecasting efforts fail to improve the forecast, or even make it worse. So the conversation needs to change. The focus needs to change.

We need to shift our attention from esoteric model building to the forecasting process itself – its efficiency and its effectiveness.

Aphorism 2: Accuracy is Limited More by the Nature of the Behavior Being Forecast than by the Specific Method Being Used to Forecast

Under favorable conditions, demand can be forecast accurately. But under normal conditions, we may never reach the level of accuracy we desire – no matter how much data, statistical analysis, and human intervention we employ.

This is not our fault. It is the reality of dealing with the randomness and variability in what we are trying to forecast. It’s an issue of forecastability.

Perhaps the single best indicator of forecastability (although it is rightfully criticized as imperfect) is the coefficient of variation (CV).

CV = Standard Deviation / Mean

CV is the ratio of a pattern’s standard deviation to its mean. It expresses the variability (or “volatility”) of a pattern over time. A flat line would have CV = 0. A highly erratic pattern may have CV of 100% or more.
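As a quick sketch (using made-up demand series, not data from any chart in this series), the CV calculation is just:

```python
import statistics

def coefficient_of_variation(series):
    """CV = standard deviation / mean, often quoted as a percentage."""
    mean = statistics.mean(series)
    if mean == 0:
        raise ValueError("CV is undefined when the mean is zero")
    return statistics.pstdev(series) / mean

flat = [100, 100, 100, 100]      # a flat line: CV = 0
erratic = [10, 250, 40, 300]     # highly erratic demand: CV around 84%

print(coefficient_of_variation(flat))                # 0.0
print(round(coefficient_of_variation(erratic), 2))   # 0.84
```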

The “comet chart” is an easy way to show the relationship between demand volatility and forecast accuracy at your organization.

Comet Chart

(You can find instructions on how to create this chart in the BFD blog post "The Accuracy vs Volatility Scatterplot," or in The Business Forecasting Deal book.)

This scatterplot compares forecast accuracy (from 0 to 100% on the vertical axis) to the volatility of the sales pattern (as measured by the coefficient of variation) along the horizontal axis. It is based on one year of weekly data for 5000 skus at a consumer goods company. Each dot shows the volatility of sales over the 52 weeks, and the forecast accuracy achieved, for one of the 5000 skus.

As you can see, for skus with greater volatility (moving to the right in the plot), forecast accuracy tended to decrease.

That line is not a regression line. It shows the approximate accuracy you would have achieved, at each level of volatility, by using a naïve model. So the way to interpret this chart: skus falling above the line were being forecast better than the naïve model – the process was adding value, which was generally the case here. Skus falling below the line, however, were being forecast worse than if the company had just used the naïve model.

This kind of analysis suggests that whatever we can do to reduce volatility in the demand for our products will make them easier to forecast.
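A minimal sketch of how the comet chart's points could be computed – assuming accuracy is defined as 100% minus MAPE, floored at zero (the post doesn't specify its exact accuracy formula):

```python
import statistics

def mape(actuals, forecasts):
    """Mean absolute percentage error."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def comet_point(history, forecasts):
    """One dot on the comet chart: (volatility, accuracy) for a single sku."""
    cv = statistics.pstdev(history) / statistics.mean(history)
    accuracy = max(0.0, 1.0 - mape(history, forecasts))
    return cv, accuracy

def naive_accuracy(history):
    """Accuracy the no-change model would have achieved on the same sku --
    the comparison line that the dots are plotted against."""
    actuals, naive = history[1:], history[:-1]
    return max(0.0, 1.0 - mape(actuals, naive))
```

Plotting `comet_point` for every sku, with `naive_accuracy` tracing the comparison line, reproduces the chart's interpretation: dots above the line are skus where the process beat the naïve model.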

Here is another example of a comet chart from a manufacturing company:

Comet Chart

Note that they used MAPE rather than forecast accuracy for the vertical axis, so the tail of the comet points in a different direction. But the findings are consistent – the lower the volatility, the more accurate the forecasts.


Changing the paradigm for business forecasting (Part 9 of 12)

Academic Research

In an approach akin to FVA analysis, Paul Goodwin and Robert Fildes published a frequently cited study of four supply chain companies and 60,000 actual forecasts.* They found that analysts adjusted the statistical forecast 75% of the time. They were trying to figure out, like FVA does, whether the judgmental adjustments made the forecast any better.

Results chart

In the chart, they divided the adjustments into quartiles based on the size of the adjustment. They found that large adjustments (quartiles 3 and 4) tended to improve the forecast, particularly the large downward adjustments shown in red. However, small adjustments had virtually no impact on accuracy – either positive or negative – which, when you think about it, makes perfect sense.

If you make a small adjustment, even if it is directionally correct, you’ve made at best a small improvement in accuracy. So why bother?
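The quartile analysis can be sketched like this (the record format and the bucketing are my own simplification, not the study's actual method):

```python
import statistics

def adjustment_impact_by_quartile(records):
    """records: (statistical_forecast, adjusted_forecast, actual) triples.
    Sorts the adjustments by absolute size, splits them into quartiles,
    and reports the mean error reduction per quartile
    (positive = the judgmental adjustment helped)."""
    sized = sorted(records, key=lambda r: abs(r[1] - r[0]))
    n = len(sized)
    impact = {}
    for q in range(4):
        chunk = sized[q * n // 4:(q + 1) * n // 4]
        gains = [abs(stat - act) - abs(adj - act) for stat, adj, act in chunk]
        impact[q + 1] = statistics.mean(gains) if gains else 0.0
    return impact
```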

Identifying Waste in the Forecasting Process

We see that Forecast Value Added, Theil's U, Relative Absolute Error, and the Fildes and Goodwin study are all going after the same thing – trying to identify waste in the forecasting process.

Activities that demonstrably fail to improve the forecast – or even make it worse – can rightfully be characterized as worst practices.

Research Agenda Under the Defensive Paradigm

Kuhn said that a scientific community's paradigm guides how they view the world, and legitimizes the kinds of problems the community works on. Under the Offensive paradigm, the forecasting community worked on ever more complex models and methods to extract every last bit of accuracy from our forecasts. But progress eventually stalled. It seems unlikely that some magical new modeling algorithm is going to deliver us substantially better forecasts.

So the evidence is right in front of our noses – whether we choose to see it or not – that to improve the practice of business forecasting we need to move to a new paradigm. We need a paradigm shift.

Under the Defensive paradigm, the problems and puzzles we work on will be around worst practices. This becomes our new research agenda.

Confessing Your Worst Practices

The reason for all forecasting researchers and practitioners to confess their worst practice sins is so others can learn from them. We need the business forecasting community to be exploring, trying new things, and making new mistakes – not just forever repeating the old ones.

Forecasting is a fundamental business task, yet few companies perform it efficiently and well – or at least not as efficiently and well as they would like. Managers' attention is often misdirected to the current fads, hype, and snake-oil promises – like a 2016 Gartner Supply Chain Magic Quadrant report that made it sound like "big data" was going to solve all our forecasting problems. These distractions, this strict adherence to the Offensive paradigm, can blind us to alternative ways that can truly impact forecasting effectiveness.

*Fildes, R. and Goodwin, P. (2007). Good and Bad Judgment in Forecasting: Lessons from Four Companies. Foresight 8 (Fall 2007), 5-10.


Changing the paradigm for business forecasting (Part 8 of 12)

Typical Business Forecasting Process

Let’s look at a typical business forecasting process.

Typical Process

Historical data is fed into forecasting software which generates the "statistical" forecast. An analyst can review and override the forecast, which then goes into a more elaborate collaborative or consensus process for further adjustment. Many organizations also have a final executive review step, where the forecast can be adjusted again before the final approved forecast is sent to downstream planning systems.

The typical business forecasting process consumes large amounts of management time and company resources. And we know that business forecasting can be a highly politicized process, where participants inflict their biases and personal agendas on the computer generated number, especially if what the computer says is not what they want to see.

Under the Defensive paradigm, the important question is not “what accuracy did our process achieve?” but rather, “is our process adding value?” Through all these efforts, are we making the forecast more accurate and less biased?

How would you ever know?

Failings of Traditional Metrics

Common traditional metrics like MAD or MAPE, by themselves, do not answer that question. Morlidge has a nice quote: "forecast performance can be quickly improved if you know where to look," but "conventional metrics like MAPE shed little light on the issue."

Sure, these common metrics can tell you the magnitude of your forecast errors. But by themselves, they don’t tell you how efficiently you are achieving the accuracy you reach, or whether you are forecasting any better than some cheaper alternative method.

These are the sorts of things that FVA analysis aims to find out.

What is FVA Analysis?

You can think of FVA analysis as the application of traditional scientific method to forecasting. You start with a null hypothesis:

H0: The forecasting process has no effect

You then measure the results of steps in the process to determine if you can reject the null hypothesis, and thereby conclude that there is an effect, either good or bad. There is the nice analogy to testing a new drug for safety and efficacy:

Suppose you have a new cold pill.  You find 100 people with colds, randomly divide them into two groups, giving one group the pill and the other a placebo. You then track their outcomes, and if the ones who got the pill get over their colds much faster and suffered less severe symptoms, then you might conclude that the pill had a positive effect. However, if there is little difference between the groups, or if the ones taking the pill fare worse, you can probably conclude that the pill adds no value.

We are doing a similar thing in FVA analysis, with the naïve forecast serving as the placebo.

FVA Analysis: Simple Example

Let’s look at an example of a simple forecasting process:

Simple Process

Perhaps the simplest process is to read demand history into a statistical forecasting model to generate a forecast, and then have an analyst review and possibly override the statistical forecast.

In FVA analysis you would compare the analyst’s override to the statistically generated forecast – is the override making the forecast better?

FVA Comparisons on Simple Process

FVA analysis also compares both the statistical forecast and the analyst forecast to the naïve forecast – the placebo.
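Those pairwise comparisons can be sketched in a few lines. The series below are hypothetical, and MAPE is just one possible choice of performance metric:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def fva_stairstep(actuals, naive, statistical, analyst):
    """Pairwise FVA comparisons: positive numbers mean the later step
    added value (reduced MAPE) relative to the earlier one."""
    m_naive, m_stat, m_analyst = (mape(actuals, f) for f in (naive, statistical, analyst))
    return {
        "statistical vs naive": m_naive - m_stat,
        "analyst vs statistical": m_stat - m_analyst,
        "analyst vs naive": m_naive - m_analyst,
    }

# Hypothetical monthly demand and the three forecasts for it
actuals     = [100, 120, 110, 130]
naive       = [110, 100, 120, 110]   # prior month's actual
statistical = [105, 115, 112, 125]
analyst     = [120, 140, 125, 150]   # optimistic overrides

print(fva_stairstep(actuals, naive, statistical, analyst))
```

With these made-up numbers the statistical model adds value over the naïve forecast, while the analyst's optimistic overrides subtract it – the pattern the stairstep report below illustrates.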

Stairstep Report

Here is an example of a "stairstep" report for showing the results of an FVA analysis.

Stairstep Report

The first column shows the sequential process steps, and the second column shows the accuracy achieved at each step. The rightmost columns show the pairwise FVA comparisons. This can be used to report an individual time series, or, as in this example, an aggregation of many series.

These numbers are the actual findings that a large consumer products company reported at an Institute of Business Forecasting conference presentation in 2011, across all their products. As you can see, the statistical forecast increased accuracy by 5 percentage points. But the manual adjustments to the statistical forecast reduced accuracy by 3 percentage points.

This is not an uncommon finding.


Changing the paradigm for business forecasting (Part 7 of 12)

The Means of the Defensive Paradigm

The Defensive paradigm pursues its objective by identifying and eliminating forecasting process waste. (Waste is defined as efforts that are failing to make the forecast more accurate and less biased, or are even making the forecast worse.)

In this context, it may seem ridiculous to be talking so much about naïve models. How difficult can it be to forecast better than doing nothing and just using the last observation as your forecast? When it comes to real-life business forecasting, this turns out to be surprisingly difficult!

The Green and Armstrong study affirmed what has long been recognized, that simple models can perform well. Of course, this doesn’t mean that a simple model will necessarily give you a highly accurate forecast. Some behaviors are highly erratic and essentially unforecastable, and no method will deliver highly accurate forecasts in those situations.

But at least the simple methods tend to forecast better than the complex ones.

The 52%

In a series of rather disturbing articles published in Foresight since 2013*, Steve Morlidge has painted a grim portrait of the state of real-life business forecasting. He studied 8 supply chain companies, encompassing 300,000 real-life forecasts that these companies were actually using to run their businesses. Morlidge found that a shocking 52% were less accurate than the no-change forecast!

How could this be?

You’d expect, just by chance, to sometimes forecast worse than doing nothing. But these companies were predominantly forecasting worse than doing nothing. Thankfully, Morlidge has not only exposed this problem, but guides us toward a way of dealing with it in his Foresight articles. (See in particular Morlidge (2016) in the footnotes below.)

Forecast Value Added

The Defensive paradigm aligns very well with exposing and weeding out of bad practices. It also aligns very well with one of the tools we can use to identify harmful practices, Forecast Value Added, or FVA analysis. Let’s take a few moments to understand the FVA approach.

Forecast Value Added is defined as:

The change in a forecasting performance metric that can be attributed to a particular step or participant in the forecasting process.

It is measured by comparing the results of a process activity to the results you would have achieved without doing the activity. So FVA can be positive or negative.

Relative Error Metrics

FVA is in the class of so-called “relative error” metrics, because it involves making comparisons.  A couple of others are:

  • Theil’s U, proposed over 50 years ago, can be characterized as the Root Mean Squared Error (RMSE) of your forecasting model, divided by the RMSE of the no-change model.

The interpretation is that:

  • The closer U is to zero, the better the model.
  • When U < 1, your model is adding value by forecasting better than the no-change model.
  • When U > 1, it means the model forecasts worse than doing nothing and just using the no-change model.
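A sketch of the computation, following the RMSE-ratio characterization above (Theil actually proposed more than one variant of U):

```python
import math

def theils_u(actuals, forecasts):
    """Theil's U: RMSE of the model's errors divided by the RMSE of the
    no-change model's errors over the same periods. U < 1 beats naive."""
    rmse = lambda errors: math.sqrt(sum(e * e for e in errors) / len(errors))
    # Skip the first period, where the no-change model has no prior observation
    model_errors = [a - f for a, f in zip(actuals[1:], forecasts[1:])]
    naive_errors = [curr - prev for prev, curr in zip(actuals, actuals[1:])]
    return rmse(model_errors) / rmse(naive_errors)
```

A perfect forecast gives U = 0; feeding the no-change forecast itself back in gives U = 1.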

Another metric is:

  • Relative Absolute Error (RAE), which compares the absolute forecast error of a model to the absolute error that would have been achieved with a no-change model.

Interpretation of the RAE is similar to interpreting Theil’s U:

  • RAE closer to zero is better.
  • When RAE < 1, this corresponds to positive value added -- you are forecasting better than doing nothing.
  • However when RAE > 1, this means negative FVA, you are just making the forecast worse.

As a sidenote, Morlidge and Goodwin concluded that an RAE of 0.5 may be about the lowest forecast error you can ever expect to achieve. So best case performance is roughly cutting the error of the naïve forecast in half.

Morlidge has coined the term “avoidable error” as any error in excess of 0.5 RAE. Find more discussion in his Foresight articles.
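A sketch of RAE and avoidable error (here computed over a whole series of errors; Morlidge's articles discuss period-level variants):

```python
def relative_absolute_error(actuals, forecasts):
    """RAE: total absolute error of the model divided by the total
    absolute error of the no-change model over the same periods."""
    model_error = sum(abs(a - f) for a, f in zip(actuals[1:], forecasts[1:]))
    naive_error = sum(abs(curr - prev) for prev, curr in zip(actuals, actuals[1:]))
    return model_error / naive_error

def avoidable_error(actuals, forecasts, floor=0.5):
    """Morlidge's 'avoidable error': any error in excess of RAE = 0.5."""
    return max(0.0, relative_absolute_error(actuals, forecasts) - floor)
```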

*Morlidge, S. (2013). How Good is a "Good" Forecast? Forecast errors and their avoidability. Foresight 30 (Summer 2013), 5-11.

Morlidge, S. (2014a). Do Forecasting Methods Reduce Avoidable Error? Evidence from Forecasting Competitions. Foresight 32 (Winter 2014), 34-39.

Morlidge, S. (2014b). Forecastability and Forecast Quality in the Supply Chain. Foresight 33 (Spring 2014), 26-31.

Morlidge, S. (2014c). Using Relative Error Metrics to Improve Forecast Quality in the Supply Chain. Foresight 34 (Summer 2014), 39-46.

Morlidge, S. (2015a). Measuring the Quality of Intermittent Demand Forecasts. Foresight 37 (Spring 2015), 37-42.

Morlidge, S. (2015b). A Better Way to Assess the Quality of Demand Forecasts: It's Worse than We've Thought! Foresight 38 (Summer 2015), 15-20.

Morlidge, S. (2016). Using Error Analysis to Improve Forecast Performance. Foresight 41 (Spring 2016), 37-44.


Changing the paradigm for business forecasting (Part 6 of 12)

Why the Attraction for the Offensive Paradigm?

In addition to the reasons provided by Green and Armstrong, I'd like to add one more reason for the lure of complexity:

  • You can always add complexity to a model to better fit the history.

In fact, you can always create a model that fits the time series history perfectly. But exceptional fit to history is no reason to believe a model is appropriate for forecasting the future.


Four Alternative Models

Closely fitting a model to history is one of the dirty tricks of selling forecasting services or software -- getting the client to think that a close fit to history (which is easy to do) is proof of a good forecasting model. But it isn’t. While fit to history is a relevant consideration, it shouldn’t be the sole consideration in model selection. Consider this example:

There are four historical data points, with sales of 5, 6, 4, and 7 units. To forecast future sales, we build four models that progressively improve the fit to history, including a perfect fit to history.

Which model should we select to generate forecasts? I'd argue that the two best fitting models are the least appropriate -- the forecasts they generate are extremely optimistic. In the absence of any other information, only the two worst fitting models look reasonable.
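To make the example concrete, here is a sketch using NumPy that fits polynomials of increasing degree to those four points (the post doesn't say what the four models were; polynomials of degree 0 through 3 are my stand-in). Fit to history improves with every degree, but the best-fitting models produce the wildest forecasts:

```python
import numpy as np

# The four historical data points from the example
x = np.arange(1, 5)
y = np.array([5.0, 6.0, 4.0, 7.0])

results = {}
for degree in range(4):
    coeffs = np.polyfit(x, y, degree)
    fit_error = float(np.sum((np.polyval(coeffs, x) - y) ** 2))  # in-sample SSE
    forecast = float(np.polyval(coeffs, 5))                      # one period ahead
    results[degree] = (fit_error, forecast)
    print(f"degree {degree}: fit error {fit_error:.2f}, forecast {forecast:.1f}")
```

The cubic passes through all four points exactly, yet forecasts 23 units for the next period – far above anything in the history – while the worse-fitting constant and linear models stay in a plausible range.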

The New Defensive Paradigm for Business Forecasting

Hopefully it is not going to take 100 years to make the shift, but I want to propose a new “Defensive” paradigm for business forecasting.

I’m talking about “defensive” in the sense of “playing defense” in sports – where you are trying to prevent bad things from happening, like your opponent scoring. This isn't the psychological / emotional sense of the word – although we sometimes have to get defensive and emotional in justifying our forecasts.

One of the linchpins of the new Defensive paradigm is that there is much less interest in forecast accuracy in itself. It is recognized that the accuracy you achieve is limited by the nature of the behavior you are forecasting, its “forecastability.” So instead of focusing on the level of accuracy itself, you focus on whether you achieve a level of accuracy that is “reasonable to expect” given the nature of what you are forecasting.

Under the Defensive paradigm a statement such as “I achieved a MAPE of 20%” is not very interesting or useful.

Under the new paradigm, the forecaster is more concerned about their performance relative to simpler and cheaper alternative forecasting methods, and to benchmarks like a naïve model.

Role of the Naive Model

The random walk or "no-change" model is generally accepted as the ultimate point of comparison. The no-change model uses your latest observation as the forecast for future values. So if you sold 100 units last month, your forecast for this month is 100. And so on.

The no-change model is generally accepted as the upper bound on the forecast error you should be achieving.

It is the do nothing forecast. It can be computed with virtually no effort or cost – it is essentially a free forecasting method. As such it provides the worst case – the accuracy you can achieve by doing nothing (and just using the latest observation).

The question is: If you are spending time and money with a forecasting process that performs worse than the naïve model…why bother?
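For completeness, the entire "free" forecasting method amounts to this:

```python
def no_change_forecast(history, horizon=3):
    """The naive no-change model: the latest observation becomes the
    forecast for every future period."""
    if not history:
        raise ValueError("need at least one observation")
    return [history[-1]] * horizon

print(no_change_forecast([95, 102, 100]))  # [100, 100, 100]
```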

The Objective

While the Offensive paradigm is about trying to do more, the Defensive paradigm is about trying to do less. It sees the objective of forecasting as to generate forecasts as accurate as can reasonably be expected (given the nature of what you’re trying to forecast)…and do this as efficiently as possible.

The Defensive paradigm acknowledges the limits of what forecasting can deliver – and recognizes the foolishness of unreasonable accuracy expectations. For example,

Suppose you work for some strange company in the business of flipping FAIR coins. Your job is to forecast Heads or Tails for each daily flip, and over a long career you’ve forecasted correctly just about 50% of the time. You get a new boss who insists you increase your forecast accuracy to 60%.

So what do you do?

You get fired – because other than a lucky streak now and then, a long term average of 50% is the best, in fact the ONLY level of accuracy you can achieve given the nature of what you are trying to forecast. Any effort to try to improve your forecast is just a waste of time.


Guest blogger Len Tashman previews Fall 2016 issue of Foresight

Fresh from chairing the Foresight Practitioner Conference on “Worst Practices in Business Forecasting,” hosted two weeks ago at the Institute for Advanced Analytics at North Carolina State University, Foresight editor-in-chief Len Tashman previews the Fall 2016 issue.

Preview of the Fall 2016 issue of Foresight

In the provocative article, “The Impact of Strategy on Supply Chain and Forecasting,” Bram Desmet explores how a company’s market strategy affects its supply chain targets and forecasting methodology. The author introduces the concept of the supply chain triangle to illustrate the balancing act a company must perform to achieve the cost, service, and inventory mix that maximizes its return on capital employed. He then shows how the company’s strategic choice, be it operational excellence, product leadership, or customer intimacy, influences the position it seeks on the supply chain triangle and, in particular, its inventory targets.

In 2011, Shell Lubricants established a Central Forecasting Team (CFT) to deal with unacceptable forecasting performance. Alex Hancock worked with this team for four years, and the group eventually engineered a turnaround effort that identified and reformed practices blamed for much organizational pain. In “Forecast Process Improvement at Shell Lubricants,” Alex reflects on the fits and starts at the CFT and reveals key lessons learned for reforming the forecasting function.

We know that Sales and Operations Planning (S&OP) is a cross-functional process, bringing a company’s demand- and supply-side people together to reach consensus on the demand forecasts and operating assumptions. The participants typically engage in a series of meetings across each month, and the process outcome is determined by how effectively the individuals collaborate within the team. As Scott Ambrose writes in “Achieving S&OP Success: How Principles of Team Effectiveness Can Help,” principles of team effectiveness have been widely studied, but not applied previously to S&OP. Scott’s article examines how recognition and implementation of these principles can improve S&OP collaboration and performance.

In this fascinating discussion, “Mission-Based Forecasting: Demand Forecasting for Military Operations,” military veteran and OR expert Greg Parlier examines the application of demand forecasting and inventory management in support of military operations. The commercial “point of sale” becomes the “point of readiness generation” for the military. And customer-demand forecasting becomes mission-based forecasting. Greg highlights problems that have inhibited the supply chain’s ability to achieve mission readiness. Among the most serious is the absence of historical data on demand and the consequent inability to implement effective demand forecasting and planning procedures. It is encouraging that the military has been learning important supply-chain lessons from the business world’s application of OR techniques.

“We should be making today what was sold yesterday, and shipping it tomorrow”— this is Joe Roy’s call to manufacturers and distributors of products in his article, “Sales Forecasts for the Consumer Chain: Are We Kidding Ourselves?” Joe argues that rapid response to changes in consumer demand should supersede the traditional supply-chain goal of establishing inventory targets based on sales forecasts. Furthermore, he states that the problem with sales forecasts is that they are typically for longer periods than required and that they serve as a crutch to manufacturing’s lack of responsiveness to demand. He advocates that, in a “consumer chain,” time means next day!

Numerous forecasting support systems (FSSs) have been developed through the years to help companies select and implement forecasting procedures and to support managerial decisions. While the majority of these systems are off-the-shelf, Evangelos Spiliotis, Achilleas Raptis, and Vassilios Assimakopoulos argue that such generic systems will not always be up to the task. As they assert in “Off-the-Shelf vs. Customized Forecasting Support Systems,” problems can arise due to lack of customizability, inadequate Web-based architecture, and poor user interfaces.

The authors have developed a Web-based FSS specifically to forecast water consumption (in the province of Attica, Greece). In doing so, they took as a springboard many of the proposals for the design of an FSS presented in the special feature on FSS in Foresight’s Fall 2015 issue.

Introducing New Associate Editor Chris Gray

With our 43rd issue, Foresight welcomes new Associate Editor Chris Gray.


Chris Gray

Chris is President of Gray Research, a founder of Partners for Excellence and a founder of Worldwide Excellence Partners (WWXP), a global confederation of independent experts devoted to sharing knowledge and experiences on proven, profitable management processes. He has authored or coauthored six books, written dozens of articles and software evaluations, and run numerous seminars and workshops covering S&OP, demand planning, MRP, enterprise software, lean manufacturing and other supply chain issues. Chris is a past President of the North Shore chapter of APICS and was certified as a Fellow (CFPIM) by APICS in 1980.

His three books in the 1980s, including MRP II Standard System, A Handbook for Manufacturing Software Survival (which he coauthored with Darryl Landvater), have defined the standards for resource planning software. He also developed and taught the MRP II Software Survival Course, a class covering software evaluation and selection, software trends, and the role of systems people in implementing effective systems.

In 2006, Chris and fellow Partner of Excellence John Dougherty published Sales & Operations Planning—Best Practices, a book based on their examination of planning practices in 13 companies around the world that became a highly influential text on S&OP. In one of two reviews in Foresight’s Winter 2009 issue, John Mello wrote, “Showing how S&OP really has made a difference in the corporate world is what sets this book apart from those that merely describe how S&OP works, or is supposed to work.”

Chris joins longtime Associate Editor Stephan Kolassa as the main guardians of quality assurance in Foresight publications.


Stephan Kolassa

Stephan's day job at SAP AG in Switzerland is Research Expert, responsible for statistical and time series forecasting of SKU/store level data in the retail sector, as well as price optimization, assortment planning, and replenishment. Stephan is a member and current Secretary of the Board of Directors of the International Institute of Forecasters, publisher of Foresight. He is a prolific contributor of methodological research to a range of scholarly journals.

In his spare time, he has authored or coauthored nearly a dozen articles and commentaries for Foresight covering forecast accuracy metrics, benchmarking, simplicity in modeling, and forecasting support systems. As Associate Editor he has reviewed and edited more than 50 invited and submitted articles.

Stephan and coauthor Enno Siemsen’s new book, Demand Forecasting for Managers, has just been published by Business Expert Press and will be reviewed in Foresight’s Winter 2017 issue. The book is intended as an introduction to forecasting for the non-expert, such as a manager overseeing a forecasting group or an MBA student.


Changing the paradigm for business forecasting (Part 5 of 12)

Implications for the Offensive Paradigm

The worldview promulgated by the Offensive paradigm is that if we only had MORE – more data, more computational power, more complex models, more elaborate processes – we could eventually solve the business forecasting problem. But this just doesn’t seem to be the case.

Operating within the Offensive paradigm, we would expect the focus on MORE would generate MORE accurate forecasts. But it doesn’t.

You can always cherry-pick instances where a particular technique, with a particular dataset, under particular conditions, generates forecasts over a particular time period that are particularly good. Paul Goodwin’s article points out several of those.

Changing the Paradigm for Business Forecasting

For over a half century we have been operating within a paradigm that, to a large extent, esteems complexity in forecasting. Yet we are now facing the evidence that complexity isn’t working as we need it to – it’s no longer substantially improving our forecasts. The Offensive paradigm may have run its course.

Kuhn says that a discredited paradigm does not just go away without a fight. The transition can take a long time, and the old one does not go away until it is replaced by something new. He quotes the physicist Max Planck, “[A] new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

I’m certainly not wishing ill health on followers of the Offensive paradigm! But Planck has a point.

Copernicus published his heliocentric theory – placing the Sun rather than the Earth at the center of the universe – in the early 1500s. But it took over a century before it was generally accepted and replaced the geocentric paradigm. Isaac Newton’s ideas were not generally accepted for over 50 years.

Why the Attraction for the Offensive Paradigm

So why the continuing attraction for the Offensive paradigm? Green and Armstrong took a look at that, too, in a section of their paper headed, “Why simplicity repels and complexity lures.”

They suggest that the popularity of complexity may be due to incentives:

  • Some people are reassured by complexity.

If you are consulting for a company and suggest a forecasting method that is intuitive, reasonable, and simple, the client might ask, “Why am I paying you? I could do that myself!”

  • There may be resistance to simple methods.

A 1972 paper stated that “Few people would accept the naïve no-change model even if it were clearly shown to be more accurate.”*

A 2012 paper found that senior academics resisted overwhelming evidence that simple methods provide forecasts that are more accurate than those from complex ones.**

  • Complexity can be persuasive.

In the famous “Dr. Fox” study from 1973, a complex lecture was given high ratings even though the content was nonsense. Respondents said that while they did not understand everything Dr. Fox said, the guy knew his stuff.***

In 1980 Armstrong did a similar study using simple and complex versions of papers with identical content. The academic reviewers rated the papers more highly when written in more complex ways.****

Another experiment in 2012 used two versions of an abstract, one of which included a sentence from an unrelated paper containing an algebraic equation. The 200 reviewers, all of whom had postgraduate degrees in the subject matter, judged the abstract with the nonsense mathematics to be of higher quality.*****

  • You may be able to advance your career by writing in a complex way.

Software developed by MIT students randomly selects common complex words and applies grammar rules to produce research papers on computer science. At least 120 such computer-generated papers have been published in peer-reviewed journals.

  • Complexity can support decision makers’ plans.

If you are providing forecasts for someone who already knows what they want to do, you can always concoct a model that generates forecasts to justify their decision.

*Juster, F.T. (1972). An evaluation of the recent record in short-term forecasting. Business Economics, 7(3), 22-26.

**Hogarth, R.M. (2012). When simple is hard to accept. In Todd, Gigerenzer, & the ABC Research Group (Eds.), Ecological Rationality: Intelligence in the World (pp. 61-79).

***Naftulin, D.H., Ware, J.E., Jr., & Donnelly, F.A. (1973). The Doctor Fox lecture: a paradigm of educational seduction. Journal of Medical Education, 48, 630-635.

****Armstrong, J.S. (1980). Unintelligible management research and academic prestige. Interfaces, 10(2), 80-86.

*****Eriksson, K. (2012). The nonsense math effect. Judgment and Decision Making, 7(6), 746-749.

[See all 12 posts in the business forecasting paradigms series.]


Changing the paradigm for business forecasting (Part 4 of 12)

Is Complexity Bad?

It’s necessary to point out that Goodwin’s article is not arguing against complexity per se, and I’m not either.

When you have a high value forecast, where it is critical to be as accurate as possible, of course you are going to want to try every technique available to secure a good forecast. But it’s important to remember that complexity has a cost in time and effort and resources to prepare forecasts.

And while highly complex methods may impress some people, they may also be incomprehensible to managers, perhaps lessening their trust and willingness to use a forecast generated in a way they don’t understand. As Goodwin says, “It is therefore vital that recommendations to use complex methods be supported with strong evidence about their reliability.”

Simple vs. Complex Forecasting

Perhaps the most troubling problem with complex methods is that they don’t seem to forecast any better than simpler ones. Last year the Journal of Business Research published a special issue on simple versus complex methods in forecasting, led by an analysis of the evidence by Kesten Green and Scott Armstrong.

Green and Armstrong note that simplicity in forecasting seems easy to recognize, yet difficult to define. However, to make practical distinctions between simple and complex forecasting, they come up with this:

Simple forecasting ≡ processes that are understandable to forecast users.

They even developed a Forecasting Simplicity Questionnaire available online so you can quiz users on whether they understand and can explain a forecasting method.

Do Simple Methods Forecast Better?

Of course, the thing we really want to know is how accurate simple forecasting methods are, and whether they outperform complex methods.

To find out, the authors spent two years seeking every comparative study they could find, ultimately reviewing 32 papers that reported 97 simple-versus-complex comparisons. A good approach in any endeavor, forecasting or not, is to start simple and add complexity only if needed. This is just applying Occam’s Razor – do not multiply entities needlessly!
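To make the “start simple” advice concrete, here is a minimal sketch (not from Green and Armstrong’s paper; the demand series, window size, and function names are invented for illustration) that benchmarks a slightly more elaborate method against the naive no-change model. The point is the discipline, not these particular methods: a complex method earns its keep only by beating the simple benchmark by enough to justify its cost.

```python
def naive_forecast(history):
    """Naive no-change model: the next value equals the last observed value."""
    return history[-1]

def moving_average_forecast(history, window=3):
    """A slightly more elaborate method: average of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def mae(series, forecaster, start=3):
    """One-step-ahead mean absolute error over the series."""
    errors = [abs(series[t] - forecaster(series[:t]))
              for t in range(start, len(series))]
    return sum(errors) / len(errors)

# Hypothetical weekly demand with one promotional spike (artificial volatility).
demand = [100, 102, 98, 101, 150, 99, 103, 97, 100, 104]

naive_err = mae(demand, naive_forecast)
ma_err = mae(demand, moving_average_forecast)

print(f"naive no-change MAE: {naive_err:.1f}")
print(f"moving-average MAE:  {ma_err:.1f}")
```

Only if the candidate method’s error is materially lower than the naive benchmark’s, and the method remains understandable to its users, is the added complexity worth adopting.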

Green and Armstrong found that:

  • None of the papers provide a balance of evidence that complexity improves forecast accuracy.
  • Remarkably, no matter what type of forecasting method is used, complexity harms accuracy.
  • ...the need for complexity has not arisen.

They state:

During our more than two years working on this special issue, we made repeated requests for experimental evidence that complexity improves forecast accuracy under some conditions... We have not been able to find such papers.

Here is a summary of their results:

Table of Findings

Although they expected forecasts from simple methods would “generally tend to be somewhat more accurate,” the papers they studied consistently found in favor of simple methods – to the extent that complex methods increased error by nearly 27% on average.

Green and Armstrong state that they found this astonishing.

[See all 12 posts in the business forecasting paradigms series.]

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.

    Mike is also the author of The Business Forecasting Deal, and co-editor of Business Forecasting: Practical Problems and Solutions. He also edits the Forecasting Practice section of Foresight: The International Journal of Applied Forecasting.