Getting real about uncertainty

Paul Goodwin

In his Spring 2014 article in Foresight, Paul Goodwin addressed the important issue of point vs. probabilistic forecasts.

A point forecast is a single number (e.g., the forecast for item XYZ in December is 635 units). We are all familiar with point forecasts, as these are what's commonly produced (either by software or by judgment) in our forecasting processes.

The problem is that point forecasts provide no indication of the uncertainty in the number, and uncertainty is an important consideration in business planning. Knowing that the forecast is 635 +/- 50 units may lead to dramatically different decisions than if the forecast were 635 +/- 500 units.

There are a number of ways to provide a probabilistic view of the forecast. With prediction intervals, the forecast is presented as a range of values with an associated probability. For example, "the 90% prediction interval for item XYZ in December is 635 +/- 500 units." This would indicate much less certainty in the forecast than if the 90% prediction interval were 635 +/- 50 units.
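As a back-of-the-envelope sketch (assuming normally distributed forecast errors, with a hypothetical error standard deviation chosen to match the article's example -- neither assumption comes from Goodwin), a symmetric prediction interval can be derived from a point forecast:

```python
from statistics import NormalDist

def prediction_interval(point_forecast, error_sd, level=0.90):
    """Symmetric prediction interval, assuming normally distributed errors."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # ~1.645 for a 90% interval
    half_width = z * error_sd
    return (point_forecast - half_width, point_forecast + half_width)

# An error_sd of 304 is a hypothetical value chosen to give roughly +/- 500
lo, hi = prediction_interval(635, 304, level=0.90)
print(f"90% prediction interval: {lo:.0f} to {hi:.0f} units")
```

A wider `error_sd` widens the interval; the point forecast itself never changes.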

A fan chart provides a visual expansion of a prediction interval over multiple time periods. In Goodwin's example, the darkest band represents the 50% prediction interval, while the wider ranges show the 75% and 95% intervals.

Fan Chart

A probability density chart can provide even more granular detail on the forecast for a single time period. In this example, the most probable sales are around 500 units, but sales will almost certainly fall between 200 and 1,200 units.

Density Forecast

Goodwin reviews recent literature on the value of communicating uncertainty, and suggests that it can lead to improved decisions. However, research also shows that when interval forecasts are too wide, they are judged to be uninformative and less credible. So even if 635 +/- 500 is the appropriate width of the 90% prediction interval, decision makers may simply ignore the forecast and doubt the competence of the forecaster who produced it!

Estimating the level of uncertainty may be non-trivial, particularly when forecasts are based on human judgment. Research has repeatedly shown that people produce intervals that are far too narrow. Goodwin cites a 2013 study of financial executives providing 80% prediction intervals for one-year-ahead stock prices. Actual returns fell within the 80% intervals only 36% of the time!

A simple way to address inappropriately narrow intervals was suggested by Makridakis, Hogarth, & Gaba in one of my favorite forecasting-related books, Dance With Chance. They suggest taking your estimated prediction interval and doubling it. (I love a quick and dirty solution to a complex problem.)
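A minimal sketch of that heuristic (the function name and parameters are mine, not from the book):

```python
def widen_interval(lower, upper, factor=2.0):
    """Stretch a judgmental prediction interval about its midpoint.
    Makridakis, Hogarth & Gaba's rule of thumb: double it."""
    midpoint = (lower + upper) / 2
    half_width = (upper - lower) / 2
    return (midpoint - factor * half_width, midpoint + factor * half_width)

print(widen_interval(585, 685))  # 635 +/- 50 becomes (535.0, 735.0), i.e., 635 +/- 100
```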

Learn more in the next Foresight / SAS Webinar

On December 3, 10:00am ET, Paul Goodwin will present his findings in the next installment of the Foresight / SAS Webinar Series.

This webinar will discuss recent research suggesting that people make better decisions when forecasts contain information on uncertainty. It will demonstrate how to:

  • Estimate uncertainty.
  • Use forecasting uncertainty to your advantage.
  • Present forecasts in ways that are credible, understandable and useful.

Register for "Getting Real About Uncertainty in Forecasting" and watch the 30-second video preview.



SAS Summer Fellowships in Forecasting and Econometrics

The Advanced Analytics division of SAS Research & Development has announced three Summer Fellowships in the areas of Forecasting and Econometrics.

The SAS forecasting fellowships are open to doctoral candidates in mathematics, statistics, computer science, and related graduate departments in the United States. They offer the opportunity to work closely with professional statisticians and computer scientists who develop software used throughout the world.

SAS Short-Life-Cycle Time Series Forecasting Fellowship

The forecasting fellow will help research, develop, and document state-space analysis techniques for panels of short-life-cycle time series. Duties include working with SAS statistical forecasting developers to aid statistical computing research initiatives. The program provides an excellent opportunity to explore software development as a career choice.

SAS Energy Forecasting Fellowship

The forecasting fellow will help research, develop, and document methods for energy forecasting using high-frequency data. Duties include working with SAS statistical forecasting developers to aid statistical computing research initiatives. The program provides an excellent opportunity to explore software development as a career choice.

SAS Econometrics Fellowship

Open to doctoral candidates in economics, statistics, finance and related graduate programs in the United States, this fellowship offers the opportunity to work closely with professional econometricians and statisticians who develop SAS software used throughout the world. The Econometrics Fellow will contribute to activities such as research, numerical validation and testing, documentation, creating examples of applying SAS econometric software, and assisting with software development work. The specific projects assigned may be adjusted to the skills and interests of the Fellow selected. The program provides an excellent opportunity to explore software development as a career choice.

Apply for these (or other SAS jobs and advanced analytics fellowships) at the SAS Careers website.

Deadline for fellowship applications is January 23, 2015.


Guest blogger: Len Tashman previews Fall 2014 issue of Foresight

In 2015, Foresight: The International Journal of Applied Forecasting will celebrate 10 years of publication. From high in his aerie in the Colorado Rockies, here is Editor-in-Chief Len Tashman's preview of the current issue:

In this 35th issue of Foresight, we revisit a topic that always generates lively and entertaining discourse, one where business experience has been far more enlightening than academic research: the question of the proper Role of the Sales Force in Sales Forecasting. Our feature article by Mike Gilliland formulates the key aspects of the issue with three questions: Do salespeople have the ability to accurately predict their customers’ future buying behavior, as many assume they do? Will salespeople provide an honest forecast? And does improving customer-level forecasts improve company performance? Incisive commentaries follow Mike’s piece, contributed by forecast directors at three companies.

As Foresight Editor, I welcome continued discussion from our readers on your experiences and lessons learned at your own organizations.

Paul Goodwin’s Hot New Research column addresses a promising new method for properly representing the uncertainty behind a forecast. Called SPIES (Subjective Probability Interval Estimates), it offers a more intuitive way (than standard statistical approaches) for forecasters to determine and present the probability distribution of their forecast errors. I think you’ll find it provocative.

Our section on Forecasting Support Systems features the article Data-Cube Forecasting for the Forecasting Support System. Noted Russian consultant Igor Gusakov draws on his many years at CPG companies to show how we can achieve the best of what are now two distinct worlds, by synthesizing statistical forecasting capabilities with the OLAP (online analytical processing) tools now commonly used for business intelligence and reporting. Data cubes provide the requisite infrastructure.

Igor is also the subject of our Forecaster in the Field interview.

Our Summer 2014 issue included the first part of a feature section on Forecasting by Aggregation. Two articles there examined “temporal aggregation” opportunities, which deal with the choices of time dimension (daily, weekly, monthly, etc.) for forecasting demands.* Now we present Part Two on Forecasting by Cross-Section Aggregation within a product hierarchy. Giulio Zotteri, Matteo Kalchschmidt, and Nicola Saccani question the usual belief that the level of aggregation for forecasting is specified by the operational requirements of the company. Rather, they argue – quite convincingly – that the best level of aggregation for forecasting should be chosen by the forecasters in an attempt to balance the errors from forecasting with data at too granular a level with those at too aggregate a level.

Rob J. Hyndman and George Athanasopoulos extend the discussion by presenting a way for Optimally Reconciling Forecasts in a Hierarchy. Rarely will the sum of forecasts at a granular level equal the forecast at the group level; hence reconciliation is necessary. The authors argue that traditional reconciliation methods – bottom-up, top-down, and middle-out – fail to make the best use of available data. Their optimal reconciliation is based on a weighted average of forecasts made at all different levels of the hierarchy.
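A toy example makes the reconciliation problem concrete (hypothetical numbers; this shows simple top-down proportional scaling, one of the traditional methods the authors critique, not their optimal weighted-average approach):

```python
# Independently produced forecasts at two levels of a hierarchy.
item_forecasts = {"A": 120.0, "B": 210.0, "C": 95.0}  # granular level
total_forecast = 400.0                                 # group level

bottom_up_total = sum(item_forecasts.values())  # 425.0 -- disagrees with 400

# Classical top-down reconciliation: scale the items to match the group forecast.
scale = total_forecast / bottom_up_total
reconciled = {item: f * scale for item, f in item_forecasts.items()}
print(reconciled)  # items now sum to the group-level 400
```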


*Aris Syntetos will discuss "Forecasting by Temporal Aggregation" in the Foresight/SAS Webinar Series on October 30, 11:00am ET.


Foresight/SAS webinar October 30 on temporal aggregation

On Thursday, October 30, 11 am ET, Aris Syntetos delivers the next installment of the Foresight/SAS Webinar Series, "Forecasting by Temporal Aggregation." Based on his article in the Summer 2014 issue of Foresight, Aris provided this preview:

When we attempt to improve forecast performance, we usually consider new or alternative forecasting methods. But what if we kept the forecasting methods the same and instead changed our approach to forecasting altogether?

Have you heard about, or ever considered, forecasting by aggregating demand into lower-frequency time units (say, weekly into monthly demand)? Aggregating demand will almost always reduce demand uncertainty. It also helps to see the linkages between the output of the forecasting process (forecasts needed to support specific decisions) and the input (data available to produce the forecasts).
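A quick simulated illustration of that uncertainty reduction (synthetic data, not from the webinar): summing noisy weekly demand into non-overlapping 4-week buckets cuts the relative variability roughly in half.

```python
import random
import statistics

random.seed(1)
weekly = [max(0.0, random.gauss(100, 40)) for _ in range(104)]  # ~2 years of noisy weekly demand
monthly = [sum(weekly[i:i + 4]) for i in range(0, len(weekly), 4)]  # non-overlapping 4-week buckets

def cv(series):
    """Coefficient of variation: a scale-free measure of demand uncertainty."""
    return statistics.stdev(series) / statistics.mean(series)

print(f"weekly CV:  {cv(weekly):.2f}")
print(f"monthly CV: {cv(monthly):.2f}")  # noticeably lower after aggregation
```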

Best of all, it is highly likely that temporal aggregation will improve forecast performance. Ready to see how? Join us in this webinar to understand:

  • How forecasting by temporal aggregation works;
  • The different types of temporal aggregation (overlapping and non-overlapping);
  • The data requirements for supporting forecasting by temporal aggregation;
  • The linkage between decision making requirements and the time periods in which demand is recorded.

Aris Syntetos

Aris is Professor of Operational Research and Operations Management at Cardiff University, UK. His research interests relate to supply chain forecasting and its interface with inventory management, and he has advised many commercial organizations in this area. In addition, forecasting, demand classification, and stock control algorithms he co-developed have been implemented in, or are currently being considered for, commercial software packages.

Aris is the supply-chain forecasting editor for Foresight. He serves on the Executive Committee of the International Society for Inventory Research (ISIR) and on the Board of Directors of the International Institute of Forecasters (IIF).

Check out the video preview of the presentation, and register for free.

For more discussion of this topic, and to learn how to apply the approach in SAS Forecast Server software, see the BFD post "Forecasting across a time hierarchy with temporal reconciliation."


Significant Digits in Business Forecasting

My favorite dog trick is the group hug. This is achieved by taking a dog's bad habit (rising up on strangers who don't really want a 70-pound dog rising up on them) and "flipping it" into something cute and adorable. It's all a matter of controlling perception, and perception is exactly what forecasters manipulate when they abuse significant digits.

Do You Really Need All Those Digits?

In "The Forecasting Dictionary," Armstrong1 cites a 1990 study by Teigen2 showing that more precise reporting can make a forecast more acceptable, unless such precision seems unreasonable. Thus, management may have more faith in my competence (and the accuracy of my forecasts) if I project revenues of $147,000, $263,000, and $72,500 for my three products (rather than $100,000, $300,000, and $70,000). However, they'll likely see through my ruse if I forecast $147,362.27, $262,951.85, and $72,515.03. (Not even the most naïve executive believes a forecast can be that close to being right!)

So how many significant digits should we use in reporting our forecasts? More digits (as long as there aren't too many!) can give a false sense of precision, implying we have more confidence in a forecast when such confidence isn't really merited. Fewer digits are just another way of saying we don't have much confidence at all in our forecast, implying a wide prediction interval in which the actual value will fall.

Armstrong (p. 810) suggests "A good rule of thumb is to use three significant digits unless the measures do have greater precision and the added precision is needed by the decision maker." I suspect there are very few circumstances in which more than three digits are needed. You are probably safe to configure your forecasting software to show only three significant digits.

Think of it this way: Suppose you can create perfect forecasts (100% accurate), but are limited to just three significant digits. What is the worst case forecast error?

If the perfect forecast is 1005, but you are limited to three significant digits, then the forecast would be 1010 or 1000 (depending on whether you round up or down). When the actual turns out to be 1005 (as predicted with the perfect forecast), then your forecast error is 0.5%.

If the perfect forecast is 9995, then the three significant digit forecast is 10,000 or 9990 (depending on whether you round up or down). When the actual turns out to be 9995 (as we predicted with the perfect forecast), your forecast error is 0.05%.

Limiting the forecast to three significant digits creates at most a 0.5% forecast error (and often considerably less than that). What forecaster wouldn't sell his soul for a 0.5% error? Given that typical business forecast errors are in the 20-50% range, another half a percent (worst case!) is nothing.
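The arithmetic above is easy to check (`round_sig` is my own helper, not part of any forecasting package; note that Python's `round` breaks ties to the even digit, so 1005 rounds down here, but rounding up gives the same 0.5% error):

```python
from math import floor, log10

def round_sig(x, digits=3):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

for actual in (1005, 9995):
    forecast = round_sig(actual)  # the "perfect" forecast, rounded to 3 digits
    error_pct = abs(forecast - actual) / actual * 100
    print(f"actual {actual}: rounded forecast {forecast}, error {error_pct:.2f}%")
```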

To conclude: In the continuing pursuit to streamline the forecasting process, and to make it appear less ridiculous, get rid of all those extraneous decimal points and digits, and limit your forecasts to three significant digits.


1In Armstrong, J.S. (2001), Principles of Forecasting. Kluwer Academic Publishers, pp. 761-824.

2Teigen, K.H. (1990), "To be convincing or to be right: A question of preciseness," in K.J. Gilhooly, M.T.G. Keane, R.H. Logie & G. Erdos (eds.), Lines of Thinking. Chichester: John Wiley, pp. 299-313.





5 steps to setting forecasting performance objectives (Part 2)

And now for the five steps:

1. Ignore industry benchmarks, past performance, arbitrary objectives, and what management "needs" your accuracy to be.

Published benchmarks of industry forecasting performance are not relevant. See the prior post "The perils of forecasting benchmarks" for an explanation.

Previous forecasting performance may be interesting to know, but it is not relevant to setting next year's objectives. We have no guarantee that next year's data will be equally forecastable. For example, suppose a retailer switches a product from everyday low pricing (which generated stable demand) to high-low pricing (where alternating on- and off-promotion periods generate highly volatile demand). You cannot expect to forecast the volatile demand as accurately as the stable demand.

And of course, arbitrary objectives (like "All MAPEs < 20%") or what management "feels it needs" to run a profitable business, are inappropriate.

2. Consider forecastability...but realize you don't know what it will be next year.

Forecast accuracy objectives should be set based on the "forecastability" of what you are trying to forecast. If something has smooth and stable behavior, then we ought to be able to forecast it quite accurately. If it has wild, volatile, erratic behavior, then we can't have such lofty accuracy expectations.

While it is easy to look back on history and see which patterns were more or less forecastable, we don't have that knowledge of the future. We don't know, in advance, whether product X or product Y will prove to be more forecastable, so we can't set a specific accuracy target for them.

3. Do no worse than the naïve model.

Every forecaster should be required to take the oath, "First, do no harm." Doing harm is doing something that makes the results worse than doing nothing. And in forecasting, doing nothing means using the naïve model (i.e., random walk, aka "no change" model), where your forecast of the future is your most recent "actual" value. (So if you sold 50 last week, your forecast for future weeks is 50. If you actually sell 60 this week, your forecast for future weeks becomes 60. And so on.)

You don't need fancy systems or people or processes to generate a naïve forecast -- it is essentially free. So the most basic (and most pathetic) minimum performance requirement for any forecaster is to do no worse than the naïve forecast.
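A naïve forecast really is nearly free to produce. The sketch below (with hypothetical sales figures) generates one-step-ahead naïve forecasts and scores them with MAPE, setting the bar your forecasting process must beat:

```python
def naive_forecast(history):
    """Random walk ("no change"): the forecast is the most recent actual."""
    return history[-1]

def mape(actuals, forecasts):
    """Mean absolute percentage error."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

sales = [50, 60, 55, 70, 65]  # hypothetical weekly actuals
# One-step-ahead naive forecasts: each period's forecast is the prior actual.
forecasts = [naive_forecast(sales[:i]) for i in range(1, len(sales))]
print(forecasts)                             # [50, 60, 55, 70]
print(f"{mape(sales[1:], forecasts):.1f}%")  # the naive model's MAPE
```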

4. Irritate management by not committing to specific numerical forecast accuracy objectives.

It is generally agreed that a forecasting process should do no worse than the naïve model. Yet in real life, perhaps 50% of business forecasts fail to achieve this embarrassingly low threshold. (See Steve Morlidge's recent articles in Foresight, which have been covered in previous BFD blog posts). Since we do not yet know how well the naïve model will forecast in 2015, we cannot set a specific numerical accuracy objective. So the 2015 objective can only be "Do no worse than the naïve model."

If you are a forecaster, it can be reckless and career threatening to commit to a more specific objective.

5. Track performance over time.

Once we are into 2015 and the "actuals" start rolling in each period, we can compare our forecasting performance to the performance of the naïve model. Of course you cannot jump to any conclusions with just a few periods of data, but over time you may be able to discern whether you, or the naïve model, is performing better.

Always start your analysis with the null hypothesis, H0: There is no difference in performance. Until there is sufficient data to reject H0, you cannot claim to be doing better (or worse) than the naïve model.
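One simple way to formalize that comparison (my suggestion, not prescribed in the post) is a sign test on which method had the smaller absolute error in each period:

```python
from math import comb

def sign_test_p(wins, n):
    """Two-sided sign test: p-value for observing a win/loss split at least
    this lopsided when H0 (the two methods are equivalent) is true."""
    k = max(wins, n - wins)
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# Hypothetical: your forecast beat the naive model in 8 of 10 periods.
print(f"p = {sign_test_p(8, 10):.3f}")  # ~0.11 -- not yet enough to reject H0
```

With only a few periods of data, even a lopsided-looking record is consistent with "no difference," which is exactly the point about not jumping to conclusions.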

Promotional Discount for APICS2014

If you'd like to hear more about setting performance objectives, and talk in person about your issues, please join me at APICS2014 in New Orleans (October 19-22). As a conference speaker, I've been authorized to provide an RSVP code NOLAF&F for you to receive a $100 discount off your registration.

My presentation "What Management Must Know About Forecasting" is Sunday October 19, 8:00-9:15 am -- just in time for you to be staggering in from Bourbon St.



5 steps to setting forecasting performance objectives (Part 1)

It is after Labor Day in the US, meaning we must no longer wear white shoes, skirts, jackets, or trousers. But even if you are now going sans-culotte, it is time to begin thinking about organizational performance objectives for 2015.

Setting forecasting performance objectives is one way for management to shine...or to demonstrate an abysmal lack of understanding of the forecasting problem. Inappropriate performance objectives can provide undue rewards (if they are too easy to achieve), or can serve to demoralize employees and encourage them to cheat (when they are too difficult or impossible). For example:

Suppose you have the peculiar job of forecasting Heads or Tails in the daily toss of a fair coin. While you sometimes get on a hot streak and forecast correctly for a few days in a row, you also hit cold streaks, where you are wrong on several consecutive days. But overall, over the course of a long career, you forecast correctly just about 50% of the time.

If your manager had been satisfied with 40% forecast accuracy, then you would have enjoyed many years of excellent bonuses for doing nothing. Because of the nature of the process -- the tossing of a fair coin -- it took no skill to achieve 50% accuracy. (By one definition, if doing something requires "skill" then you can purposely do poorly at it. Since you could not purposely call the tossing of a fair coin only 40% of the time, performance is not due to skill but to luck. See Michael J. Mauboussin's The Success Equation for more thorough discussion of skill vs. luck.)

If you get a new manager who sets your goal at 60% accuracy, then you either need to find a new job or figure out how to cheat. Because again, by the nature of the process of tossing a fair coin, your long term forecasting performance can be nothing other than 50%. Achieving 60% accuracy is impossible.
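A quick simulation makes the point (the "strategy" here -- predicting a repeat of yesterday's toss -- is arbitrary; any rule fares the same against a fair coin):

```python
import random

random.seed(42)
n_days = 10_000
tosses = [random.choice("HT") for _ in range(n_days)]

# A seemingly clever strategy: predict that yesterday's result repeats.
predictions = ["H"] + tosses[:-1]
accuracy = sum(p == t for p, t in zip(predictions, tosses)) / n_days
print(f"{accuracy:.1%}")  # hovers around 50%, skilled or not
```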

So just how do you set appropriate objectives for forecasting? We'll learn the five steps in Part 2.

Reminder: Foresight Practitioner Conference (Oct. 8-9, Columbus, OH)

This year's topic is "From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain" and features an agenda of experienced practitioners and noted authors. The conference provides a unique vendor-free experience, so you don't have to worry about me (or similar hucksters) being there harassing you from an exhibitor booth.

Register between September 2-9 with code 2014Foresight and receive a $200 discount.

Hosts Foresight and Ohio State University provide an exceptional 1.5-day professional-development opportunity.

See the conference program and speakers.




Reminder: September 30 deadline for IIF/SAS Research Grants

I wanted to pass along this reminder from Pam Stroud at the International Institute of Forecasters:

Grant to Promote Research on Forecasting

Pam Stroud

For the twelfth year, the IIF, in collaboration with SAS®, is proud to announce financial support for research on how to improve forecasting methods and business forecasting practice. This year's award will be two $5,000 grants. The application deadline for the 2014-2015 grant year is September 30, 2014. For more information, visit the IIF website.

Industry forecasting practitioners should also consider applying for these grants. As a practitioner, you have access to a wealth of your own company data -- the kind of data that academics have difficulty obtaining. Analyses of volatility, FVA, and the real-life performance of particular models, forecasting systems, and forecasting processes would all be of interest.


Forecasting and supply chain blogs

In the summer heat, when The BFD alone isn't quite quenching your thirst for forecasting know-how, here are several other sources:


Steve Morlidge

CatchBlog -- by Steve Morlidge of CatchBull

From his 2010 book Future Ready (co-authored with Steve Player), to his recent 4-part series in Foresight dealing with the "avoidability" of forecast error, Steve is delivering cutting edge content in the area of forecast performance management. He also contributed a Foresight/SAS Webinar available for on-demand review.

Eric Stellwagen

The Forecast Pro -- by Eric Stellwagen of Business Forecast Systems

Eric delivers great practical tutorials on various forecasting topics through his blog, and also in conference presentations and articles (including 2 chapters in the Foresight Forecasting Methods Tutorials compilation).

IBF Blog -- various industry contributors

Good source for previews to upcoming Institute of Business Forecasting events, contributed by conference speakers. Also event recaps by conference attendees, and previews of the Journal of Business Forecasting.

Shaun Snapp

SCM Focus Blogs -- by Shaun Snapp of SCM Focus

Shaun is a prolific writer of blogs (he maintains several on his website) and books (most recently Rethinking Enterprise Software Risk), and provides a sharp critical eye on IT, ERP, supply chain, and forecasting issues.

Lora Cecere

Supply Chain Shaman -- by Lora Cecere of Supply Chain Insights

Lora is a long time industry analyst and, like Shaun, is another prolific writer covering a wide range of topics. Her recent ebook The Supply Chain Shaman's Journal: A Focused Look at Demand includes some nice mention of FVA.





IIF/SAS grants to support research on forecasting

Again this year (for the 12th time), SAS Research & Development has funded two $5,000 research grants, to be awarded by the International Institute of Forecasters.

  • Criteria for award of the grant will include likely impact on forecasting methods and business applications.
  • Consideration will be given to new researchers in the field and whether supplementary funding is possible.
  • The application must include: Project description, letter of support, C.V., and budget for the project.

Applications should be submitted to the IIF office by September 30, 2014 in electronic format to:

Pamela Stroud, Business Director,

For more information:

See Pamela's IIF blog for information on 2013 grant winners Jeffrey Stonebraker (North Carolina State University, USA) and Yongchen (Herbert) Zhao (University at Albany, USA). The more information link also provides details on past grant recipients, including such forecasting royalty as Robert Fildes, Dave Dickey, Sven Crone, Paul Goodwin (who delivered the inaugural Foresight/SAS Webinar last year), and Stavros Asimakopoulos, who is delivering the July 24 (10am ET) Foresight/SAS Webinar on the topic of forecasting with mobile devices.

Stavros Asimakopoulos

In addition to his recent article in Foresight, Stavros has also posted "Forecasting in the Pocket" on the SAS Insights Center.

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.