IBF blog series on Forecast Value Added

Calling All Forecasters

Have you tried Forecast Value Added analysis? What did you find out? Are you willing to share your learnings (at least those that can be revealed publicly)? Would you like to be featured in a new blog series on FVA, published by the Institute of Business Forecasting?

The IBF was an early advocate of FVA analysis, from including "FVA" in their Glossary of forecasting terms, to providing a platform for me and other industry forecasters to speak on the subject at IBF events. The blog series will include interviews with forecasting practitioners and provide case studies on the application of the FVA mindset at their organizations.

FVA Stairstep Report

In the forthcoming December interview, Jonathon Karelse (co-founder of NorthFind Partners) will discuss his use of FVA starting at Yokohama Tire Canada, and now with his consulting clientele.

How to Participate

Interested in sharing your story? Please email me (mike.gilliland@sas.com) to discuss.

For anyone who is publicity shy (due to company PR restrictions, outstanding arrest warrants, debt collectors, etc.), I'm happy to hold your (and your company's) identity confidential.

More News from IBF: 2014 Recognition Award Recipients

While visiting the IBF blog site, don't miss the announcement of IBF's 2014 Recognition Awards, to Mark Covas (Lifetime Achievement) and John Gallucci (Forecasting & Planning Excellence).

Mark is Group Director of the Planning Center of Excellence at Coca-Cola, and a longtime friend of The BFD.* He was recognized for 20 years of contributions to the IBF as a speaker, journal contributor, and advisory board member. As a practitioner, he has led demand planning initiatives across the globe at Levi Strauss, Johnson & Johnson, Starbucks, Gillette, Procter & Gamble, and Georgia-Pacific.

John is Senior Director of Product Planning at Pinnacle Foods, and was recognized for his use of lean techniques to improve forecasting process efficiency and effectiveness, and for managing ground-up implementations of demand planning and S&OP processes. He is also a frequent speaker at IBF events, and a contributor to the Journal of Business Forecasting.

Congratulations to this year's recipients!

-----------------

*I have personally made tens of dollars in royalties from copies of The Business Forecasting Deal Mark purchased for his staff.


Lancaster Centre for Forecasting survey

Can you spare a few minutes to assist researchers at the Lancaster Centre for Forecasting?

We have been commissioned by the European Journal of Operational Research to write a review article on Supply Chain Forecasting. In undertaking this task, we would like to ensure that the topics covered reflect the priorities of the forecasting community as a whole. To that end, we have prepared an (anonymous) questionnaire that aims to identify the most important areas that should be addressed in our paper, both from academic and practitioner perspectives.

To participate in this study, visit the Supply Chain Questionnaire. It will take no more than 5-10 minutes of your time. Please indicate if you are an academic or a practitioner in the last section of the questionnaire. We are working to very tight deadlines, so your timely help and participation is appreciated. Should you have any questions or comments on this work, please contact Aris Syntetos. Thank you for your time and co-operation,

A.A. Syntetos, M.Z. Babai, J.E. Boylan, S. Kolassa & K. Nikolopoulos

You will recognize lead author Aris Syntetos, who delivered the October Foresight/SAS Webinar on "Forecasting by Temporal Aggregation." (The recorded webinar is now available for on-demand review.) And Stefan Kolassa co-authored a clever Foresight article, "Percentage Errors Can Ruin Your Day," which was discussed in the BFD post "Tumbling Dice" in 2011.


Gaming the forecast

Business forecasting is a highly politicized process, subject to the biases and personal agendas of all forecasting process participants. This is why many -- perhaps most -- human adjustments to the forecast fail to make it better. And this is why relative metrics, such as FVA, are so helpful in properly evaluating process performance.
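For readers who haven't seen FVA computed, here is a minimal sketch in Python with purely hypothetical numbers: FVA is simply the error of a comparison forecast (such as the naïve model) minus the error of the process step being evaluated. Nothing here is specific to any software; the data and the judgmental "final forecast" are made up for illustration.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100 * np.mean(np.abs(actual - forecast) / np.abs(actual))

# Hypothetical history: actual demand, the statistical forecast,
# and the final forecast after judgmental overrides.
actual   = [100, 120,  90, 110, 130, 105]
stat_fc  = [105, 115,  95, 108, 125, 110]
final_fc = [120, 130, 100, 120, 140, 118]  # overrides added upward bias

# Naive (random walk) forecast: the prior period's actual.
naive_fc = [95] + actual[:-1]  # 95 = assumed actual just before this window

for name, fc in [("naive", naive_fc), ("statistical", stat_fc), ("final", final_fc)]:
    print(f"{name:12s} MAPE = {mape(actual, fc):5.1f}%")

# FVA of a step = error of the comparison (e.g., naive) minus error of the step;
# positive FVA means the step improved on doing nothing.
print(f"FVA of final vs. naive: {mape(actual, naive_fc) - mape(actual, final_fc):+.1f} points")
```

A negative FVA for the final forecast is exactly the "games" problem: the human touches made the forecast worse than leaving it alone.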

The Games People Play

John Mello

In his article "The Impact of Sales Forecast Game Playing on Supply Chains" (Foresight, Spring 2009), John Mello provides a useful taxonomy of seven such games that bias the forecasting process and degrade forecast accuracy.

Enforcing: Maintaining a higher forecast than actually anticipated sales, in order to keep forecasts in line with the organization's sales or financial goals.

When the voice of the marketplace is screaming that you aren't going to hit your sales targets, this should be considered a blessing! Perhaps now, knowing about this in advance, you can do something to address the demand shortfall. Unfortunately, many organizations ignore the screaming and simply adjust their forecast to meet the objective. It is easy to understand why this happens: as a practical matter, as long as your forecast still meets your target, your boss won't scream at you!

Filtering: Changing forecasts to reflect the amount of product actually available for sale in a given period.

When you are capacity constrained or otherwise experiencing demand greater than supply, a great way to show 100% forecast accuracy is to forecast sales equal to the expected supply.

Hedging: Overestimating sales in order to secure additional product or production capacity.

No sales person wants to explain unfilled orders to his or her customers. How to avoid this? Simply forecast a lot more than you really expect to need. (Just make sure your heightened forecasts go only to the supply planning folks, not to your own sales management. See next.)

Sandbagging: Underestimating sales in order to set expectations lower than actually anticipated demand.

For your sales management, send in low forecasts. That'll help keep your quotas within reach, and win you that award trip to Hawaii.

Second Guessing: Changing forecasts based on instinct or intuition about future sales.

Sometimes there is no data (new products), or the data may be otherwise inappropriate or insufficient. Sometimes you have to make adjustments based on experience and knowledge of the business. Just make sure to track the performance of such adjustments to make sure they are actually improving accuracy and reducing bias.

Spinning: Manipulating forecasts to obtain the most favorable reaction from individuals or departments in the organization.

Control the data, and control the means of measurement, and you can control the message.

Withholding: Refusing to share current sales information with other members of the organization.

Who wants to get yelled at twice? Why tell your boss you aren't going to hit this quarter's numbers (and get yelled at now), when you can just keep forecasting the hockey stick and only get yelled at when the quarter ends and the sales don't materialize?

How many games are going on at your organization?


Getting real about uncertainty

Paul Goodwin

In his Spring 2014 article in Foresight, Paul Goodwin addressed the important issue of point vs. probabilistic forecasts.

A point forecast is a single number (e.g., the forecast for item XYZ in December is 635 units). We are all familiar with point forecasts, as these are what is commonly produced (either by software or by judgment) in our forecasting processes.

The problem is that point forecasts provide no indication of the uncertainty in the number, and uncertainty is an important consideration in business planning. Knowing that the forecast is 635 +/- 50 units may lead to dramatically different decisions than if the forecast were 635 +/- 500 units.

There are a number of ways to provide a probabilistic view of the forecast. With prediction intervals, the forecast is presented as a range of values with an associated probability. For example, "the 90% prediction interval for item XYZ in December is 635 +/- 500 units." This would indicate much less certainty in the forecast than if the 90% prediction interval were 635 +/- 50 units.
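As a minimal illustration (assuming, hypothetically, that past forecast errors are roughly normally distributed with a known standard deviation; real software estimates this from the model), a 90% prediction interval can be computed like this:

```python
from scipy import stats

point_forecast = 635   # units, from the example in the text
error_std = 304        # hypothetical std. dev. of historical forecast errors

# Assuming roughly normal errors, a 90% prediction interval is
# point forecast +/- z * sigma, with z ~ 1.645 for 90% coverage.
z = stats.norm.ppf(0.95)   # two-sided 90% interval => 5% in each tail
half_width = z * error_std
print(f"90% PI: {point_forecast:.0f} +/- {half_width:.0f} "
      f"({point_forecast - half_width:.0f} to {point_forecast + half_width:.0f})")
```

With an error standard deviation around 304 units, this reproduces the "635 +/- 500" interval from the example above.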

A fan chart provides a visual expansion of a prediction interval over multiple time periods. In Goodwin's example, the darkest band represents the 50% prediction interval, while the wider ranges show the 75% and 95% intervals.

Fan Chart

A probability density chart can provide even more granular detail on the forecast for a single time period. In this example, the most probable sales are around 500 units, but sales will almost certainly fall between 200 and 1,200.

Density Forecast

Goodwin reviews recent literature on the value of communicating uncertainty, and suggests that it can lead to improved decisions. However, research also shows that interval forecasts that are too wide are judged to be uninformative and less credible. So even if 635 +/- 500 is the appropriate width of the 90% prediction interval, decision makers may simply ignore the forecast and doubt the competence of the forecaster who produced it!

Estimating the level of uncertainty may be non-trivial, particularly when forecasts are based on human judgment. Research has repeatedly shown that people produce intervals that are far too narrow. Goodwin cites a 2013 study of financial executives providing 80% prediction intervals for one-year-ahead stock prices. Actual returns fell within the 80% intervals only 36% of the time!

A simple way to address inappropriately narrow intervals was suggested by Makridakis, Hogarth, & Gaba in one of my favorite forecasting-related books, Dance With Chance. They suggest taking your estimated prediction interval and doubling it. (I love a quick and dirty solution to a complex problem.)

Learn more in the next Foresight / SAS Webinar

On December 3 at 10:00am ET, Paul Goodwin will present his findings in the next installment of the Foresight / SAS Webinar Series.

This webinar will discuss recent research suggesting that people make better decisions when forecasts contain information on uncertainty. It will demonstrate how to:

  • Estimate uncertainty.
  • Use forecasting uncertainty to your advantage.
  • Present forecasts in ways that are credible, understandable and useful.

Register for "Getting Real About Uncertainty in Forecasting" and watch the 30-second video preview.


SAS Summer Fellowships in Forecasting and Econometrics

The Advanced Analytics division of SAS Research & Development has announced three Summer Fellowships in the areas of Forecasting and Econometrics.

The SAS forecasting fellowships are open to doctoral candidates in mathematics, statistics, computer science, and related graduate departments in the United States. They offer the opportunity to work closely with professional statisticians and computer scientists who develop software used throughout the world.

SAS Short-Life-Cycle Time Series Forecasting Fellowship

The forecasting fellow will contribute to researching, developing, and documenting state-space analysis techniques for panels of short-life-cycle time series. Duties include working with SAS statistical forecasting developers to aid statistical computing research initiatives. The program provides an excellent opportunity to explore software development as a career choice.

SAS Energy Forecasting Fellowship

The forecasting fellow will contribute to researching, developing, and documenting methods for energy forecasting using high-frequency data. Duties include working with SAS statistical forecasting developers to aid statistical computing research initiatives. The program provides an excellent opportunity to explore software development as a career choice.

SAS Econometrics Fellowship

Open to doctoral candidates in economics, statistics, finance and related graduate programs in the United States, this fellowship offers the opportunity to work closely with professional econometricians and statisticians who develop SAS software used throughout the world. The Econometrics Fellow will contribute to activities such as research, numerical validation and testing, documentation, creating examples of applying SAS econometric software, and assisting with software development work. The specific projects assigned may be adjusted to the skills and interests of the Fellow selected. The program provides an excellent opportunity to explore software development as a career choice.

Apply for these (or other SAS jobs and advanced analytics fellowships) at the SAS Careers website.

Deadline for fellowship applications is January 23, 2015.


Guest blogger: Len Tashman previews Fall 2014 issue of Foresight

In 2015, Foresight: The International Journal of Applied Forecasting will celebrate 10 years of publication. From high in his aerie in the Colorado Rockies, here is Editor-in-Chief Len Tashman's preview of the current issue:

In this 35th issue of Foresight, we revisit a topic that always generates lively and entertaining discourse, one where business experience has been far more enlightening than academic research: the question of the proper Role of the Sales Force in Sales Forecasting. Our feature article by Mike Gilliland formulates the key aspects of the issue with three questions: Do salespeople have the ability to accurately predict their customers’ future buying behavior, as many assume they do? Will salespeople provide an honest forecast? And does improving customer-level forecasts improve company performance? Incisive commentaries follow Mike’s piece, contributed by forecast directors at three companies.

As Foresight Editor, I welcome continued discussion from our readers on your experiences and lessons learned at your own organizations.

Paul Goodwin’s Hot New Research column addresses a promising new method for properly representing the uncertainty behind a forecast. Called SPIES (Subjective Probability Interval Estimates), it offers a more intuitive way (than standard statistical approaches) for forecasters to determine and present the probability distribution of their forecast errors. I think you’ll find it provocative.

Our section on Forecasting Support Systems features the article Data-Cube Forecasting for the Forecasting Support System. Noted Russian consultant Igor Gusakov draws on his many years at CPG companies to show how we can achieve the best of what are now two distinct worlds, by synthesizing statistical forecasting capabilities with the OLAP (online analytical processing) tools now commonly used for business intelligence and reporting. Data cubes provide the requisite infrastructure.

Igor is also the subject of our Forecaster in the Field interview.

Our Summer 2014 issue included the first part of a feature section on Forecasting by Aggregation. Two articles there examined “temporal aggregation” opportunities, which deal with the choices of time dimension (daily, weekly, monthly, etc.) for forecasting demands.* Now we present Part Two on Forecasting by Cross-Section Aggregation within a product hierarchy. Giulio Zotteri, Matteo Kalchschmidt, and Nicola Saccani question the usual belief that the level of aggregation for forecasting is specified by the operational requirements of the company. Rather, they argue – quite convincingly – that the best level of aggregation should be chosen by the forecasters, balancing the errors from forecasting at too granular a level against those from forecasting at too aggregate a level.

Rob J. Hyndman and George Athanasopoulos extend the discussion by presenting a way for Optimally Reconciling Forecasts in a Hierarchy. Rarely will the sum of forecasts at a granular level equal the forecast at the group level; hence reconciliation is necessary. The authors argue that traditional reconciliation methods – bottom-up, top-down, and middle-out – fail to make the best use of available data. Their optimal reconciliation is based on a weighted average of forecasts made at all different levels of the hierarchy.
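To show the flavor of the idea, here is a minimal sketch of the simplest (ordinary least squares) variant of this kind of combination reconciliation for a toy hierarchy where Total = A + B; the authors' full method generalizes this with more careful weighting of the levels.

```python
import numpy as np

# Tiny hierarchy: Total = A + B. The summing matrix maps the bottom-level
# series to every node of the hierarchy.
S = np.array([[1, 1],   # Total
              [1, 0],   # A
              [0, 1]])  # B

# Independently produced base forecasts at every level -- note they don't add up.
y_hat = np.array([1000., 420., 540.])  # Total, A, B (420 + 540 = 960 != 1000)

# Simplest "optimal combination" (equal/OLS weights): project the base
# forecasts onto the space of forecasts that do add up.
beta = np.linalg.lstsq(S, y_hat, rcond=None)[0]  # implied bottom-level forecasts
y_tilde = S @ beta
print(y_tilde)  # reconciled forecasts, coherent across the hierarchy
```

Every level's base forecast influences the result, which is exactly how this approach differs from pure bottom-up or top-down reconciliation.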

----------------------

*Aris Syntetos will discuss "Forecasting by Temporal Aggregation" in the Foresight/SAS Webinar Series on October 30, 11:00am ET.


Foresight/SAS webinar October 30 on temporal aggregation

On Thursday, October 30, 11 am ET, Aris Syntetos delivers the next installment of the Foresight/SAS Webinar Series, "Forecasting by Temporal Aggregation." Based on his article in the Summer 2014 issue of Foresight, Aris provided this preview:

When we attempt to improve forecast performance, we usually consider new or alternative forecasting methods. But how about keeping the forecasting methods the same, and instead changing our approach to forecasting altogether?

Have you heard about, or ever considered, forecasting by aggregating demand into lower-frequency time units (say, weekly demand into monthly)? Aggregating demand will almost always reduce demand uncertainty. It also helps to see the linkages between the output of the forecasting process (forecasts needed to support specific decisions) and the input (data available to produce the forecasts).
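As a quick illustration of the uncertainty-reduction claim (with simulated data, not anything from Aris's article), here is a minimal sketch: aggregate noisy weekly demand into non-overlapping monthly buckets and compare the coefficient of variation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two years of weekly demand: level around 100 with heavy noise.
weekly = rng.poisson(100, size=104).astype(float)
weekly += rng.normal(0, 25, size=104)  # extra volatility

# Non-overlapping temporal aggregation: sum each block of 4 weeks.
monthly = weekly.reshape(-1, 4).sum(axis=1)

cv = lambda x: x.std() / x.mean()  # coefficient of variation
print(f"weekly  CV = {cv(weekly):.2f}")
print(f"monthly CV = {cv(monthly):.2f}")  # typically noticeably smaller
```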

Best of all, it is highly likely that temporal aggregation will improve forecast performance. Ready to see how? Join us for the webinar to understand:

  • How forecasting by temporal aggregation works;
  • The different types of temporal aggregation (overlapping and non-overlapping);
  • The data requirements for supporting forecasting by temporal aggregation;
  • The linkage between decision making requirements and the time periods in which demand is recorded.

Aris Syntetos

Aris is Professor of Operational Research and Operations Management at Cardiff University, UK. His research interests relate to supply chain forecasting and its interface with inventory management, and he has advised many commercial organizations in this area. In addition, forecasting, demand classification, and stock control algorithms he co-developed have been implemented in commercial software packages, or are currently being considered for implementation.

Aris is the supply-chain forecasting editor for Foresight. He serves on the Executive Committee of the International Society for Inventory Research (ISIR) and on the Board of Directors of the International Institute of Forecasters (IIF).

Check out the video preview of the presentation, and register for free.

For more discussion of this topic, and to learn how to apply the approach in SAS Forecast Server software, see the BFD post "Forecasting across a time hierarchy with temporal reconciliation."


Significant Digits in Business Forecasting

My favorite dog trick is the group hug. This is achieved by taking a dog's bad habit (rising up on strangers who don't really want a 70# dog rising up on them) and "flipping it" into something cute and adorable. It's all a matter of controlling perception, and controlling perception is something forecasters excel at through the abuse of significant digits.

Do You Really Need All Those Digits?

In "The Forecasting Dictionary," Armstrong1 cites a 1990 study by Teigen2 showing that more precise reporting can make a forecast more acceptable, unless such precision seems unreasonable. Thus, management may have more faith in my competence (and the accuracy of my forecasts) if I project revenues of $147,000, $263,000, and $72,500 for my three products (rather than $100,000, $300,000, and $70,000). However, they'll likely see through my ruse if I forecast $147,362.27, $262,951.85, and $72,515.03. (Not even the most naïve executive believes a forecast can be that close to being right!)

So how many significant digits should we use in reporting our forecasts? More digits (as long as there aren't too many!) can give a false sense of precision, implying we have more confidence in a forecast when such confidence isn't really merited. Fewer digits are just another way of saying we don't have much confidence at all in our forecast, implying a wide prediction interval in which the actual value will fall.

Armstrong (p. 810) suggests "A good rule of thumb is to use three significant digits unless the measures do have greater precision and the added precision is needed by the decision maker." I suspect there are very few circumstances in which more than three digits are needed. You are probably safe to configure your forecasting software to show only three significant digits.

Think of it this way: Suppose you can create perfect forecasts (100% accurate), but are limited to just three significant digits. What is the worst case forecast error?

If the perfect forecast is 1005, but you are limited to three significant digits, then the forecast would be 1010 or 1000 (depending on whether you round up or down). When the actual turns out to be 1005 (as predicted with the perfect forecast), then your forecast error is 0.5%.

If the perfect forecast is 9995, then the three significant digit forecast is 10,000 or 9990 (depending on whether you round up or down). When the actual turns out to be 9995 (as we predicted with the perfect forecast), your forecast error is 0.05%.

Limiting the forecast to three significant digits creates at most a 0.5% forecast error (and often considerably less than that). What forecaster wouldn't sell his soul for a 0.5% error? Given that typical business forecast errors are in the 20-50% range, another half a percent (worst case!) is nothing.
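If you'd like to verify the arithmetic, here is a small sketch; round_sig is a hypothetical helper for rounding to significant digits, not a built-in function.

```python
import math

def round_sig(x, digits=3):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, digits - 1 - exponent)

# Worst case from the text: a "perfect" forecast of 1005 rounds to 1000,
# giving a 0.5% error purely from the rounding.
for perfect in (1005, 9995, 147362.27):
    rounded = round_sig(perfect)
    err = abs(perfect - rounded) / perfect * 100
    print(f"{perfect:>10} -> {rounded:>8.0f}  error from rounding: {err:.2f}%")
```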

To conclude: In the continuing pursuit to streamline the forecasting process, and make it appear less ridiculous, get rid of all those extraneous decimal points and digits, and limit your forecasts to just three significant digits.

----------------

1"The Forecasting Dictionary," in Armstrong, J.S. (ed.) (2001), Principles of Forecasting. Kluwer Academic Publishers, pp. 761-824.

2Teigen, K.H. (1990), "To be convincing or to be right: A question of preciseness," in K.J. Gilhooly, M.T.G. Keane, R.H. Logie & G. Erdos (eds.), Lines of Thinking. Chichester: John Wiley, pp. 299-313.


5 steps to setting forecasting performance objectives (Part 2)

And now for the five steps:

1. Ignore industry benchmarks, past performance, arbitrary objectives, and what management "needs" your accuracy to be.

Published benchmarks of industry forecasting performance are not relevant. See the prior post "The perils of forecasting benchmarks" for an explanation.

Previous forecasting performance may be interesting to know, but it is not relevant to setting next year's objectives. We have no guarantee that next year's data will be equally forecastable. For example, what if a retailer switches a product from everyday low pricing (which generated stable demand) to high-low pricing (where alternating on- and off-promotion periods will generate highly volatile demand)? You cannot expect to forecast the volatile demand as accurately as the stable demand.

And of course, arbitrary objectives (like "All MAPEs < 20%") or what management "feels it needs" to run a profitable business, are inappropriate.

2. Consider forecastability...but realize you don't know what it will be next year.

Forecast accuracy objectives should be set based on the "forecastability" of what you are trying to forecast. If something has smooth and stable behavior, then we ought to be able to forecast it quite accurately. If it has wild, volatile, erratic behavior, then we can't have such lofty accuracy expectations.

While it is easy to look back on history and see which patterns were more or less forecastable, we don't have that knowledge of the future. We don't know, in advance, whether product X or product Y will prove to be more forecastable, so we can't set a specific accuracy target for them.

3. Do no worse than the naïve model.

Every forecaster should be required to take the oath, "First, do no harm." Doing harm is doing something that makes the results worse than doing nothing. And in forecasting, doing nothing is utilizing the naïve model (i.e., random walk, aka the "no change" model), where your forecast of the future is your most recent "actual" value. (So if you sold 50 last week, your forecast for future weeks is 50. If you actually sell 60 this week, your forecast for future weeks becomes 60. Etc.)

You don't need fancy systems or people or processes to generate a naïve forecast -- it is essentially free. So the most basic (and most pathetic) minimum performance requirement for any forecaster is to do no worse than the naïve forecast.

4. Irritate management by not committing to specific numerical forecast accuracy objectives.

It is generally agreed that a forecasting process should do no worse than the naïve model. Yet in real life, perhaps 50% of business forecasts fail to achieve this embarrassingly low threshold. (See Steve Morlidge's recent articles in Foresight, which have been covered in previous BFD blog posts). Since we do not yet know how well the naïve model will forecast in 2015, we cannot set a specific numerical accuracy objective. So the 2015 objective can only be "Do no worse than the naïve model."

If you are a forecaster, it can be reckless and career threatening to commit to a more specific objective.

5. Track performance over time.

Once we are into 2015 and the "actuals" start rolling in each period, we can compare our forecasting performance to the performance of the naïve model. Of course, you cannot jump to any conclusions with just a few periods of data, but over time you may be able to discern whether you or the naïve model is performing better.

Always start your analysis with the null hypothesis, H0: there is no difference in performance. Until there is sufficient data to reject H0, you cannot claim to be doing better (or worse) than the naïve model.
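Here is a minimal sketch of such a tracking comparison, with hypothetical numbers: construct the naïve forecasts, compare absolute errors period by period, and apply a simple paired t-test for H0 (with this few periods, treat the result as indicative only).

```python
import numpy as np
from scipy import stats

# Hypothetical 2015 actuals, your forecasts, and naive (prior-period) forecasts.
actual = np.array([100, 120,  90, 110, 130, 105, 115, 125])
yours  = np.array([ 98, 118, 100, 108, 124, 110, 112, 128])
naive  = np.r_[102, actual[:-1]]   # 102 = assumed final 2014 actual

# Paired comparison of absolute errors, period by period.
diff = np.abs(actual - naive) - np.abs(actual - yours)  # >0 where you beat naive

# H0: no difference in performance (small-sample caveat applies).
t, p = stats.ttest_1samp(diff, 0.0)
print(f"mean improvement vs. naive: {diff.mean():.1f} units, p-value = {p:.3f}")
# Until p is small enough to reject H0, don't claim to beat the naive model.
```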

Promotional Discount for APICS2014

If you'd like to hear more about setting performance objectives, and talk in person about your issues, please join me at APICS2014 in New Orleans (October 19-22). As a conference speaker, I've been authorized to provide an RSVP code NOLAF&F for you to receive a $100 discount off your registration at www.apicsconference.org.

My presentation "What Management Must Know About Forecasting" is Sunday October 19, 8:00-9:15 am -- just in time for you to be staggering in from Bourbon St.


5 steps to setting forecasting performance objectives (Part 1)

It is after Labor Day in the US, meaning we must no longer wear white shoes, skirts, jackets, or trousers. But even if you are now going sans-culotte, it is time to begin thinking about organizational performance objectives for 2015.

Setting forecasting performance objectives is one way for management to shine...or to demonstrate an abysmal lack of understanding of the forecasting problem. Inappropriate performance objectives can provide undue rewards (if they are too easy to achieve), or can serve to demoralize employees and encourage them to cheat (when they are too difficult or impossible). For example:

Suppose you have the peculiar job of forecasting Heads or Tails in the daily toss of a fair coin. While you sometimes get on a hot streak and forecast correctly for a few days in a row, you also hit cold streaks, where you are wrong on several consecutive days. But overall, over the course of a long career, you forecast correctly just about 50% of the time.

If your manager had been satisfied with 40% forecast accuracy, then you would have enjoyed many years of excellent bonuses for doing nothing. Because of the nature of the process -- the tossing of a fair coin -- it took no skill to achieve 50% accuracy. (By one definition, if doing something requires "skill," then you can purposely do poorly at it. Since you could not purposely call the toss of a fair coin correctly only 40% of the time, your performance is due not to skill but to luck. See Michael J. Mauboussin's The Success Equation for a more thorough discussion of skill vs. luck.)

If you get a new manager who sets your goal at 60% accuracy, then you either need to find a new job or figure out how to cheat. Because again, by the nature of the process of tossing a fair coin, your long term forecasting performance can be nothing other than 50%. Achieving 60% accuracy is impossible.
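A quick simulation makes the point (a hypothetical sketch, not from any cited study): against a fair coin, any calling strategy converges to about 50% accuracy over a long career.

```python
import numpy as np

rng = np.random.default_rng(0)

# A long "career" of calling fair coin tosses: 0 = Heads, 1 = Tails.
calls  = rng.integers(0, 2, size=10_000)  # the forecaster's calls
tosses = rng.integers(0, 2, size=10_000)  # the actual tosses
accuracy = (calls == tosses).mean()
print(f"career accuracy: {accuracy:.1%}")  # ~50%; hot and cold streaks wash out
```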

So just how do you set appropriate objectives for forecasting? We'll learn the five steps in Part 2.

Reminder: Foresight Practitioner Conference (Oct. 8-9, Columbus, OH)

This year's topic is "From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain" and features an agenda of experienced practitioners and noted authors. The conference provides a unique vendor-free experience, so you don't have to worry about me (or similar hucksters) being there harassing you from an exhibitor booth.

Register between September 2-9 with code 2014Foresight and receive a $200 discount.

Hosts Foresight and Ohio State University provide an exceptional 1.5-day professional-development opportunity.

See the program and speakers at http://forecasters.org/foresight/sop-2014-conference/.
