Significant Digits in Business Forecasting

My favorite dog trick is the group hug. This is achieved by taking a dog's bad habit (rising up on strangers who don't really want a 70-pound dog rising up on them) and "flipping it" into something cute and adorable. It's all a matter of controlling perception, and controlling perception is something forecasters excel at through the abuse of significant digits.

Do You Really Need All Those Digits?

In "The Forecasting Dictionary," Armstrong1 cites a 1990 study by Teigen2 showing that more precise reporting can make a forecast more acceptable, unless such precision seems unreasonable. Thus, management may have more faith in my competence (and the accuracy of my forecasts) if I project revenues of $147,000, $263,000, and $72,500 for my three products (rather than $100,000, $300,000, and $70,000). However, they'll likely see through my ruse if I forecast $147,362.27, $262,951.85, and $72,515.03. (Not even the most naïve executive believes a forecast can be that close to being right!)

So how many significant digits should we use in reporting our forecasts? More digits (as long as there aren't too many!) can give a false sense of precision, implying we have more confidence in a forecast when such confidence isn't really merited. Fewer digits are just another way of saying we don't have much confidence at all in our forecast, implying a wide prediction interval in which the actual value will fall.

Armstrong (p. 810) suggests "A good rule of thumb is to use three significant digits unless the measures do have greater precision and the added precision is needed by the decision maker." I suspect there are very few circumstances in which more than three digits are needed. You are probably safe to configure your forecasting software to show only three significant digits.

Think of it this way: Suppose you can create perfect forecasts (100% accurate), but are limited to just three significant digits. What is the worst case forecast error?

If the perfect forecast is 1005, but you are limited to three significant digits, then the forecast would be 1010 or 1000 (depending on whether you round up or down). When the actual turns out to be 1005 (as predicted with the perfect forecast), then your forecast error is 0.5%.

If the perfect forecast is 9995, then the three significant digit forecast is 10,000 or 9990 (depending on whether you round up or down). When the actual turns out to be 9995 (as we predicted with the perfect forecast), your forecast error is 0.05%.

Limiting the forecast to three significant digits creates at most a 0.5% forecast error (and often considerably less than that). What forecaster wouldn't sell his soul for a 0.5% error? Given that typical business forecast errors are in the 20-50% range, another half a percent (worst case!) is nothing.
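To see the arithmetic, here is a minimal sketch (in Python, not part of the original post) of rounding to three significant digits and checking the resulting error:

```python
import math

def round_sig(x, digits=3):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0
    exponent = math.floor(math.log10(abs(x)))
    factor = 10 ** (exponent - digits + 1)
    return round(x / factor) * factor

# The $147,362.27 forecast becomes $147,000 -- and the error
# introduced by rounding alone stays at or below 0.5%.
rounded = round_sig(147362.27)                  # 147000
error = abs(rounded - 147362.27) / 147362.27    # about 0.25%
```

Note that Python's built-in `round` uses round-half-to-even, so 1005 rounds to 1000 rather than 1010; either way the rounding error is within the 0.5% worst case.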

To conclude: In the continuing pursuit to streamline the forecasting process, and make it appear less ridiculous, get rid of all those extraneous decimal points and digits, and limit your forecasts to just three significant digits.

----------------

1In Armstrong, J.S. (2001), Principles of Forecasting. Kluwer Academic Publishers, pp. 761-824.

2Teigen, K.H. (1990), "To be convincing or to be right: A question of preciseness," in K.J. Gilhooly, M.T.G. Keane, R.H. Logie & G. Erdos (eds.), Lines of Thinking. Chichester: John Wiley, pp. 299-313.

 

 

 


5 steps to setting forecasting performance objectives (Part 2)

And now for the five steps:

1. Ignore industry benchmarks, past performance, arbitrary objectives, and what management "needs" your accuracy to be.

Published benchmarks of industry forecasting performance are not relevant. See the prior post "The perils of forecasting benchmarks" for an explanation.

Previous forecasting performance may be interesting to know, but it is not relevant to setting next year's objectives. We have no guarantee that next year's data will be equally forecastable. For example, what if a retailer switches a product from everyday low pricing (which generated stable demand) to high-low pricing (where alternating on- and off-promotion periods will generate highly volatile demand)? You cannot expect to forecast the volatile demand as accurately as the stable demand.

And of course, arbitrary objectives (like "All MAPEs < 20%") or what management "feels it needs" to run a profitable business, are inappropriate.

2. Consider forecastability...but realize you don't know what it will be next year.

Forecast accuracy objectives should be set based on the "forecastability" of what you are trying to forecast. If something has smooth and stable behavior, then we ought to be able to forecast it quite accurately. If it has wild, volatile, erratic behavior, then we can't have such lofty accuracy expectations.

While it is easy to look back on history and see which patterns were more or less forecastable, we don't have that knowledge of the future. We don't know, in advance, whether product X or product Y will prove to be more forecastable, so we can't set a specific accuracy target for them.

3. Do no worse than the naïve model.

Every forecaster should be required to take the oath, "First, do no harm." Doing harm is doing something that makes the results worse than doing nothing. And in forecasting, doing nothing is utilizing the naïve model (i.e., the random walk, aka "no change" model), where your forecast of the future is your most recent "actual" value. (So if you sold 50 last week, your forecast for future weeks is 50. If you actually sell 60 this week, your forecast for future weeks becomes 60. Etc.)

You don't need fancy systems or people or processes to generate a naïve forecast -- it is essentially free. So the most basic (and most pathetic) minimum performance requirement for any forecaster is to do no worse than the naïve forecast.
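As a sketch of just how free it is, here is the naïve model in Python (the sales numbers are hypothetical):

```python
def naive_forecast(history):
    """Naïve (random walk) model: the forecast for every future
    period is simply the most recent actual."""
    return history[-1]

weekly_sales = [50, 50, 60]               # hypothetical actuals to date
forecast = naive_forecast(weekly_sales)   # 60, for every future week
```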

4. Irritate management by not committing to specific numerical forecast accuracy objectives.

It is generally agreed that a forecasting process should do no worse than the naïve model. Yet in real life, perhaps 50% of business forecasts fail to achieve this embarrassingly low threshold. (See Steve Morlidge's recent articles in Foresight, which have been covered in previous BFD blog posts). Since we do not yet know how well the naïve model will forecast in 2015, we cannot set a specific numerical accuracy objective. So the 2015 objective can only be "Do no worse than the naïve model."

If you are a forecaster, it can be reckless and career threatening to commit to a more specific objective.

5. Track performance over time.

Once we are into 2015 and the "actuals" start rolling in each period, we can compare our forecasting performance to the performance of the naïve model. Of course you cannot jump to any conclusions with just a few periods of data, but over time you may be able to discern whether you or the naïve model is performing better.

Always start your analysis with the null hypothesis, H0: There is no difference in performance. Until there is sufficient data to reject H0, you cannot claim to be doing better (or worse) than the naïve model.
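A minimal sketch (Python, with made-up numbers) of the period-by-period comparison:

```python
def mae(actuals, forecasts):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_forecasts(actuals, prior):
    """The naïve forecast for each period is the previous period's actual."""
    return [prior] + list(actuals[:-1])

actuals  = [100, 120, 90, 110, 105]    # hypothetical actuals as they roll in
your_fc  = [105, 115, 100, 100, 110]   # your process's forecasts
naive_fc = naive_forecasts(actuals, prior=95)

your_mae  = mae(actuals, your_fc)      # 7.0
naive_mae = mae(actuals, naive_fc)     # 16.0
```

With only five periods, of course, neither number tells you anything conclusive -- which is exactly what starting from H0 guards against.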

Promotional Discount for APICS2014

If you'd like to hear more about setting performance objectives, and talk in person about your issues, please join me at APICS2014 in New Orleans (October 19-22). As a conference speaker, I've been authorized to provide an RSVP code NOLAF&F for you to receive a $100 discount off your registration at www.apicsconference.org.

My presentation "What Management Must Know About Forecasting" is Sunday October 19, 8:00-9:15 am -- just in time for you to be staggering in from Bourbon St.

 


5 steps to setting forecasting performance objectives (Part 1)

It is after Labor Day in the US, meaning we must no longer wear white shoes, skirts, jackets, or trousers. But even if you are now going sans-culotte, it is time to begin thinking about organizational performance objectives for 2015.

Setting forecasting performance objectives is one way for management to shine...or to demonstrate an abysmal lack of understanding of the forecasting problem. Inappropriate performance objectives can provide undue rewards (if they are too easy to achieve), or can serve to demoralize employees and encourage them to cheat (when they are too difficult or impossible). For example:

Suppose you have the peculiar job of forecasting Heads or Tails in the daily toss of a fair coin. While you sometimes get on a hot streak and forecast correctly for a few days in a row, you also hit cold streaks, where you are wrong on several consecutive days. But overall, over the course of a long career, you forecast correctly just about 50% of the time.

If your manager had been satisfied with 40% forecast accuracy, then you would have enjoyed many years of excellent bonuses for doing nothing. Because of the nature of the process -- the tossing of a fair coin -- it took no skill to achieve 50% accuracy. (By one definition, if doing something requires "skill" then you can purposely do poorly at it. Since you could not purposely call the tossing of a fair coin only 40% of the time, performance is not due to skill but to luck. See Michael J. Mauboussin's The Success Equation for more thorough discussion of skill vs. luck.)

If you get a new manager who sets your goal at 60% accuracy, then you either need to find a new job or figure out how to cheat. Because again, by the nature of the process of tossing a fair coin, your long term forecasting performance can be nothing other than 50%. Achieving 60% accuracy is impossible.
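A quick simulation (Python; the coin and the calling strategy are hypothetical) makes the point:

```python
import random

random.seed(42)
tosses = 100_000

# Any fixed calling strategy -- here, always calling "Heads" -- converges
# to 50% accuracy on a fair coin, no matter how skilled the caller feels.
correct = sum(random.random() < 0.5 for _ in range(tosses))
accuracy = correct / tosses   # hovers around 0.50; 0.60 is unreachable
```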

So just how do you set appropriate objectives for forecasting? We'll learn the five steps in Part 2.

Reminder: Foresight Practitioner Conference (Oct. 8-9, Columbus, OH)

This year's topic is "From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain" and features an agenda of experienced practitioners and noted authors. The conference provides a unique vendor-free experience, so you don't have to worry about me (or similar hucksters) being there harassing you from an exhibitor booth.

Register between September 2-9 with code 2014Foresight and receive a $200 discount.

Hosts Foresight and Ohio State University provide an exceptional 1.5-day professional-development opportunity.

See the program and speakers at http://forecasters.org/foresight/sop-2014-conference/.


 


Reminder: September 30 deadline for IIF/SAS Research Grants

I wanted to pass along this reminder from Pam Stroud at the International Institute of Forecasters:

Grant to Promote Research on Forecasting


For the twelfth year, the IIF, in collaboration with SAS®, is proud to announce financial support for research on how to improve forecasting methods and business forecasting practice. This year's award will be two $5,000 grants. The application deadline for the 2014-2015 grant year is September 30, 2014. For more information visit the IIF website, http://forecasters.org/activities/funding-awards/grants-and-research-awards/.

Industry forecasting practitioners should also consider applying for these grants. As a practitioner you have access to a wealth of your own company data -- the kind of data that academics have difficulty getting access to. Analyses of volatility, FVA, and the real-life performance of particular models, forecasting systems, and forecasting processes would all be of interest.


Forecasting and supply chain blogs

In the summer heat, when The BFD alone isn't quite quenching your thirst for forecasting know-how, here are several other sources:

 


CatchBlog -- by Steve Morlidge of CatchBull

From his 2010 book Future Ready (co-authored with Steve Player) to his recent four-part series in Foresight on the "avoidability" of forecast error, Steve is delivering cutting-edge content in the area of forecast performance management. He also contributed a Foresight/SAS Webinar available for on-demand review.


The Forecast Pro -- by Eric Stellwagen of Business Forecast Systems

Eric delivers great practical tutorials on various forecasting topics through his blog, and also in conference presentations and articles (including two chapters in the Foresight Forecasting Methods Tutorials compilation).

IBF Blog -- various industry contributors

Good source for previews to upcoming Institute of Business Forecasting events, contributed by conference speakers. Also event recaps by conference attendees, and previews of the Journal of Business Forecasting.


SCM Focus Blogs -- by Shaun Snapp of SCM Focus

Shaun is a prolific writer of blogs (he maintains several on his website) and books (most recently Rethinking Enterprise Software Risk), and provides a sharp critical eye on IT, ERP, supply chain, and forecasting issues.


Supply Chain Shaman -- by Lora Cecere of Supply Chain Insights

Lora is a longtime industry analyst and, like Shaun, a prolific writer covering a wide range of topics. Her recent ebook The Supply Chain Shaman's Journal: A Focused Look at Demand includes some nice mention of FVA.

 

 

 


IIF/SAS grants to support research on forecasting

Again this year (for the 12th time), SAS Research & Development has funded two $5,000 research grants, to be awarded by the International Institute of Forecasters.

  • Criteria for award of the grant will include likely impact on forecasting methods and business applications.
  • Consideration will be given to new researchers in the field and whether supplementary funding is possible.
  • The application must include: Project description, letter of support, C.V., and budget for the project.

Applications should be submitted to the IIF office by September 30, 2014 in electronic format to:

Pamela Stroud, Business Director, forecasters@forecasters.org

For more information: forecasters.org/grants-and-research-awards

See Pamela's IIF blog for information on 2013 grant winners Jeffrey Stonebraker (North Carolina State University, USA) and Yongchen (Herbert) Zhao (University at Albany, USA). The more information link also provides details on past grant recipients, including such forecasting royalty as Robert Fildes, Dave Dickey, Sven Crone, Paul Goodwin (who delivered the inaugural Foresight/SAS Webinar last year), and Stavros Asimakopoulos, who is delivering the July 24 (10am ET) Foresight/SAS Webinar on the topic of forecasting with mobile devices.


In addition to his recent article in Foresight, Stavros has also posted "Forecasting in the Pocket" on the SAS Insights Center.


Foresight/SAS webinar: forecasting on mobile devices


On July 24, 10am ET, Stavros Asimakopoulos delivers the next installment of the Foresight/SAS Webinar Series, "Forecasting 'In the Pocket': Mobile Devices Can Improve Collaboration." Based on his article (co-authored with George Boretos and Constantinos Mourlas) in the Winter 2014 issue of Foresight, Stavros will discuss how smartphones, tablet computers and other mobile devices mean new opportunities for communication and collaboration on business forecasts.

Stavros currently serves as Foresight Editor for Forecasting Support Systems, and Lecturer in User Experience (UX) at the University of Athens, Greece. Formerly a Director of UX at Experience Dynamics - USA, he has over 10 years of experience in user research, forecasting software-design improvements and UX strategy in business forecasting systems. Stavros is an active member of the British Human-Computer Interaction Group and regularly presents his work at the International Symposium on Forecasting. Earlier this week at ISF in Rotterdam, Stavros moderated a panel on forecasting user experience that included Ryan Chipley, Sr. User Experience Designer at SAS.

There is a preview of the presentation available on YouTube, and webinar registration is free.

 


Forecasting across a time hierarchy with temporal reconciliation

The BFD blog probably won't make you the best forecaster you can be. There are plenty of books, articles, and world-class forecasting researchers (many of them meeting next week in Rotterdam at the International Symposium on Forecasting) that can teach you the most sophisticated techniques to squeeze every last percent of accuracy out of your forecasts. Think of these as your offense in the game of forecasting.

The purpose of The BFD, in this thankless game, is to play defense. To identify the stupid stuff, the bad practices, the things we do that sound like good ideas -- but may just make the forecast worse. In short, to protect you from becoming the worst forecaster you can be.

But sometimes the best defense is a good offense, so let's examine a method known as "temporal reconciliation" that has been getting some recent attention.

Hierarchical Forecasting

Every business forecaster is familiar with forecasting hierarchies. For example, for products:

ITEM ==> PRODUCT GROUP ==> BRAND ==> TOTAL

For manufacturing and distribution:

CUSTOMER ==> DISTRIBUTION CENTER ==> MANUFACTURER ==> TOTAL

For sales management:

CUSTOMER ==> TERRITORY ==> SALES REP ==> SALES MANAGER ==> TOTAL

Some forecasting software packages maintain the hierarchies separately, while others (like SAS Forecast Server), combine the product and geographical / organizational levels into a single hierarchy, such as:

ITEM ==> PRODUCT GROUP ==> SALES REGION ==> DISTRIBUTION CENTER ==> TOTAL

Forecasts at the most granular level of detail (e.g., ITEM) can then be aggregated to any higher level. SAS Forecast Server automatically builds customized models and generates forecasts for each node, at each level of the hierarchy. Using "hierarchical reconciliation" you choose the forecasts at one level and have them cascade down and up the hierarchy, apportioning the numbers so that the whole hierarchy adds up properly.

The value of reconciliation is that it allows you to take advantage of information at each level. For example, there may be consistent seasonal patterns that are observable at higher levels but difficult to discern at the most granular levels, where the data is very noisy. You'll often find that models at the most granular level are flat lines -- simply forecasting the average -- since there is too much randomness to detect a seasonal signal. But by applying a seasonal model at a higher level, the seasonality can be pushed down to the granular level through reconciliation. This can give you a better forecast at the granular level.
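A minimal sketch of the top-down step (Python; the item names and historical proportions are invented for illustration):

```python
def top_down(parent_forecast, child_history):
    """Apportion a higher-level forecast to child nodes in proportion
    to each child's share of historical demand."""
    total = sum(child_history.values())
    return {child: parent_forecast * h / total
            for child, h in child_history.items()}

history = {"item_a": 300, "item_b": 100}   # hypothetical item-level history
items = top_down(1000, history)            # {'item_a': 750.0, 'item_b': 250.0}
```

Applied period by period, the seasonal shape captured by the parent-level model is inherited by the children, and the hierarchy adds up by construction.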

Forecasting Across a Time Hierarchy

"Temporal reconciliation" is a less familiar approach, utilizing forecasts created in different time buckets. It has been getting a lot of attention lately, with two articles in the forthcoming issue of Foresight. Per Editor Len Tashman's preview:

The two articles in this section address temporal aggregation. In his own piece, Forecasting by Temporal Aggregation, Aris provides a primer on the key issues in deciding which data frequencies to forecast and how to aggregate/disaggregate from these to meet organizational requirements. Then, in Improving Forecasting via Multiple Temporal Aggregation, Fotios Petropoulos and Nikolaos Kourentzes offer a provocative new way to achieve reconciliation of daily through yearly forecasts while increasing the accuracy with which each frequency is forecast.

Petropoulos and Kourentzes provide a summary version in the Lancaster Centre for Forecasting's latest newsletter, with the assertion, "The key idea is that by combining various views/aggregation levels of the series we augment different features of it, helping forecasting models to better capture its structure. The same simple trick can be used to reduce or even remove intermittency in slow moving items making them easier to forecast."

If you are ready to try this, note that SAS Forecast Server has included temporal reconciliation since version 4.1 (released in July 2011). As explained in The BFD at its release:

Temporal Reconciliation: A powerful new tool for combining and reconciling forecasts generated at different time frequencies (e.g. hourly, daily, weekly).  Consider demand forecasting in the electric utility industry, where consumption varies by hour within the day (less demand in the middle of the night), by day within the week (business days compared to weekends), and by week within the year (higher peak demand in summer to drive air conditioners).  With the SAS Forecast Server Procedures engine inside SAS Forecast Server 4.1, you can model the behavior at each time frequency, and then combine the results to incorporate the various seasonal cycles.
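As a toy illustration of the idea (Python; the demand numbers are invented): model the level at the smoother weekly frequency, and push the within-week shape down from the daily history:

```python
daily_history = [10, 12, 9, 11, 13, 30, 35] * 4   # four weeks, weekend peaks

# Aggregate up: weekly totals are smoother and easier to model.
weekly = [sum(daily_history[i:i + 7]) for i in range(0, len(daily_history), 7)]

# Average day-of-week profile from the daily history.
profile = [sum(daily_history[d::7]) / sum(daily_history) for d in range(7)]

# Naïve weekly forecast (for illustration only), disaggregated by profile.
next_week_total = weekly[-1]
daily_forecast = [next_week_total * p for p in profile]
```

The daily forecast now carries the day-of-week seasonality while summing back exactly to the weekly total -- the two frequencies are reconciled.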

 


Weather forecasts: deterministic, probabilistic or both?

In 1965's Subterranean Homesick Blues, Bob Dylan taught us:

You don't need a weatherman / To know which way the wind blows

In 1972's You Don't Mess Around with Jim, Jim Croce taught us:

You don't spit into the wind

By combining these two teachings, one can logically conclude that:

You don't need a weatherman to tell you where not to spit

But the direction of expectoration is the least of my current worries. What I really want to know is, WTH does it mean when the weatherman says there is a 70% chance of rain?

Greg Fishel, WRAL-TV Chief Meteorologist to the Rescue

Greg Fishel & Fan Club

I am pleased to announce that Greg Fishel, local celebrity and weather forecaster extraordinaire, will be speaking at the Analytics2014 conference at the Bellagio in Las Vegas (October 20-21). Greg (shown here with several groupies during a recent visit to the SAS campus) will discuss "Weather Forecasts: Deterministic, Probabilistic or Both?" Here is his abstract:

The use of probabilities in weather forecasts has always been problematic, in that there are as many interpretations of how to use probabilistic forecasts as there are interpreters. However, deterministic forecasts carry their own set of baggage, in that they often overpromise and underdeliver when it comes to forecast information critical for planning by various users. In recent years, an ensemble approach to weather forecasting has been adopted by many in the field of meteorology. This presentation will explore the various ways in which this ensemble technique is being used, and discuss the pros and cons of an across-the-board implementation of this forecast philosophy with the general public.

Please join us in Las Vegas, where I hope Greg will answer that other troubling weather question: can I drive a convertible fast enough in a rainstorm to not get wet? (Note: MythBusters determined this to be "Plausible but not recommended.")

 

 


Guest blogger: Len Tashman previews Summer 2014 issue of Foresight

 

Foresight editor Len Tashman

Our tradition from Foresight’s birth in 2005 has been to feature a particular topic of interest and value to practicing forecasters. These feature sections have covered a wide range of areas: the politics of forecasting, how and when to judgmentally adjust statistical forecasts, forecasting support systems, why we should (or shouldn’t) place our trust in the forecasts, and guiding principles for managing the forecasting process, to name a selection.

This 34th issue of Foresight presents the first of a two-part feature section on forecasting by aggregation, the use of product aggregates to generate forecasts and then reconcile the forecasts across the product hierarchy. Aggregation is commonly done for product and geographic hierarchies but less frequently implemented for temporal hierarchies, which depict each product’s history in data of different frequencies: daily, weekly, monthly, and so forth.

The entire section has been organized by Aris Syntetos, Foresight’s Supply Chain Forecasting Editor, who is interviewed as this issue's Forecaster in the Field.  In his introduction to the feature section, Aris writes that forecasting by aggregation can provide dramatic benefits to an organization, and that its merits need to be more fully recognized and supported by commercially available forecasting software.

The two articles in this section address temporal aggregation. In his own piece, Forecasting by Temporal Aggregation, Aris provides a primer on the key issues in deciding which data frequencies to forecast and how to aggregate/disaggregate from these to meet organizational requirements. Then, in Improving Forecasting via Multiple Temporal Aggregation, Fotios Petropoulos and Nikolaos Kourentzes offer a provocative new way to achieve reconciliation of daily through yearly forecasts while increasing the accuracy with which each frequency is forecast.

Also in this issue, we review two new books that will interest those who take the long-range view of forecasting, both its history and its profession: Fortune Tellers: The Story of America’s First Economic Forecasters by Walter A. Friedman, and In 100 Years: Leading Economists Predict the Future, edited by Ignacio Palacios-Huerta.

The first book, writes reviewer Ira Sohn, Foresight’s Editor for Long-Range Forecasting, provides a “historical overview of the pioneers of forecasting, of the economic environments in which they worked, and of the tool sets and methodologies they used to generate their forecasts.” These trailblazers include familiar names and some not so well known: Roger Babson, John Moody, Irving Fisher, C. J. Bullock, Warren Persons, Wesley Clair Mitchell, and, surprisingly, Herbert Hoover. Why these? According to Friedman, they were the first to envision the possibility that economic forecasting could be a field, or even a profession; that the systematic study of a vast range of statistical data could yield insights into future business conditions; and that a market existed in business and government for weekly economic forecasts.

For the second book, editor Palacios-Huerta invited some of the “best brains in economics” – three of them already awarded Nobel prizes – to speculate on the state of the world and material well-being in 2113. Here they address some big issues: how will population, climate, social and economic inequality, strife, work, and education change in the next 100 years, and what are our prospects for being better off then than we are now?

Our section on Forecasting Principles and Methods turns to Walt Disney Resorts’ revenue managers McKay Curtis and Frederick Zahrn for a primer on Forecasting for Revenue Management. The essential objective, they write, is to adjust prices and product/service availability to maximize firm revenue from a given set of resources. We see revenue management in operation most personally when we watch airline ticket-price movements and need to know hotel room availability. It is the forecasts that drive these systems, and the authors show how they are used in revenue management.

Our concluding article is the fourth and final piece in Steve Morlidge’s series of discussions on forecast quality, a term Steve defines as forecast accuracy in relation to the accuracy of naïve forecasts, which he measures by the Relative Absolute Error (RAE). The previous articles demonstrated realistic boundaries to the RAE: an RAE above 1.0 is a cause for change since the method employed is no more accurate than a naïve – no change from last period – forecast, while an RAE below 0.5 occurred very rarely and thus represented a practical lower limit to forecast error.
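Steve's metric is simple to compute. A sketch (Python, with hypothetical numbers):

```python
def rae(actuals, forecasts, naive_forecasts):
    """Relative Absolute Error: total forecast error divided by the
    error of the naïve (no-change) forecast over the same periods."""
    err = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    naive_err = sum(abs(a - n) for a, n in zip(actuals, naive_forecasts))
    return err / naive_err

actuals   = [100, 120, 90]
forecasts = [102, 115, 95]
naive     = [95, 100, 120]   # prior-period actuals

score = rae(actuals, forecasts, naive)   # below 1.0 beats the naïve model
```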

Steve now deals with the natural follow-up question of how we should be Using Relative Error Metrics to Improve Forecast Quality in the Supply Chain. What actions should the business take in response to particular values for the RAE? His protocols will help identify those items that form a “sweet spot” for efforts to upgrade forecasting performance.

Meet us in Columbus, Ohio in October

The lively learning environment, easy camaraderie among the presenters and delegates, and very practical program made last year's Foresight Practitioner Conference at  Ohio State University’s Fisher College of Business a great success. We're looking forward to this year's event, From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain. The FPC offers a unique blend of practitioner experience and scholarly research within a vendor-free environment — I hope we'll see you there!

Find more information on the program at www.forecasters.org/foresight/sop-2014-conference/. Registration is discounted $100 (to $1295) for Foresight subscribers and IIF members (use registration code 2014FORESIGHT when you register).

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.