Using information criteria to select a forecasting model (Part 2)

Let's continue now to Nikolaos Kourentzes' blog post on How to choose a forecast for your time series.

Using a Validation Sample

Nikos first discusses the fairly common approach of using a validation or "hold out" sample.

The idea is to build your model based on a subset of the historical data (the "fitting" set), and then test its forecasting performance over the historical data that has been held out (the "validation" set). For example, if you have four years of monthly sales data, you could build models using the oldest 36 months, and then test their performance over the most recent 12 months.

You might recognize this approach from the recent BFD blog Rob Hyndman on measuring forecast accuracy. Hyndman uses the terminology (which may be more familiar to data miners) of "training data" and "test data." He suggested that when there is enough history, about 20% of the observations (the most recent history) should be held out for the test data. The test data should be at least as large as the forecasting horizon (so hold out 12 months if you need to forecast one year into the future).

Hyndman uses this diagram to show the history divided into training and test data. The unknown future (that we are trying to forecast) is to the right of the arrow:

Training data and Test data

Nikos works through a good example of this approach, comparing an exponential smoothing model and an ARIMA model. Each model is built using only the "fitting" set (the oldest history), and generates forecasts for the time periods covered in the validation set.

How accurately the competing models forecast the validation set can help you decide which type of model is more appropriate for the time series. You could then use the "winning" model to forecast the unknown future periods. An obvious drawback is that your forecasting model has only used the older fitting data, essentially sacrificing the more recent data in the validation set.
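As a rough illustration of this hold-out comparison (outside of the SAS tools this blog usually describes), here is a minimal Python sketch using statsmodels. The simulated monthly series, the 36/12 split, and the model settings are all illustrative assumptions, not a recommendation:

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

# Simulated stand-in for 48 months of sales history
rng = np.random.default_rng(42)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
sales = pd.Series(100 + 0.5 * np.arange(48) + rng.normal(0, 5, 48), index=idx)

fit_set, valid_set = sales.iloc[:36], sales.iloc[36:]   # oldest 36 months vs. most recent 12

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

models = {
    "Exponential smoothing": ExponentialSmoothing(fit_set, trend="add").fit(),
    "ARIMA(1,1,1)": ARIMA(fit_set, order=(1, 1, 1)).fit(),
}
for name, fitted in models.items():
    fc = fitted.forecast(12)                  # forecasts over the validation window
    print(f"{name}: validation MAPE = {mape(valid_set, fc):.1f}%")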

An alternative, once you've used the above approach to determine which type of model is best, is to rebuild that type of model on the full history (fitting + validation sets).

Using Information Criteria

Information criteria (IC) provide an entirely different way of evaluating model performance. It is well recognized that more complex models can be constructed to fit the history better. In fact, it is always possible to create a model that fits a time series perfectly. But our job as forecasters isn't to fit models to history -- it is to generate reasonably good forecasts of the future.

As we saw in the previous BFD post, overly complex models may "overfit" the history, and actually generate very inappropriate forecasts.

Cubic model

An example of overfitting the model to history

Measures like Akaike's Information Criterion (AIC) help us avoid overfitting by balancing goodness of fit with model complexity (penalizing more complex models). Nikos provides a thorough example showing how the AIC works. As he points out, "The model with the smallest AIC will be the model that fits best to the data, with the least complexity and therefore less chance of overfitting."

Another benefit of the AIC is that it uses the full time series history: there is no need for separate fitting and validation sets. But a drawback is that the AIC cannot be used to compare models from different model families (so you could not do the exponential smoothing vs. ARIMA comparison shown above). There is plenty of literature on the AIC, so you can find more details before employing it.
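As a rough illustration (and only within a single model family, per the caveat above), here is a hedged Python sketch that ranks a few candidate ARIMA orders on the full history by their AIC. The simulated series and the candidate orders are my own assumptions:

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
sales = pd.Series(100 + 0.5 * np.arange(48) + rng.normal(0, 5, 48),
                  index=pd.date_range("2020-01-01", periods=48, freq="MS"))

# Fit several candidates from the same family to the FULL history and rank by AIC.
candidates = [(0, 1, 1), (1, 1, 1), (2, 1, 2)]
fits = {order: ARIMA(sales, order=order).fit() for order in candidates}
for order, res in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"ARIMA{order}: AIC = {res.aic:.1f}")   # smallest AIC = preferred candidate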

Combining Forecasts

Nikos ends his post with a great piece of advice on combining models. Instead of struggling to pick a single best model for a given time series, why not just take an average of several "appropriate" models, and use that as your forecast?

There is growing evidence that combining forecasts can be effective at reducing forecast errors, while being less sensitive to the limitations of a single model. SAS® Forecast Server is one of the few commercial packages that readily allows you to combine forecasts from multiple models.
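As a trivial sketch of the idea (not a demonstration of any particular software feature), an equal-weight combination is just the average of the candidate forecasts; the numbers below are made up:

import numpy as np

# Two hypothetical 3-month-ahead forecasts from different "appropriate" models
fc_smoothing = np.array([112.0, 113.5, 115.0])
fc_arima     = np.array([110.0, 111.0, 112.5])

combined = (fc_smoothing + fc_arima) / 2      # simple equal-weight average
print(combined)                               # [111.   112.25 113.75]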


Using information criteria to select a forecasting model (Part 1)

Photo of Nikolaos Kourentzes

Dr. Nikolaos Kourentzes, Forecasting Professor with great hair

Nikolaos Kourentzes is Associate Professor at Lancaster University, and a member of the Lancaster Centre for Forecasting. In addition to having a great head of hair for a forecasting professor, Nikos has a great head for explaining fundamental forecasting concepts.

In his recent blog post on How to choose a forecast for your time series, Nikos discusses the familiar validation (or "hold out") sample method, and the less familiar approach of using information criteria (IC). Using an IC helps you avoid the common problem of "overfitting" a model to the history. We can see what this means by a quick review of the time series forecasting problem.

The Time Series Forecasting Problem

A time series is a sequence of data points taken across equally spaced intervals. In business forecasting, we usually deal with things like weekly unit sales, or monthly revenue. Given the historical time series data, the forecaster's job is to predict the values for future data points. For example, you may have three years of weekly sales history for product XYZ, and need to forecast the next 52 weeks of sales for use in production, inventory, and revenue planning.

There is much discussion on the proper way to select a forecasting model. It may seem obvious to just choose a model that best fits the historical data. But is that such a smart thing?

Unfortunately, having a great fit to history is no guarantee that a model will be any good at forecasting the future. Let's look at a simple example to illustrate the point:

Suppose you have four weeks of sales: 5, 6, 4, and 7 units. What model should you choose to forecast the next several weeks? A simple way to start is with an average of the 4 historical data points:

Weekly mean model

Model 1: Weekly Mean

We see this model forecasts 5.5 units per week into the future, and there is an 18% error in the fit to history. Can we do better? Let's try a linear regression to capture any trend in the data:

Linear regression model

Model 2: Linear Regression

This model forecasts increasing sales, and the historical fit error has been reduced to 15%. But can we do better? Let's now try a quadratic model:

Quadratic model

Model 3: Quadratic

The quadratic model cuts the historical fit error in half to 8%, and it is also showing a big increase in future sales. But we can do even better with a cubic model:

Cubic model

Model 4: Cubic

It is always possible to find a model that fits your time series history perfectly. We have done so here with a cubic model. But even though the historical fit is perfect, is this an appropriate model for forecasting the future? It doesn't appear to be. This is a case of "overfitting" the model to the history.
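Here is a minimal numpy sketch of the whole progression, fitting polynomials of increasing degree to the four data points and then extrapolating them. The error measure here is MAPE, so the percentages may differ slightly from those quoted above:

import numpy as np

weeks = np.arange(1, 5)
sales = np.array([5.0, 6.0, 4.0, 7.0])     # the four weeks of history
future = np.arange(5, 9)                   # the next four weeks to forecast

for degree, label in [(0, "Mean"), (1, "Linear"), (2, "Quadratic"), (3, "Cubic")]:
    coeffs = np.polyfit(weeks, sales, degree)    # fit a polynomial of this degree
    fit_error = np.mean(np.abs((sales - np.polyval(coeffs, weeks)) / sales)) * 100
    forecast = np.round(np.polyval(coeffs, future), 1)
    print(f"{label:9s} fit error = {fit_error:5.1f}%  forecast for weeks 5-8 = {forecast}")

The cubic's fit error is (numerically) zero, yet its forecasts run away from the history almost immediately.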

In this example, only the first two models, the ones with the worst fit to history, appear to be reasonable choices for forecasting the near term future. While fit to history is a relevant consideration when selecting a forecasting model, it should not be the sole consideration.

In the next post we'll look at Nikos' discussion of validation (hold out) samples, and how the use of information criteria can help you avoid overfitting.


Extending SAS Forecast Server Client

SAS® Forecast Server was released in 2005, and remains our flagship forecasting offering. It provides large-scale automatic forecasting, creating an appropriate customized forecasting model for each point in an organization's forecasting hierarchy.

The original SAS® Forecast Studio GUI lets users point and click their way to loading data, defining the hierarchy, specifying the roles of variables, identifying events, automatically creating the models, and generating the forecasts. (This 5-minute SAS Forecast Server demonstration illustrates the process, including reviewing the generated forecasts, making manual overrides, and reconciling the adjusted forecasts.)

Since the initial release in 2005, two additional interfaces have been developed to provide more capabilities to forecasters:

SAS® Time Series Studio

Forecasting immediately brings to mind the development of complex models and the generation of forecast values, but an equally important part of the forecasting process is the analytic step that comes first: understanding the structure of your time series data. SAS Time Series Studio is a GUI for interactive exploration and analysis of large volumes of time series data prior to forecasting. (This Analytics2012 presentation illustrates the importance of understanding the structure of your time series data in generating forecasts.)

SAS Time Series Studio provides forecast analysts with tools for identifying data problems, including outlier detection and management, missing values, and date ID issues. In addition, basic and advanced time series characterization, segmentation of the data into subsets and structural manipulation of the collective time series (hierarchy exploration) all contribute to faster forecast implementation and better modeling due to increased understanding of the data.

SAS® Forecast Server Client

Forecast Server Client Workflow Diagram

Forecasters may need access to their system anywhere and anytime, and now they have it with SAS Forecast Server Client. This web interface, released last year, provides a standard forecasting workflow that addresses the needs of most users. It also integrates capabilities from the Forecast Studio and Time Series Studio GUIs.

As an example, SAS Forecast Server Client supports rules-based segmentation of time series to facilitate different modeling strategies per segment. And it provides enhanced tracking of forecasting performance. (Learn more in this time series segmentation white paper and video.)

One of the great benefits of SAS forecasting software is that it is just one part of the overall SAS system. SAS gives users a powerful programming language, and all sorts of tools for data management, visualization, analysis, reporting -- and pretty much anything else.

It is a common practice for SAS Forecast Server users to first set up their forecasting projects through the GUI. This generates SAS code that users can access, tweak, and schedule into a batch forecasting process. All items can be forecast automatically, with little interaction from the forecast analyst. But the analyst may want to put extra effort into particularly high value forecasts, to squeeze out every last percentage point of forecast accuracy. They can do this using a wealth of SAS procedures, and by writing their own SAS code.

Extending SAS® Forecast Server Client

Photo of Alex Chien

In the most recent technical paper from SAS forecasting R&D, Alex Chien (Director, Advanced Analytics R&D) has written on Extending SAS® Forecast Server Client. The paper shows how to write plug-ins that extend the capabilities of SAS Forecast Server, using Lua as the interface between SAS Forecast Server Client and the plug-ins, and it gives code examples for both a segmentation strategy plug-in and a modeling strategy plug-in.

You can think of a plug-in as a macro with a list of arguments that control how the macro operates on data or instructions. The new LUA procedure (available since the third maintenance release of SAS 9.4) provides a standard Lua interface between SAS Forecast Server and the plug-ins. (Find more information about programming in Lua at https://www.lua.org/pil/contents.html.)

This new paper will be of keen interest to SAS forecasting customers. It is full of code examples (in both Lua and SAS) that show, step by step, how to enhance a SAS Forecast Server implementation by creating your own plug-ins.


Practical advice for better business forecasting

SAS® Insights is a section of the sas.com website devoted to being "your top source for analytics news and views." It contains articles, interviews, research reports, and other content from both SAS and non-SAS contributors. In a new article posted this week, we added three short videos containing practical advice for better business forecasting.

Practical Advice for Better Business Forecasting

The three videos introduce articles from the book Business Forecasting: Practical Problems and Solutions:

Business Forecasting book cover

The Disturbing 52% discusses recent research by Steve Morlidge of CatchBull Ltd. In his sample of eight supply chain companies in the UK, he found that over half of their forecasts were less accurate than a naive "no change" model.

Forecasting Performance Benchmarks - Answers or Not? looks at Stephan Kolassa's critique of the validity and usefulness of forecast accuracy surveys and benchmarks. Kolassa concludes that such "external" benchmarking may be futile.

FVA: Get Your Reality Check Here shows how Forecast Value Added analysis can identify forecasting activities that waste resources by failing to improve the forecast. Using FVA, many companies have found steps in their forecasting process that just make the forecast worse!

Foresight Practitioner Conference -- for more practical advice

In conjunction with the International Institute of Forecasters and the Institute for Advanced Analytics at North Carolina State University, the 2016 Foresight Practitioner Conference will be held in Raleigh, NC (October 5-6, 2016) with the theme of:

Worst Practices in Forecasting: Today's Mistakes to Tomorrow's Breakthroughs

This is the first ever conference dedicated entirely to the exposition of bad forecasting practices (and the ways to remedy them). I am co-chairing the event along with Len Tashman, editor-in-chief of Foresight. Our "worst practices" theme reflects an essential principle:

The greatest leap forward for our forecasting functions lies not in squeezing an extra trickle of accuracy from our methods and procedures, but rather in recognizing and eliminating practices that do more harm than good.

As discussed so many times in this blog, we often shoot ourselves in the foot when it comes to forecasting. We spend vast amounts of time and money building elaborate systems and processes, while almost invariably failing to achieve the level of accuracy desired. Organizational politics and personal agendas contaminate what should be an objective, dispassionate, and largely automated process.

At this conference you'll learn what bad practices to look for at your organization, and how to address them. I'll deliver the introductory keynote: Worst Practices in Forecasting, and the rest of the speaker lineup includes:

  • Len Tashman - Foresight Founding Editor and Director of the Center for Business Forecasting: Forecast Accuracy Measurement: Pitfalls to Avoid, Practices to Adopt.
  • Paul Goodwin, coauthor of Decision Analysis for Management Judgment and Professor Emeritus of Management Science, University of Bath: Use and Abuse of Judgmental Overrides to Statistical Forecasts.
  • Chris Gray - coauthor of Sales & Operations Planning - Best Practices: Lessons Learned and principal of Partners for Excellence:  Worst Practices in S&OP and Demand Planning.
  • Steve Morlidge - coauthor of Future Ready: How to Master Business Forecasting and former Finance Director at Unilever: Forecasting Myths - and How They Can Damage Your Health.
  • Wallace DeMent - Demand Planning Manager at Pepsi Bottling, responsible for forecast, financial and data analysis: Avoiding Dangers in Sales Force Input to the Forecasts.
  • Anne Robinson - Executive Director, Supply Chain Strategy and Analytics, Verizon Wireless and 2014 President of the Institute for Operations Research and the Management Sciences (INFORMS): Forecasting and Inventory Optimization: A Perfect Match Except When Done Wrong.
  • Erin Marchant - Lead Data and Systems Management Analyst, Moen: Worst Practices in Forecasting Software Implementation.
  • Fotios Petropoulos - Assistant Professor, Cardiff University: The Bad and The Good in Software Practices for Judgmental Selection of Forecast Models.

Take advantage of the Early Bird registration and save $200 through June 30.


Rob Hyndman on Time-Series Cross-Validation

SAS Viya logo

Sometimes one's job gets in the way of one's blogging. My last three months have been occupied with the launch of SAS® Viya™, our next-generation high-performance analytics and visualization architecture.

Please take the time to find more information on the SAS Viya website, and apply for a free preview.

Rob Hyndman on Time-Series Cross-Validation

Back in March we looked at Rob Hyndman's article on "Measuring Forecast Accuracy" that appears in the new book Business Forecasting: Practical Problems and Solutions.

Hyndman discussed the use of training and test datasets to evaluate performance of a forecasting model, and we showed the method of time-series cross-validation for one-step ahead forecasts.

Training data and Test data

Time-series cross-validation is used when there isn't enough historical data to hold out a sufficient amount of test data. (We typically want about 20% of the history -- the most recent observations -- for the test data, and at least enough to cover the desired forecasting horizon.)

In real life, we often have to forecast more than just one step ahead. For example, if there is nothing we can do to impact supply or demand over the next two weeks, then the 3-week ahead forecast is of interest.

Hyndman shows how the time-series cross-validation procedure based on a rolling forecast origin can be modified to allow multistep errors to be used.

Suppose we have a total of T observations (e.g., 48 months of historical sales), and we need k observations to produce a reliable forecast. (In this case, k might be 36 months for our training dataset, with the most recent 12 months as our test dataset.) If we want models that produce good h-step-ahead forecasts, the procedure is:

  1. Select the observation at time k + h + i - 1 for the test set, and use the observations at times 1, 2, ..., k + i - 1 to estimate the forecasting model. Compute the h-step error on the forecast for time k + h + i - 1.
  2. Repeat the above step for i = 1, 2, ..., T - k - h + 1 where T is the total number of observations.
  3. Compute the forecast accuracy measures based on the errors obtained. When h = 1, this gives the same procedure as the one-step ahead forecast in the previous blog.
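Here is a hedged Python sketch of that rolling-origin procedure. A simulated series and a simple exponential-smoothing-with-trend model from statsmodels stand in for a real implementation (the SAS Forecast Server approach is covered in the posts linked below):

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
y = pd.Series(100 + 0.5 * np.arange(48) + rng.normal(0, 5, 48))   # T = 48 observations

T, k, h = len(y), 36, 3            # k obs to estimate a model; evaluate h-step forecasts
errors = []
for i in range(1, T - k - h + 2):                     # i = 1, ..., T - k - h + 1
    train = y.iloc[: k + i - 1]                       # times 1, ..., k + i - 1
    fitted = ExponentialSmoothing(train, trend="add").fit()
    fc = np.asarray(fitted.forecast(h))[-1]           # the h-step-ahead forecast value
    errors.append(y.iloc[k + h + i - 2] - fc)         # error at time k + h + i - 1 (0-based indexing)

print(f"{len(errors)} rolling {h}-step errors; MAE = {np.mean(np.abs(errors)):.2f}")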

Previously, my colleague Udo Sglavo showed how to implement Hyndman's method in SAS® Forecast Server in a two-part blog post (Part 1, Part 2).


Guest blogger: Len Tashman previews spring issue of Foresight

Editor Len Tashman's Preview of the Spring Issue of Foresight

Image of Len Tashman

Len Tashman, Editor-in-Chief

Misbehaving, the feature section of this 41st issue of Foresight, was prompted by the publication of Richard Thaler’s eye-opening book of the same title, a work that explains the often surprising gap between (a) the models we use and organizational policies derived from them, and (b) the outcomes we just didn’t expect. Models assume that economic actors (owner, managers, consumers) behave rationally, working purposefully toward a clearly defined goal. But actors misbehave, their actions sidetracked by seemingly irrelevant factors, and their self-serving interests can undermine organizational efforts.

Our examination of misbehaving begins with my review of Thaler’s book and description of the potential insights for the forecasting profession. The review is followed by some brief reflections from Foresight’s editors on the problems of misbehaving in their particular branches of our field and what we can hope to do about these misbehaviors.

  • Paul Goodwin, Hot New Research Editor, discusses Misbehaving Agents, those organization forecasters and planners who subvert organizational goals.
  • Roy Batchelor, Financial Forecasting Editor, examines Misbehavior in Forecasting Financial Markets.
  • John Mello, Supply Chain Forecasting Editor, describes policies for Eliminating Sales-Forecasting Misbehavior.
  • Lastly, Fotios Petropoulos, FSS Forecasting Editor, and colleague Kostas Nikolopoulos reveal Misbehaving, Misdesigning, and Miscommunicating in our forecasting support systems.

Although not usually described as misbehaving, overreliance on spreadsheets as forecasting support tools has been a continuing problem. Henry Canitz takes companies to task for use of primitive tools in Overcoming Barriers to Improving Forecasting Capabilities. And Nari Viswanathan comments on the limitations of traditional S&OP technology tools.

Continuing a series of articles on forecast-accuracy metrics, Steve Morlidge proposes several creative graphical and tabular displays for Using Error Analysis to Improve Forecast Performance.

Our Forecaster in the Field interview is with Mark Blessington, noted sales-forecasting author, whose article “Sales Quota Accuracy and Forecasting” appeared in our Winter 2016 issue.

Wrapping up the Spring 2016 issue is the first in a series of commentaries examining the gap between forecasting research and forecasting practice. In Forecasting: Academia vs. Business, Sujit Singh argues that while much research has been devoted to the statistical and behavioral basis of forecasting as well as to the application of big data, missing is research relevant to business performance. By measuring the benefits of good forecasting, this type of research can provide the most value to the business user.

Upcoming in the next issue of Foresight are articles on how to improve safety-stock calculations, the need for more structured communications in S&OP, and a feature section on the issue of the gap between forecasting research in academia and the forecasting research that business would like to see.

Foresight Practitioner Conference: Worst Practices in Forecasting

Registration has begun for the 2016 Foresight Practitioner Conference, Worst Practices in Forecasting: Today’s Mistakes to Tomorrow’s Breakthroughs, which will be held October 5-6 at North Carolina State University in Raleigh in partnership with their Institute for Advanced Analytics.

See the conference announcement – and take advantage of the very generous registration fees for Foresight readers who sign up early!


Rob Hyndman on measuring forecast accuracy

Business Forecasting book cover

The new book Business Forecasting: Practical Problems and Solutions contains a large section of recent articles on forecasting performance evaluation and reporting. Among the contributing authors is Rob Hyndman, Professor of Statistics at Monash University in Australia.

To anyone needing an introduction, Hyndman's credentials include:

  • Editor-in-Chief of the International Journal of Forecasting
  • Board of Directors of the International Institute of Forecasters
  • Co-author (with Makridakis and Wheelwright) of the classic textbook Forecasting: Methods and Applications (3rd edition)

Drawing from his online textbook Forecasting: Principles and Practice (coauthored with George Athanasopoulos), Hyndman explains the use of Training and Test sets for measuring forecast accuracy.

Rob Hyndman on Measuring Forecast Accuracy

Photo of Rob Hyndman

Rob Hyndman

A common "worst practice" is to select forecasting models based solely on their fit to the history that was used to construct them. As Hyndman points out, "A model that fits the data well does not necessarily forecast well."

Unscrupulous consultants or forecasting software vendors can wow impressionable customers with models that closely (or even perfectly) fit their history. Yet fit to history provides little indication of how well the model will actually forecast the future.

Hyndman's article discusses scale-dependent errors (e.g., MAE and RMSE), percentage errors (MAPE), and scaled errors (MASE), and the situations in which each is appropriate. Note that scaled errors are a relatively new type of forecasting performance metric, first proposed by Hyndman and Koehler in 2006. [Find more details in their article: Another look at measures of forecast accuracy. International Journal of Forecasting 22(4), 679-688.]
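For reference, here is a small Python sketch of these error measures. The MASE here uses the non-seasonal scaling from Hyndman and Koehler, and all of the numbers are made up:

import numpy as np

def mae(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def rmse(actual, forecast):
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    return 100 * np.mean(np.abs((actual - forecast) / actual))

def mase(actual, forecast, train):
    # Scale by the in-sample MAE of the one-step naive ("no change") forecast
    scale = np.mean(np.abs(np.diff(train)))
    return mae(actual, forecast) / scale

train    = np.array([100, 102, 101, 105, 107, 106.0])   # training history
actual   = np.array([108, 110, 109.0])                  # test-set actuals
forecast = np.array([107, 108, 111.0])                  # model forecasts for the test set

print(f"MAE  = {mae(actual, forecast):.2f}")
print(f"RMSE = {rmse(actual, forecast):.2f}")
print(f"MAPE = {mape(actual, forecast):.1f}%")
print(f"MASE = {mase(actual, forecast, train):.2f}")    # below 1 means better than in-sample naive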

To avoid the worst practice mentioned above, Hyndman suggests dividing history into "training data" (used to estimate the model) and "test data" (used to evaluate forecasts generated by the model). The test data is often referred to as a "hold out sample," and this is a well-recognized approach.

Training data and Test data

When there is sufficient history, about 20% of the observations (the most recent) should be "held out" to serve as test data. The test data should be at least as large as the maximum forecast horizon required. So hold out 12 months (or 52 weeks) if you have to forecast one year out.

Unfortunately, we often don't have enough historical data for the recommended amount of test data. So for shorter time series, Hyndman illustrates the method of time series cross-validation, in which a series of training and test sets are used.

Time Series Cross-Validation

This approach uses many different training sets, each one containing one more observation than the previous one. Suppose you want to evaluate the one-step ahead forecasts.

Evaluating one-step ahead forecasts

As we see in this diagram, each training set (black dots) contains one more observation than the previous one, and each test set (gray dots) contains the next observation. The light gray dots are ignored. Per Hyndman:

Suppose k observations are required to produce a reliable forecast. Then the process works as follows.

  1. Select the observation at time k + i for the test set, and use the observations at times 1, 2, ..., k + i - 1 to estimate the forecasting model. Compute the error on the forecast for time k + i.
  2. Repeat the above step for i = 1, 2, ..., T - k, where T is the total number of observations.
  3. Compute the forecast accuracy measures based on the errors obtained.

So if you had T = 48 months of historical data and used the first k = 36 as your training set, then you would have 12 one-step ahead forecasts over which to compute your model's accuracy.
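A quick Python sketch of this one-step procedure, with T = 48 and k = 36 as in the example (the simulated data and the smoothing model are stand-ins of my own, not part of Hyndman's article):

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
y = pd.Series(200 + 0.8 * np.arange(48) + rng.normal(0, 4, 48))   # T = 48 months

T, k = len(y), 36
errors = []
for i in range(1, T - k + 1):                    # i = 1, ..., T - k
    train = y.iloc[: k + i - 1]                  # times 1, ..., k + i - 1
    fc = np.asarray(ExponentialSmoothing(train, trend="add").fit().forecast(1))[0]
    errors.append(y.iloc[k + i - 1] - fc)        # one-step error at time k + i

print(f"{len(errors)} one-step errors (12 here); MAE = {np.mean(np.abs(errors)):.2f}")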

In the next post, we'll look at the more general (and more common) situation where you are interested in models that produce good h-step-ahead forecasts (for example, for 3 months ahead).


Model interpretability in forecasting

"The Role of Model Interpretability in Data Science" is a recent post on Medium.com by Carl Anderson, Director of Data Science at the fashion eyeware company Warby Parker. Anderson argues that data scientists should be willing to make small sacrifices in model quality in order to deliver a model that is easier to interpret and explain, and is therefore more acceptable to management.

Can we make this same argument in business forecasting?

What is Meant by Model Interpretability?

A model that is readily understood by humans is said to be interpretable.

An example is a forecast based on the 10-week moving average of sales. Management may not agree with the forecast that is produced, but at least everyone understands how it was calculated. And they may make forecast overrides based on their belief that next week's sales will be higher (or lower) than the recent 10-week average.

A trend line model is also easily understood by management. While the mathematical calculations are more complicated, everyone understands the basic concept that "the current growth (or decline) trend will continue."
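Both of these interpretable baselines take only a couple of lines of Python; the weekly numbers below are invented for illustration:

import numpy as np

weekly_sales = np.array([52, 48, 55, 60, 47, 53, 58, 49, 51, 57.0])   # last 10 weeks

ma_forecast = weekly_sales.mean()                 # 10-week moving average -> flat forecast
print(f"Moving-average forecast for next week: {ma_forecast:.1f} units")   # 53.0

weeks = np.arange(1, 11)
slope, intercept = np.polyfit(weeks, weekly_sales, 1)   # simple trend line
print(f"Trend-line forecast for week 11: {intercept + slope * 11:.1f} units")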

Anderson observes that an interpretable model is not necessarily less complex than an uninterpretable model.

He uses the example of a principal components model he created to predict new product demand. But principal components is not something readily explainable to his business audience. So he ended up creating a more complex model (i.e., having more variables) that had no additional predictive power, but could be understood by his users.

...the idea of my model is to serve as an additional voice to help them [demand planners] make their decisions. However, they need to trust and understand it, and therein lies the rub. They don’t know what principal component means. It is a very abstract concept. I can’t point to a pair of glasses and show them what it represents because it doesn’t exist like that. However, by restricting the model to actual physical features, features they know very well, they could indeed understand and trust the model. This final model had a very similar prediction error profile — i.e., the model was basically just as good — and it yielded some surprising insights for them.

Increasing model complexity (for no improvement in model performance) is antithetical to everything we are taught about doing science. Whether referred to as Occam's Razor, the Principle of Parsimony, or something else, there is always a strong preference for simpler models.

But Anderson makes a very good point -- a point particularly well taken for business forecasters. We tend to work in a highly politicized environment. Management already has an inclination to override our model-generated forecasts with whatever they please. If our numbers are coming out of an inscrutable black box, they may be even less inclined to trust our work.

Anderson concludes with reasons why a poorer/more complex, but interpretable, model may be favored:

  • Interpretable models can be understood by business decision makers, making them more likely to be trusted and used.
  • Interpretable models may yield insights.
  • As interpretable models build trust in the model builder, this may allow more sophisticated approaches in the future.
  • As long as the interpretable model performs similarly enough to the better (but uninterpretable) model, you aren't losing much.

While this may not be true in all areas of predictive analytics, a curious fact is that simpler models tend to do better at forecasting. And even a model as simple as the naïve "no change" model performed better than half of the forecasts in a Steve Morlidge study reported in Foresight.
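Morlidge's comparison boils down to the ratio of a forecast's error to the error of the naive "no change" forecast (a relative absolute error); here is a hedged sketch with made-up numbers:

import numpy as np

actuals   = np.array([120, 118, 125, 130, 128, 135.0])   # hypothetical weekly actuals
forecasts = np.array([115, 121, 120, 134, 131, 129.0])   # the forecasts produced for them

naive_fc  = actuals[:-1]                                  # "no change": carry forward last actual
mae_model = np.mean(np.abs(actuals[1:] - forecasts[1:]))
mae_naive = np.mean(np.abs(actuals[1:] - naive_fc))
rae = mae_model / mae_naive
print(f"Relative absolute error = {rae:.2f}  (values above 1 mean worse than naive)")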

So thankfully, interpretability and performance are not always at odds. But it remains a challenge to keep participants from overriding the forecast with their biases and personal agendas, and just making it worse.

Guest blogger: Len Tashman previews Winter issue of Foresight

Editor Len Tashman's preview of the Winter 2016 issue of Foresight

Image of Len Tashman

Len Tashman, Editor-in-Chief of Foresight

This 40th issue of Foresight begins with a review of the new book by Philip Tetlock and Dan Gardner with the enticing title Superforecasting: The Art and Science of Prediction. Reviewer Steve Morlidge explains that

…the “superforecasters” of the title are those individuals who consistently outperformed other members of Tetlock’s team, and the book sets out to answer the question, “What makes these people so effective as forecasters?”

Perhaps no issue has received more attention in the forecasting literature than that of the relative merits of simple vs. complex forecasting methods. Although the definitions of simple and complex have varied, many studies report evidence that simple methods are often as accurate as – and occasionally more accurate than – their more complex counterparts.

Now, the two articles in our section on Forecasting Principles and Methods help us to better understand why simplicity in method choice can be a virtue.

In our first piece, Bias-Variance Trade-offs in Demand Forecasting, Konstantinos Katsikopoulos and Aris Syntetos begin by illustrating how a forecast-accuracy metric can be decomposed into two distinct attributes: bias, or a tendency to systematically over- or under-forecast; and variance, the magnitude of fluctuation in the forecasts. They then illustrate that

…simple methods tend to have large bias but lower variance, while complex methods have the opposite tendency: small bias but large variance. So we might prefer a method with smaller variance even if it has larger bias: that is, we can reduce the (total error) by replacing an unbiased but high-variance forecast method with a biased but low-variance forecast method. More generally, we should seek the right amount of complexity.
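In symbols, their starting point is the standard bias-variance decomposition of mean squared error (my rendering, with $e$ denoting the forecast error):

\mathrm{MSE} \;=\; \mathbb{E}\!\left[e^{2}\right] \;=\; \bigl(\mathbb{E}[e]\bigr)^{2} + \mathrm{Var}(e) \;=\; \text{bias}^{2} + \text{variance}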

Stephan Kolassa, for his article in this section, not only endorses their argument but goes a step further to show why Sometimes It’s Better to Be Simple than Correct:

…correct models—those that include all the important demand-influencing factors—can yield bad forecasts. What’s surprising is that a correct model can yield systematically worse forecasts than a simpler, incorrect model!

The bottom line is that complexity carries dangers, and so it is particularly unwise to tweak out a small increase in model performance—a practice to which we often succumb.

Foresight has devoted much space in the past five years to descriptions and evaluations of efforts to promote Collaborative Forecasting and Planning, such as sales and operations planning and information sharing across supply-chain partners. But many firms report they’re not satisfied with the results, that the integration they seek across functional areas—forecasting, sales, marketing, operations, finance—has not occurred, and that the often expensive systems they’ve installed have not overcome the functional silos that impede the achievement of company-wide objectives. Dean Sorensen draws upon his decades of experience in advising firms on integrated planning to offer an explanation for this corporate dissatisfaction. He observes that

…as (organizational) complexity rises, capability gaps are exposed in processes that are supported by separate S&OP, financial planning, budgeting, and forecasting applications. What’s missing is a planning and forecasting process that breaks down functional silos by integrating strategic, financial, and operational processes, and extending beyond manufacturing to broader supply chain, selling, general, and administrative activities.

New, integrative technologies are emerging, and Dean describes how these technology innovations provide incremental capabilities that stand-alone S&OP and financial planning, budgeting, and forecasting applications do not.

Dean discusses his experience in integrative planning in our Forecaster in the Field interview.

In our section on Forecasting Practice, sales-forecasting specialist Mark Blessington challenges the conventional systems for setting sales quotas, which are based on annual business plans with sales targets for the corporation and its divisions and territories. In Sales Quota Accuracy and Forecasting, he reports evidence that

…quotas are better set on a quarterly rather than annual basis, and quarterly exponential smoothing methods yield far more accurate quotas than traditional quota-setting methods. However, firms must anticipate implementation barriers in converting from annual to quarterly quotas, as well as the possibility that sales representatives may try to game the system by delaying sales orders to maximize future bonus payouts.

Our Strategic Forecasting section addresses forces that transcend short-term forecasts to look beyond the current planning horizons. TechCast Global principals William Halal and Owen Davies present TechCast’s Top Ten Forecasts of technological innovations and social trends over the next 15 years and beyond.

They see several disruptive technological developments:

(1) Artificially intelligent machines will take over 30% of routine mental tasks. (2) Major nations will take firm steps to limit climate-change damage. (3) Intelligent cars will make up 15% of vehicles in less than 10 years. (4) The “Internet of Things” will expand rapidly to connect 30% of human artifacts soon after 2020.

Turn to page 50 of this issue for the rest of their top ten.


New book: Business Forecasting

Business Forecasting book cover

Announcing New Book: Business Forecasting

Just in time for the new year, Business Forecasting: Practical Problems and Solutions compiles the field's most important and thought provoking new literature into a single comprehensive reference for the business forecaster.

So says the marketing literature.

The real story? The book does pretty much that.

With my co-editors Len Tashman (editor-in-chief of Foresight: The International Journal of Applied Forecasting) and Udo Sglavo (Sr. Director of Predictive Modeling R&D at SAS), we've assembled 49 articles from many of our favorite and most influential authors of the last 15 years.

As we state in the opening commentary of chapter 1:

Challenges in business forecasting, such as increasing accuracy and reducing bias, are best met through effective management of the forecasting process. Effective management, we believe, requires an understanding of the realities, limitations, and principles fundamental to the process. When management lacks a grasp of basic concepts like randomness, variation, uncertainty, and forecastability, the organization is apt to squander time and resources on expensive and unsuccessful fixes. There are few endeavors where so much money has been spent, with so little payback.

Through the articles in this collection, and the accompanying commentary, we facilitate exploration of these basic concepts, and exploration of the realities, limitations, and principles of the business forecasting process. Throughout 2016 this blog will highlight many of the included articles.

The four main sections cover:

  • Fundamental Considerations in Business Forecasting
  • Methods of Statistical Forecasting
  • Forecasting Performance Evaluation and Reporting
  • Process and Politics of Business Forecasting

The book is now available from the usual suspects (which include anywhere that Pulitzer, if not Nobel, quality literature is sold).

The publisher has graciously provided a free sample chapter, so please take a look.

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.

    Mike is also the author of The Business Forecasting Deal, and co-editor of Business Forecasting: Practical Problems and Solutions. He also edits the Forecasting Practice section of Foresight: The International Journal of Applied Forecasting.