2016 SAS/IIF forecasting research grant

For the fourteenth year, the International Institute of Forecasters, in collaboration with SAS®, is proud to announce financial support for research on how to improve forecasting methods and business forecasting practice. The award for the 2016-2017 year will be two $5,000 grants, one in Business Applications and one in Methodology.

Criteria for the award of the grant will include likely impact on forecasting methods and business applications. Consideration will be given to new researchers in the field and whether supplementary funding is possible.

Applications must include:

  • Description of the project (max. 4 pages)
  • Letter of support from the home institution where the researcher is based.
  • Brief (max. 4 page) c.v.
  • Budget and work-plan for the project.

The deadline for applications is September 30, 2016. 

For a complete overview of the requirements for the award, see the full announcement on the IIF website.

All applications and inquiries should be sent to the IIF Business Director (pamstroud@forecasters.org).

The IIF-SAS grant was created in 2002 by the IIF, with financial support from the SAS Institute, to promote research on forecasting principles and practice. The fund provides US $10,000 per year, divided to support research in the two basic aspects of forecasting: the development of theoretical results and new methods, and practical applications with real-world comparisons.

Previous grant recipients and their research are listed on the IIF website.


The online SAS forecasting community

The online SAS Support Communities are a vibrant source of information and interaction for over 90,000 registered participants. Here you can ask (and answer) questions, grow (and share) your SAS expertise, and explore these collection points for other resources (like hot tips, articles, blogs, and events) relating to the community topic.

SAS Forecasting and Econometrics is one of over 30 such communities currently available on the SAS support pages.

This site should be visited and bookmarked by every user of SAS forecasting software. It contains answers on over 500 subjects, making it a great first stop when you have a coding or modeling question. (See my colleague Chris Hemedinger's informative blog about "How SAS Support Communities can expedite your tech support experience.")

Kinds of Forecasting and Econometrics Subjects

To give you a flavor for the subject matter in the community, here are some examples:

Most questions deal specifically with coding and modeling within SAS forecasting software. However, the community is also helpful for addressing broader questions on forecasting process, and for other forecasting-related announcements, such as:

Where to Begin

Lurking in your chosen community is perfectly acceptable. But I would encourage you to overcome any shyness, and to go ahead and post your questions (and answers).

Communities function best when there are lots of active participants, and the SAS communities are active. Chris Hemedinger cited recent statistics showing that 62% of questions get their first response within 60 minutes of posting! And 92% get a response within the first day.

Community moderators blast any unanswered questions to a corps of volunteer responders from SAS R&D, professional services, product management, and marketing. So there is a very good chance any questions you have will be answered promptly, and correctly, by an expert in the subject matter.


Guest blogger: Len Tashman previews summer issue of Foresight

Editor Len Tashman's Preview of the Summer Issue of Foresight

Clarity and effectiveness of communication are key to success – so says a recent survey by the National Association of Business Economists (NABE), which reported that industry leaders and hiring managers considered communication skills to be the single most important attribute for the promotion of business economists. While we are not aware of a similar survey among demand forecasters and planners, we do have ample reason to believe that the importance of communication skills cannot be exaggerated – this according to the perspectives of numerous Foresight authors from the world of industry and commerce.

Now, Niels van Hove, forecasting practitioner, consultant, and behavioral coach, argues persuasively for the establishment of an effective communications process as a prime element in the implementation of a firm’s strategy. In his article, An S&OP Communication Plan: The Final Step in Support of Company Strategy, Niels explains that

[t]o effectively support strategy execution, S&OP communication needs to do more than fix operational issues. Formal horizontal communication—information flow across functions at the same level in the business hierarchy—is no longer sufficient; S&OP has to support two-way vertical and informal communication in support of a strategy, not just to inform employees but to align, engage, energize, and refocus them.

Now a resident of Melbourne, Australia, Niels is the subject of our Forecaster in the Field interview on page 11.

Perhaps the greatest concern we have today about future employment prospects is over the impact of exponential advances in artificial intelligence and robotics. Ira Sohn, Foresight’s Editor for Strategic Forecasting, says Step Aside, Climate Change – Get Ready for Mass Unemployment. Ira surveys opinion on this issue with emphasis on two important new books published in 2015: Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford, and Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence by Jerry Kaplan. These strategic thinkers paint a troubling picture for job markets, and recommend policies for adaptation that fundamentally alter the prevailing ethic of “work for your pay.”

Bridging the Gap Between Academia and Business

The feature section of this 42nd issue of Foresight addresses the very apparent disconnect between (a) the needs of business forecasters and planners, and (b) the research undertaken and published by academics in the field. Our Spring 2016 issue concluded with an “editorial” by Sujit Singh, Forecasting: Academia vs. Business, which we reprint here for your convenience. Sujit’s piece has attracted considerable attention, and in this issue we now present commentaries from both sides of the aisle, which probe further into the basis of the disconnect and what can be done to help bridge the gap.

If there is one point of near universal agreement, it is that while considerable fault lies with academia’s inattention to the realities of the business world, companies put up their own formidable obstacles, as John Boylan and Aris Syntetos see it in their lead commentary, It Takes Two to Tango.

Some persuasive examples of how academics have begun to address issues of importance in industry were offered at the recent International Symposium on Forecasting in Santander, Spain. The presentations in the Forecasting in Practice Track demonstrated how recent research offers great promise for advancing the tools of the trade as well as enhancing the drive to improve forecast accuracy and other key performance metrics.

Here were the nine presentations in the Forecasting in Practice Track:

  • Trust in Forecasting
  • Building a Demand Planning Function
  • Forecasting, Supply Chain, and Financial Performance
  • Simplicity in Forecasting
  • Forecasting and Inventory Control: Mind the Gap
  • Demand Planning for Military Operations
  • Challenges in Strategic Pharmaceutical Forecasting
  • Beyond Exponential Smoothing: How to Proceed when Extrapolation Doesn’t Work
  • Forecasting Temporal Hierarchies

Santander is one of Spain's enticing alternatives to the Côte d'Azur. Situated on the Bay of Biscay, with Bilbao and the Basque Country to the east and the Picos de Europa mountains and the Cantabrian coast to the west, Santander graciously offered up the Palacio de la Magdalena as the conference venue. Built a little over a century ago by the local government of Santander as a seasonal residence for the royal family of Spain, the Magdalena Palace was converted to host educational courses in the 1930s and now serves as a major meeting and conference facility.

Further evidence of progress in bridging the research gap between academia and business comes with the upcoming Foresight Practitioner Conference, Worst Practices in Forecasting: Today's Mistakes to Tomorrow's Breakthroughs. Held in partnership with, and at the venue of, the Institute for Advanced Analytics at North Carolina State University in Raleigh, the conference features presentations that apply lessons from academic research and business experience, which we hope will motivate business forecasters to move away from dysfunctional practices.

For details on the speakers and venue, please see the announcement in this issue and go to https://forecasters.org/foresight/2016-conference/ to register. Please note that Foresight readers receive an attractive discount on the registration fee. Hope to see you there.

Farewell to Rob Dhuyvetter

Rob, a member of the Foresight Advisory Board since its beginning, has seen his duties at the Simplot Food Group shift away from forecasting and demand planning. We wish him the best in his new assignment.


Paul Goodwin on misbehaving forecasting agents

Paul Goodwin

Paul Goodwin is Professor Emeritus of Management Science at the University of Bath, and one of the speakers at this fall's Foresight Practitioner Conference (October 5-6 in Raleigh, NC). His topic will be "Use and Abuse of Judgmental Overrides to Statistical Forecasts" -- an area in which he has contributed much of the important research.

Paul also contributed four articles to the recently published Business Forecasting: Practical Problems and Solutions, authoring or co-authoring:

In the Spring 2016 issue of Foresight, Paul has a new article dealing with "Misbehaving Agents."

The Principal and the Agent

An agent is a person (or organization) that is paid to act on behalf of a principal (another person or organization).

For example, you pay a real-estate agent to sell your house. But questions can arise about whether the agent is acting in the best interests of the principal. Is the agent going for a quick sale (and a quick commission), minimizing the effort spent on the sale? Or is the agent trying to maximize the selling price, even if that requires more time and effort?

The Principal-Agent Problem in Forecasting

The principal-agent problem is well recognized in forecasting. Forecasting process participants (the agents being paid by the organization to develop the forecast) have a variety of interests. Those personal interests often compete with the organization's interest in receiving an honest and accurate forecast of the future. For a familiar example, consider the role of sales people as agents in the forecasting process.

In a June 26 post on SupplyChainShaman.com, industry analyst Lora Cecere stated:

Collaborative sales forecasting input leads to increased bias and error. My advice for the global supply chain leader is not waste your time asking the sales team to forecast.

I came to a similar conclusion in my Foresight article (Fall 2014), "Role of the Sales Force in Forecasting":

In the spirit of "economy of process," unless there is solid evidence that input from the sales force has improved the forecast (to a degree commensurate with the cost of engaging them), we are wasting their time -- and squandering company resources that could better be spent generating revenue.

Even assuming that sales people have special knowledge of future customer demand, there is still a question of their motivation to provide honest input to the forecast. If their sales quota is based on the forecast, it is in the sales person's interest to purposely forecast low (to get an easier-to-achieve quota). Goodwin cites a similar example from his earlier research*, where a marketing department deliberately forecasted low so they would "look good" to senior management by consistently beating it.

[Note: In his taxonomy of common forecasting misbehaviors, John Mello refers to deliberate under-forecasting as sandbagging. For the rest of his taxonomy, see "The Impact of Sales Forecast Game Playing on Supply Chains," Foresight (Spring 2009), 13-22, which also appears in Business Forecasting: Practical Problems and Solutions.]

Not a New Problem

Even back in 1957 (when future President Gerald R. Ford was still watching a lot of baseball on the radio), the problem of forecasting agents was recognized. James H. Lorie's article "Two important problems in sales forecasting" (The Journal of Business Vol. 30, No. 3 (July 1957), pp. 172-179) is so good I had to blog about it three times (Part 1, Part 2, Part 3).

Goodwin looks at other types of forecasting-related mischief, and the reasons behind the misbehaviors. For example, new information may make it advisable to revise a forecast -- but an agent may decline to do so, to avoid appearing wishy-washy or incompetent. Maintaining the appearance of competence is also behind the unreasonably high probabilities assigned to forecasts (common among political pundits, who get more air time the bolder and more confidently their predictions are expressed).

Another interesting type of misbehavior, and a sure way to draw attention to yourself, is anti-herding (providing forecasts deliberately different from everyone else's). Ain't misbehavin' fun?

Solutions for Misbehaving Forecasting Agents

Metrics like Forecast Value Added can help you identify forecasting participants that are just making the forecast worse.
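
To make the idea concrete, here is a minimal Python sketch of an FVA calculation. The numbers and process steps are made up for illustration -- this is not output from any particular forecasting tool. Each step's error is compared to the error of the step before it, with a naive "no change" forecast as the baseline:

    import numpy as np

    # Hypothetical monthly actuals and the forecasts produced at each process step.
    actuals    = np.array([102,  98, 110, 105, 120, 115])
    naive      = np.array([100, 102,  98, 110, 105, 120])   # prior month's actual ("no change")
    stat_fcst  = np.array([101, 100, 107, 106, 117, 116])   # statistical forecast
    sales_fcst = np.array([110, 108, 118, 112, 128, 125])   # after the sales-force override

    def mape(actual, forecast):
        # Mean absolute percentage error
        return np.mean(np.abs(actual - forecast) / actual) * 100

    for name, fcst in [("Naive", naive), ("Statistical", stat_fcst), ("Sales override", sales_fcst)]:
        print(f"{name:15s} MAPE = {mape(actuals, fcst):5.1f}%")

    # FVA of a step = MAPE before the step minus MAPE after the step.
    # A negative value means the step made the forecast worse.
    print(f"FVA of statistical model vs. naive:    {mape(actuals, naive) - mape(actuals, stat_fcst):+.1f} points")
    print(f"FVA of sales override vs. statistical: {mape(actuals, stat_fcst) - mape(actuals, sales_fcst):+.1f} points")

In this fabricated example the statistical model adds value over the naive baseline, while the override subtracts it -- exactly the kind of pattern FVA analysis is designed to expose.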

Goodwin suggests that forecast users (like planners and upper management) could be educated to accept the high level of uncertainty that is often unavoidable in forecasting. And it would help for reward systems to be designed to encourage honesty and accuracy (something attempted in the Gonik** system for sales force compensation developed in the 1970s for IBM Brazil).


*Goodwin, P. (1998). Enhancing Judgmental Sales Forecasting: The Role of Laboratory Research. In Wright, G. & Goodwin, P. (eds.), Forecasting with Judgement. Chichester: Wiley, 91-111.

**Gonik, J. (1978). Tie Salesmen's Bonuses to Their Forecasts, Harvard Business Review, May-June 1978, 116-122.

 


Using information criteria to select a forecasting model (Part 2)

Let's continue now to Nikolaos Kourentzes' blog post on How to choose a forecast for your time series.

Using a Validation Sample

Nikos first discusses the fairly common approach of using a validation or "hold out" sample.

The idea is to build your model based on a subset of the historical data (the "fitting" set), and then test its forecasting performance over the historical data that has been held out (the "validation" set). For example, if you have four years of monthly sales data, you could build models using the oldest 36 months, and then test their performance over the most recent 12 months.

You might recognize this approach from the recent BFD blog Rob Hyndman on measuring forecast accuracy. Hyndman uses the terminology (which may be more familiar to data miners) of "training data" and "test data." He suggested that when there is enough history, about 20% of the observations (the most recent history) should be held out for the test data. The test data should be at least as large as the forecasting horizon (so hold out 12 months if you need to forecast one year into the future).

Hyndman uses this diagram to show the history divided into training and test data. The unknown future (that we are trying to forecast) is to the right of the arrow:

Nikos works through a good example of this approach, comparing an exponential smoothing model and an ARIMA model. Each model is built using only the "fitting" set (the oldest history), and generates forecasts for the time periods covered in the validation set.

How accurately the competing models forecast the validation set can help you decide which type of model is more appropriate for the time series. You could then use the "winning" model to forecast the unknown future periods. An obvious drawback is that your forecasting model has only used the older fitting data, essentially sacrificing the more recent data in the validation set.

Another alternative, once you've used the above approach to determine which type of model is best, is to rebuild the same type of model based on the full history (fitting + validation sets).
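
Here is a minimal Python sketch of that workflow (not Nikos's actual example -- the data are fabricated and the model settings are assumptions chosen only for illustration), using statsmodels to compare an exponential smoothing model and an ARIMA model over a 12-month validation set:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.arima.model import ARIMA

    # Fabricated example: 48 months of sales with trend, seasonality, and noise.
    rng = np.random.default_rng(1)
    idx = pd.date_range("2012-01-01", periods=48, freq="MS")
    y = pd.Series(100 + 0.5 * np.arange(48)
                  + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + rng.normal(0, 3, 48), index=idx)

    fit_set, valid_set = y[:36], y[36:]    # oldest 36 months vs. most recent 12 months

    # Fit each candidate model on the fitting set only.
    ets   = ExponentialSmoothing(fit_set, trend="add", seasonal="add", seasonal_periods=12).fit()
    arima = ARIMA(fit_set, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()

    def mape(actual, forecast):
        return np.mean(np.abs(actual - forecast) / actual) * 100

    # Forecast the 12 held-out months and compare accuracy on the validation set.
    for name, model in [("ETS", ets), ("ARIMA", arima)]:
        fcst = np.asarray(model.forecast(12))
        print(f"{name:6s} validation MAPE = {mape(valid_set.values, fcst):.1f}%")

    # Once a "winner" is chosen, refit that model type on the full history
    # before forecasting the truly unknown future periods.
    final = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
    future = final.forecast(12)

Which model wins here depends entirely on the fabricated data; the point is the workflow -- fit on the older history, compare on the held-out months, then refit the chosen model type on everything.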

Using Information Criteria

Information criteria (IC) provide an entirely different way of evaluating model performance. It is well recognized that more complex models can be constructed to fit the history better. In fact, it is always possible to create a model that fits a time series perfectly. But our job as forecasters isn't to fit models to history -- it is to generate reasonably good forecasts of the future.

As we saw in the previous BFD post, overly complex models may "overfit" the history, and actually generate very inappropriate forecasts.

Cubic model: an example of overfitting the model to history

Measures like Akaike's Information Criterion (AIC) help us avoid overfitting by balancing goodness of fit with model complexity (penalizing more complex models). Nikos provides a thorough example showing how the AIC works. As he points out, "The model with the smallest AIC will be the model that fits best to the data, with the least complexity and therefore less chance of overfitting."

Another benefit of the AIC is that it uses the full time series history; there is no need for separate fitting and validation sets. But a drawback is that the AIC cannot be used to compare models from different model families (so you could not do the exponential smoothing vs. ARIMA comparison shown above). There is plenty of literature on the AIC, so you can find more details before employing it.
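
As a rough sketch of what IC-based selection looks like in code (again using statsmodels and the same fabricated monthly series as above, and staying within a single model family for the reason just noted), compare the AIC of several exponential smoothing variants and keep the smallest:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Same fabricated 48-month series as in the validation-sample sketch above.
    rng = np.random.default_rng(1)
    idx = pd.date_range("2012-01-01", periods=48, freq="MS")
    y = pd.Series(100 + 0.5 * np.arange(48)
                  + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + rng.normal(0, 3, 48), index=idx)

    # AIC = 2k - 2*ln(L): goodness of fit penalized by the number of parameters k.
    candidates = {
        "simple smoothing": dict(trend=None,  seasonal=None),
        "with trend":       dict(trend="add", seasonal=None),
        "trend + seasonal": dict(trend="add", seasonal="add", seasonal_periods=12),
    }
    fits = {name: ExponentialSmoothing(y, **spec).fit() for name, spec in candidates.items()}

    for name, res in fits.items():
        print(f"{name:18s} AIC = {res.aic:8.1f}")
    print("Selected by AIC:", min(fits, key=lambda name: fits[name].aic))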

Combining Forecasts

Nikos ends his post with a great piece of advice on combining models. Instead of struggling to pick a single best model for a given time series, why not just take an average of several "appropriate" models, and use that as your forecast?

There is growing evidence that combining forecasts can be effective at reducing forecast errors, while being less sensitive to the limitations of a single model. SAS® Forecast Server is one of the few commercial packages that readily allows you to combine forecasts from multiple models.
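
A simple average is the easiest way to combine forecasts, and it is often surprisingly hard to beat. Here is a minimal sketch (again with fabricated data and assumed model settings, and not a depiction of how SAS Forecast Server does it) that averages the ETS and ARIMA forecasts from the examples above:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.arima.model import ARIMA

    # Same fabricated monthly series as in the earlier sketches.
    rng = np.random.default_rng(1)
    idx = pd.date_range("2012-01-01", periods=48, freq="MS")
    y = pd.Series(100 + 0.5 * np.arange(48)
                  + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
                  + rng.normal(0, 3, 48), index=idx)

    # Fit two reasonable models on the full history ...
    ets   = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
    arima = ARIMA(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()

    # ... and use the simple average of their forecasts as the combined forecast.
    h = 12
    combined = (ets.forecast(h) + arima.forecast(h)) / 2
    print(combined.round(1))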


Using information criteria to select a forecasting model (Part 1)

Dr. Nikolaos Kourentzes, forecasting professor with great hair

Nikolaos Kourentzes is Associate Professor at Lancaster University, and a member of the Lancaster Centre for Forecasting. In addition to having a great head of hair for a forecasting professor, Nikos has a great head for explaining fundamental forecasting concepts.

In his recent blog post on How to choose a forecast for your time series, Nikos discusses the familiar validation (or "hold out") sample method, and the less familiar approach of using information criteria (IC). Using an IC helps you avoid the common problem of "overfitting" a model to the history. We can see what this means by a quick review of the time series forecasting problem.

The Time Series Forecasting Problem

A time series is a sequence of data points taken across equally spaced intervals. In business forecasting, we usually deal with things like weekly unit sales, or monthly revenue. Given the historical time series data, the forecaster's job is to predict the values for future data points. For example, you may have three years of weekly sales history for product XYZ, and need to forecast the next 52 weeks of sales for use in production, inventory, and revenue planning.

There is much discussion on the proper way to select a forecasting model. It may seem obvious to just choose a model that best fits the historical data. But is that such a smart thing?

Unfortunately, having a great fit to history is no guarantee that a model will be any good at forecasting the future. Let's look at a simple example to illustrate the point:

Suppose you have four weeks of sales: 5, 6, 4, and 7 units. What model should you choose to forecast the next several weeks? A simple way to start is with an average of the four historical data points:

Model 1: Weekly Mean

We see this model forecasts 5.5 units per week into the future, and there is an 18% error in the fit to history. Can we do better? Let's try a linear regression to capture any trend in the data:

Model 2: Linear Regression

This model forecasts increasing sales, and the historical fit error has been reduced to 15%. But can we do better? Let's now try a quadratic model:

Model 3: Quadratic

The quadratic model cuts the historical fit error in half to 8%, and it is also showing a big increase in future sales. But we can do even better with a cubic model:

Model 4: Cubic

It is always possible to find a model that fits your time series history perfectly. We have done so here with a cubic model. But even though the historical fit is perfect, is this an appropriate model for forecasting the future? It doesn't appear to be. This is a case of "overfitting" the model to the history.

In this example, only the first two models, the ones with the worst fit to history, appear to be reasonable choices for forecasting the near-term future. While fit to history is a relevant consideration when selecting a forecasting model, it should not be the sole consideration.
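
The whole example can be reproduced in a few lines of Python (a sketch using numpy; the fit error below is the mean absolute error as a percentage of average sales, which is close to, but not necessarily the same as, the metric used in the original charts):

    import numpy as np

    weeks = np.array([1, 2, 3, 4])
    sales = np.array([5, 6, 4, 7])
    future_weeks = np.array([5, 6, 7, 8])

    # Fit polynomials of increasing degree: 0 = mean, 1 = linear trend,
    # 2 = quadratic, 3 = cubic (which passes through all four points exactly).
    for degree, label in [(0, "Mean"), (1, "Linear"), (2, "Quadratic"), (3, "Cubic")]:
        coefs = np.polyfit(weeks, sales, degree)
        fit_error = np.mean(np.abs(sales - np.polyval(coefs, weeks))) / sales.mean() * 100
        forecast = np.round(np.polyval(coefs, future_weeks), 1)
        print(f"{label:9s} fit error = {fit_error:4.1f}%   weeks 5-8 forecast = {forecast}")

The cubic model's fit error is zero, but its forecasts for weeks 5 through 8 explode upward -- the signature of overfitting.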

In the next post we'll look at Nikos' discussion of validation (hold out) samples, and how use of information criteria can help you avoid overfitting.


Extending SAS Forecast Server Client

SAS® Forecast Server was released in 2005, and remains our flagship forecasting offering. It provides large-scale automatic forecasting, creating an appropriate customized forecasting model for each point in an organization's forecasting hierarchy.

The original SAS® Forecast Studio GUI lets users point and click their way to loading data, defining the hierarchy, specifying the roles of variables, identifying events, automatically creating the models, and generating the forecasts. (This 5-minute SAS Forecast Server demonstration illustrates the process, including reviewing the generated forecasts, making manual overrides, and reconciling the adjusted forecasts.)

Since the initial release in 2005, two additional interfaces have been developed to provide more capabilities to forecasters:

SAS® Time Series Studio

Forecasting immediately brings to mind the development of complex models and generation of forecast values, but equally important in the forecasting process is the analytic step prior to generating forecasts – understanding the structure of your time series data.  SAS Time Series Studio is a GUI for the interactive exploration and analysis of large volumes of time series data prior to forecasting. (This Analytics2012 presentation illustrates the importance of understanding the structure of your time series data in generating forecasts.)

SAS Time Series Studio provides forecast analysts with tools for identifying and managing data problems, including outliers, missing values, and date ID issues. In addition, basic and advanced time series characterization, segmentation of the data into subsets, and structural manipulation of the collective time series (hierarchy exploration) all contribute to faster forecast implementation and better modeling through an increased understanding of the data.

SAS® Forecast Server Client

Forecast Server Client workflow diagram

Forecasters may need access to their system anywhere and anytime, and now they have it with SAS Forecast Server Client. This web interface, released last year, provides a standard forecasting workflow that addresses the needs of the majority of users. It also integrates capabilities from the Forecast Studio and Time Series Studio GUIs.

As an example, SAS Forecast Server Client supports rules-based segmentation of time series to facilitate different modeling strategies per segment. And it provides enhanced tracking of forecasting performance. (Learn more in this time series segmentation white paper and video.)
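
How the segmentation rules are defined is specific to the product, but the general idea is easy to sketch. Purely as a hypothetical illustration (in Python, with made-up thresholds and data -- not the SAS Forecast Server Client implementation), series might be bucketed by volume and volatility so that each segment gets its own modeling strategy:

    import numpy as np
    import pandas as pd

    # Fabricated demand histories: 36 months for six items with very different volumes.
    rng = np.random.default_rng(7)
    history = pd.DataFrame({f"item_{i:02d}": rng.poisson(lam, 36)
                            for i, lam in enumerate([200, 150, 40, 5, 3, 1], start=1)})

    def segment(series: pd.Series) -> str:
        # Hypothetical rules: total volume first, then coefficient of variation.
        cv = series.std() / series.mean() if series.mean() > 0 else float("inf")
        if series.sum() < 100:
            return "low volume -> simple or intermittent-demand model"
        if cv > 0.5:
            return "erratic    -> robust model, wide prediction intervals"
        return "stable     -> seasonal exponential smoothing / ARIMA"

    for item, series in history.items():
        print(f"{item}: {segment(series)}")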

One of the great benefits of SAS forecasting software is that it is just one part of the overall SAS system. SAS gives users a powerful programming language, and all sorts of tools for data management, visualization, analysis, reporting -- and pretty much anything else.

It is a common practice for SAS Forecast Server users to first set up their forecasting projects through the GUI. This generates SAS code that users can access, tweak, and schedule into a batch forecasting process. All items can be forecast automatically, with little interaction from the forecast analyst. But the analyst may want to put extra effort into particularly high value forecasts, to squeeze out every last percentage point of forecast accuracy. They can do this using a wealth of SAS procedures, and by writing their own SAS code.

Extending SAS® Forecast Server Client

In the most recent technical paper from SAS forecasting R&D, Alex Chien (Director, Advanced Analytics R&D) has written on Extending SAS® Forecast Server Client. This paper shows how to write plug-ins to extend the capabilities of SAS Forecast Server. It shows how to use Lua as the interface between SAS Forecast Server Client and the plug-ins, and gives code examples for both a segmentation strategy plug-in and a modeling strategy plug-in.

You can think of a plug-in as a macro with a list of arguments that control how the macro functions on data or instructions. The new LUA procedure (available since the 3rd maintenance release of SAS 9.4) is a standard Lua interface between SAS Forecast Server and the plug-ins. (Find more information about programming in Lua at https://www.lua.org/pil/contents.html.)

This new paper will be of keen interest to SAS forecasting customers. It is full of code examples (in both Lua and SAS), showing step-by-step how to enhance their SAS Forecast Server implementation by creating their own plug-ins.


Practical advice for better business forecasting

SAS® Insights is a section of the sas.com website devoted to being "your top source for analytics news and views." It contains articles, interviews, research reports, and other content from both SAS and non-SAS contributors. In a new article posted this week, we added three short videos containing practical advice for better business forecasting.

Practical Advice for Better Business Forecasting

The three videos introduce articles from the book Business Forecasting: Practical Problems and Solutions:

Business Forecasting book cover

The Disturbing 52% discusses recent research by Steve Morlidge of CatchBull Ltd. In his sample of eight supply chain companies in the UK, he found that over half of their forecasts were less accurate than a naive "no change" model.

Forecasting Performance Benchmarks - Answers or Not? looks at Stephan Kolassa's critique of the validity and usefulness of forecast accuracy surveys and benchmarks. Kolassa concludes that such "external" benchmarking may be futile.

FVA: Get Your Reality Check Here shows how Forecast Value Added analysis can identify forecasting activities that waste resources by failing to improve the forecast. Using FVA, many companies have found steps in their forecasting process that just make the forecast worse!

Foresight Practitioner Conference -- for more practical advice

In conjunction with the International Institute of Forecasters and the Institute for Advanced Analytics at North Carolina State University, the 2016 Foresight Practitioner Conference will be held in Raleigh, NC (October 5-6, 2016) with the theme of:

Worst Practices in Forecasting: Today's Mistakes to Tomorrow's Breakthroughs

This is the first ever conference dedicated entirely to the exposition of bad forecasting practices (and the ways to remedy them). I am co-chairing the event along with Len Tashman, editor-in-chief of Foresight. Our "worst practices" theme reflects an essential principle:

The greatest leap forward for our forecasting functions lies not in squeezing an extra trickle of accuracy from our methods and procedures, but rather in recognizing and eliminating practices that do more harm than good.

As discussed so many times in this blog, we often shoot ourselves in the foot when it comes to forecasting. We spend vast amounts of time and money building elaborate systems and processes, while almost invariably failing to achieve the level of accuracy desired. Organizational politics and personal agendas contaminate what should be an objective, dispassionate, and largely automated process.

At this conference you'll learn what bad practices to look for at your organization, and how to address them. I'll deliver the introductory keynote: Worst Practices in Forecasting, and the rest of the speaker lineup includes:

  • Len Tashman - Foresight Founding Editor and Director of the Center for Business Forecasting: Forecast Accuracy Measurement: Pitfalls to Avoid, Practices to Adopt.
  • Paul Goodwin, coauthor of Decision Analysis for Management Judgment and Professor Emeritus of Management Science, University of Bath: Use and Abuse of Judgmental Overrides to Statistical Forecasts.
  • Chris Gray - coauthor of Sales & Operations Planning - Best Practices: Lessons Learned and principal of Partners for Excellence:  Worst Practices in S&OP and Demand Planning.
  • Steve Morlidge - coauthor of Future Ready: How to Master Business Forecasting and former Finance Director at Unilever: Forecasting Myths - and How They Can Damage Your Health.
  • Wallace DeMent - Demand Planning Manager at Pepsi Bottling, responsible for forecast, financial and data analysis: Avoiding Dangers in Sales Force Input to the Forecasts.
  • Anne Robinson - Executive Director, Supply Chain Strategy and Analytics, Verizon Wireless and 2014 President of the Institute for Operations Research and the Management Sciences (INFORMS): Forecasting and Inventory Optimization: A Perfect Match Except When Done Wrong.
  • Erin Marchant - Lead Data and Systems Management Analyst, Moen: Worst Practices in Forecasting Software Implementation.
  • Fotios Petropoulos - Assistant Professor, Cardiff University: The Bad and The Good in Software Practices for Judgmental Selection of Forecast Models.

Take advantage of the Early Bird registration and save $200 through June 30.


Rob Hyndman on Time-Series Cross-Validation

Sometimes one's job gets in the way of one's blogging. My last three months have been occupied with the launch of SAS® Viya™, our next-generation high-performance and visualization architecture.

Please take the time to find more information on the SAS Viya website, and apply for a free preview.

Rob Hyndman on Time-Series Cross-Validation

Back in March we looked at Rob Hyndman's article on "Measuring Forecast Accuracy" that appears in the new book Business Forecasting: Practical Problems and Solutions.

Hyndman discussed the use of training and test datasets to evaluate performance of a forecasting model, and we showed the method of time-series cross-validation for one-step ahead forecasts.

Time-series cross-validation is used when there isn't enough historical data to hold out a sufficient amount of test data. (We typically want about 20% of the history -- the most recent observations -- for the test data, and at least enough to cover the desired forecasting horizon.)

In real life, we often have to forecast more than just one step ahead. For example, if there is nothing we can do to impact supply or demand over the next two weeks, then the 3-week ahead forecast is of interest.

Hyndman shows how the time-series cross-validation procedure based on a rolling forecast origin can be modified to allow multistep errors to be used.

Suppose we have a total of T observations (e.g., 48 months of historical sales), and we need k observations to produce a reliable forecast. (In this case, k might be 36 months for our training dataset, with the most recent 12 months as our test dataset.) If we want models that produce good h-step-ahead forecasts, the procedure is as follows (a code sketch appears after the steps):

  1. Select the observations at time k + h + i - 1 for the test set, and use the observations at times 1, 2, ..., k + i - 1 to estimate the forecasting model. Compute the h-step error on the forecast for time k + h + i - 1.
  2. Repeat the above step for i = 1, 2, ..., T - k - h + 1 where T is the total number of observations.
  3. Compute the forecast accuracy measures based on the errors obtained. When h = 1, this gives the same procedure as the one-step ahead forecast in the previous blog.
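
Here is a minimal Python sketch of those three steps. The "model" is just a placeholder -- a naive last-value forecast -- and the data are fabricated; in practice you would plug in whatever model you are evaluating:

    import numpy as np

    def rolling_origin_errors(y, k, h, fit_and_forecast):
        # Rolling-origin cross-validation for h-step-ahead forecasts.
        # y: full history of length T; k: minimum training size; h: horizon.
        # fit_and_forecast(train, h) must return the h-step-ahead forecast value.
        T = len(y)
        errors = []
        for i in range(1, T - k - h + 2):          # i = 1, 2, ..., T - k - h + 1
            train = y[: k + i - 1]                 # observations 1, ..., k + i - 1
            target = y[k + h + i - 2]              # observation at time k + h + i - 1
            errors.append(target - fit_and_forecast(train, h))
        return np.array(errors)

    # Placeholder model: the h-step-ahead forecast is simply the last observed value.
    naive = lambda train, h: train[-1]

    rng = np.random.default_rng(0)
    y = 100 + np.cumsum(rng.normal(0, 2, 48))      # e.g., 48 months of history
    errs = rolling_origin_errors(y, k=36, h=3, fit_and_forecast=naive)
    print(f"{len(errs)} three-step-ahead errors, MAE = {np.mean(np.abs(errs)):.2f}")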

Previously, my colleague Udo Sglavo has shown how to implement Hyndman's method in SAS® Forecast Server in this two-part blog post (Part 1, Part 2).


Guest blogger: Len Tashman previews spring issue of Foresight

Editor Len Tashman's Preview of the Spring Issue of Foresight

Len Tashman, Editor-in-Chief

Misbehaving, the feature section of this 41st issue of Foresight, was prompted by the publication of Richard Thaler’s eye-opening book of the same title, a work that explains the often surprising gap between (a) the models we use and the organizational policies derived from them, and (b) the outcomes we just didn’t expect. Models assume that economic actors (owners, managers, consumers) behave rationally, working purposefully toward a clearly defined goal. But actors misbehave, their actions sidetracked by seemingly irrelevant factors, and their self-serving interests can undermine organizational efforts.

Our examination of misbehaving begins with my review of Thaler’s book and description of the potential insights for the forecasting profession. The review is followed by some brief reflections from Foresight’s editors on the problems of misbehaving in their particular branches of our field and what we can hope to do about these misbehaviors.

  • Paul Goodwin, Hot New Research Editor, discusses Misbehaving Agents, those organization forecasters and planners who subvert organizational goals.
  • Roy Batchelor, Financial Forecasting Editor, examines Misbehavior in Forecasting Financial Markets.
  • John Mello, Supply Chain Forecasting Editor, describes policies for Eliminating Sales-Forecasting Misbehavior.
  • Lastly, Fotios Petropoulos, FSS Forecasting Editor, and colleague Kostas Nikolopoulos reveal Misbehaving, Misdesigning, and Miscommunicating in our forecasting support systems.

Although not usually described as misbehaving, overreliance on spreadsheets as forecasting support tools has been a continuing problem. Henry Canitz takes companies to task for use of primitive tools in Overcoming Barriers to Improving Forecasting Capabilities. And Nari Viswanathan comments on the limitations of traditional S&OP technology tools.

Continuing a series of articles on forecast-accuracy metrics, Steve Morlidge proposes several creative graphical and tabular displays for Using Error Analysis to Improve Forecast Performance.

Our Forecaster in the Field interview is with Mark Blessington, noted sales-forecasting author, whose article “Sales Quota Accuracy and Forecasting” appeared in our Winter 2016 issue.

Wrapping up the Spring 2016 issue is the first in a series of commentaries examining the gap between forecasting research and forecasting practice. In Forecasting: Academia vs. Business, Sujit Singh argues that while much research has been devoted to the statistical and behavioral basis of forecasting as well as to the application of big data, missing is research relevant to business performance. By measuring the benefits of good forecasting, this type of research can provide the most value to the business user.

Upcoming in the next issue of Foresight are articles on how to improve safety-stock calculations, the need for more structured communications in S&OP, and a feature section on the issue of the gap between forecasting research in academia and the forecasting research that business would like to see.

Foresight Practitioner Conference: Worst Practices in Forecasting

Registration has begun for the 2016 Foresight Practitioner Conference, Worst Practices in Forecasting: Today’s Mistakes to Tomorrow’s Breakthroughs, which will be held October 5-6 at North Carolina State University in Raleigh in partnership with their Institute for Advanced Analytics.

See the conference announcement – and take advantage of the very generous registration fees for Foresight readers who sign up early!

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.

    Mike is also the author of The Business Forecasting Deal, and co-editor of Business Forecasting: Practical Problems and Solutions. He also edits the Forecasting Practice section of Foresight: The International Journal of Applied Forecasting.