Rob Hyndman on measuring forecast accuracy

The new book Business Forecasting: Practical Problems and Solutions contains a large section of recent articles on forecasting performance evaluation and reporting. Among the contributing authors is Rob Hyndman, Professor of Statistics at Monash University in Australia.

To anyone needing an introduction, Hyndman's credentials include:

  • Editor-in-Chief of the International Journal of Forecasting
  • Board of Directors of the International Institute of Forecasters
  • Co-author (with Makridakis and Wheelwright) of the classic textbook Forecasting: Methods and Applications (3rd edition)

Drawing from his online textbook Forecasting: Principles and Practice (coauthored with George Athanasopoulos), Hyndman explains the use of Training and Test sets for measuring forecast accuracy.

Rob Hyndman on Measuring Forecast Accuracy


A common "worst practice" is to select forecasting models based solely on their fit to the history that was used to construct them. As Hyndman points out, "A model that fits the data well does not necessarily forecast well."

Unscrupulous consultants or forecasting software vendors can wow impressionable customers with models that closely (or even perfectly) fit their history. Yet fit to history provides little indication of how well the model will actually forecast the future.

Hyndman's article discusses scale-dependent errors (e.g., MAE and RMSE), percentage errors (MAPE), and scaled errors (MASE), and the situations in which each are appropriate. Note that scaled errors are a relatively new type of forecasting performance metric, first proposed by Hyndman and Koehler in 2006. [Find more details in their article: Another look at measures of forecast accuracy. International Journal of Forecasting 22(4), 679-688.]
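For readers who want to see these measures side by side, here is a minimal Python sketch (the function name and toy numbers are my own, purely for illustration; the MASE scaling uses the in-sample MAE of the naive no-change forecast, per Hyndman and Koehler's definition):

```python
import numpy as np

def accuracy_measures(actuals, forecasts, train):
    """MAE, RMSE, MAPE, and MASE for forecasts over a test period.

    `train` is the in-sample history used to scale MASE: the scaling factor
    is the MAE of the one-step naive (no-change) forecast on that history.
    """
    actuals, forecasts, train = map(np.asarray, (actuals, forecasts, train))
    errors = actuals - forecasts

    mae  = np.mean(np.abs(errors))                   # scale-dependent
    rmse = np.sqrt(np.mean(errors ** 2))             # scale-dependent
    mape = 100 * np.mean(np.abs(errors / actuals))   # undefined if any actual is zero
    naive_mae = np.mean(np.abs(np.diff(train)))      # in-sample naive-forecast MAE
    mase = mae / naive_mae                           # scaled error
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "MASE": mase}

# Toy example: 12 months of history, 3 held-out months, and a forecast for them
history  = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
test     = [115, 126, 141]
forecast = [120, 123, 136]
print(accuracy_measures(test, forecast, history))
```

A MASE below 1 means the forecast errors were smaller, on average, than the in-sample errors of the naive no-change method.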

To avoid the worst practice mentioned above, Hyndman suggests dividing history into "training data" (used to estimate the model) and "test data" (used to evaluate forecasts generated by the model). The test data is often referred to as a "hold out sample," and this is a well-recognized approach.

Training data and Test data

When there is sufficient history, about 20% of the observations (the most recent) should be "held out" to serve as test data. The test data should be at least as large as the maximum forecast horizon required. So hold out 12 months (or 52 weeks) if you have to forecast one year out.
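Here is a minimal sketch of that holdout split, assuming monthly data and a 12-month forecast horizon (the synthetic series and the simple drift "model" are placeholders; in practice you would fit your actual model to the training portion only):

```python
import numpy as np

rng = np.random.default_rng(42)
months = 48
series = 100 + 2 * np.arange(months) + rng.normal(0, 5, months)  # synthetic monthly demand

horizon = 12                                        # must forecast a year out,
train, test = series[:-horizon], series[-horizon:]  # so hold out the last 12 months

# Placeholder "model": a drift forecast estimated from the training data only
slope = (train[-1] - train[0]) / (len(train) - 1)
forecast = train[-1] + slope * np.arange(1, horizon + 1)

mae = np.mean(np.abs(test - forecast))
print(f"Held out {horizon} of {months} observations; test MAE = {mae:.1f}")
```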

Unfortunately, we often don't have enough historical data for the recommended amount of test data. So for shorter time series, Hyndman illustrates the method of time series cross-validation, in which a series of training and test sets are used.

Time Series Cross-Validation

This approach uses many different training sets, each one containing one more observation than the previous one. Suppose you want to evaluate the one-step-ahead forecasts.

Evaluating one-step-ahead forecasts

As we see in this diagram, each training set (black dots) contains one more observation than the previous one, and each test set (gray dots) contains the next observation. The light gray dots are ignored. Per Hyndman:

Suppose k observations are required to produce a reliable forecast. Then the process works as follows.

  1. Select the observation at time k+i for the test set, and use the observations at times 1, 2,…, k+i − 1 to estimate the forecasting model. Compute the error on the forecast for time k+i.
  2. Repeat the above step for i = 1, 2,…, T − k, where T is the total number of observations.
  3. Compute the forecast accuracy measures based on the errors obtained.

So if you had T=48 months of historical data and used the first 36 as your initial training set, then you would have 12 one-step ahead forecasts over which to compute your model's accuracy.
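Here is a minimal sketch of that loop, using a naive one-step forecast as a stand-in for whatever model you would actually re-estimate on each training set (the synthetic series, T = 48, and k = 36 simply mirror the example above):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 48                                             # 48 months of history
y = 100 + 2 * np.arange(T) + rng.normal(0, 5, T)   # synthetic monthly demand

k = 36                                    # observations needed for a reliable forecast
errors = []
for i in range(1, T - k + 1):             # i = 1, 2, ..., T - k
    train = y[:k + i - 1]                 # observations at times 1, ..., k+i-1
    actual = y[k + i - 1]                 # the observation at time k+i
    forecast = train[-1]                  # stand-in model: naive one-step forecast
    errors.append(actual - forecast)

mae = np.mean(np.abs(errors))             # accuracy over the T - k one-step forecasts
print(f"{len(errors)} one-step-ahead forecasts, MAE = {mae:.1f}")
```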

In the next post, we'll look at the more general (and more common) situation where you are interested in models that produce good h-step-ahead forecasts (for example, for 3 months ahead).


Model interpretability in forecasting

"The Role of Model Interpretability in Data Science" is a recent post on Medium.com by Carl Anderson, Director of Data Science at the fashion eyewear company Warby Parker. Anderson argues that data scientists should be willing to make small sacrifices in model quality in order to deliver a model that is easier to interpret and explain, and is therefore more acceptable to management.

Can we make this same argument in business forecasting?

What is Meant by Model Interpretability?

A model that is readily understood by humans is said to be interpretable.

An example is a forecast based on the 10-week moving average of sales. Management may not agree with the forecast that is produced, but at least everyone understands how it was calculated. And they may make forecast overrides based on their belief that next week's sales will be higher (or lower) than the recent 10-week average.

A trend line model is also easily understood by management. While the mathematical calculations are more complicated, everyone understands the basic concept that "the current growth (or decline) trend will continue."

Anderson observes that an interpretable model is not necessarily less complex than an uninterpretable model.

He uses the example of a principal components model he created to predict new product demand. But principal components is not something readily explainable to his business audience. So he ended up creating a more complex model (i.e., having more variables) that had no additional predictive power, but could be understood by his users.

...the idea of my model is to serve as an additional voice to help them [demand planners] make their decisions. However, they need to trust and understand it, and therein lies the rub. They don’t know what principal component means. It is a very abstract concept. I can’t point to a pair of glasses and show them what it represents because it doesn’t exist like that. However, by restricting the model to actual physical features, features they know very well, they could indeed understand and trust the model. This final model had a very similar prediction error profile — i.e., the model was basically just as good — and it yielded some surprising insights for them.

Increasing model complexity (for no improvement in model performance) is antithetical to everything we are taught about doing science. Whether referred to as Occam's Razor, the Principle of Parsimony, or something else, there is always a strong preference for simpler models.

But Anderson makes a very good point -- a point particularly well taken for business forecasters. We tend to work in a highly politicized environment. Management already has an inclination to override our model-generated forecasts with whatever they please. If our numbers are coming out of an inscrutable black box, they may be even less inclined to trust our work.

Anderson concludes with reasons why a poorer/more complex, but interpretable, model may be favored:

  • Interpretable models can be understood by business decision makers, making them more likely to be trusted and used.
  • Interpretable models may yield insights.
  • As interpretable models build trust in the model builder, this may allow more sophisticated approaches in the future.
  • As long as the interpretable model performs nearly as well as the better (but uninterpretable) model, you aren't losing much.

While this may not be true in all areas of predictive analytics, a curious fact is that simpler models tend to do better at forecasting. And even a model as simple as the naïve "no change" model performed better than half of the forecasts in a Steve Morlidge study reported in Foresight.

So thankfully, interpretability and performance are not always at odds. But it remains a challenge to keep participants from overriding the forecast with their biases and personal agendas, and just making it worse.

 


Guest blogger: Len Tashman previews Winter issue of Foresight

Editor Len Tashman's preview of the Winter 2016 issue of Foresight

Len Tashman, Editor-in-Chief of Foresight

This 40th issue of Foresight begins with a review of the new book by Philip Tetlock and Dan Gardner with the enticing title Superforecasting: The Art and Science of Prediction. Reviewer Steve Morlidge explains that

…the “superforecasters” of the title are those individuals who consistently outperformed other members of Tetlock’s team, and the book sets out to answer the question, “What makes these people so effective as forecasters?”

Perhaps no issue has received more attention in the forecasting literature than that of the relative merits of simple vs. complex forecasting methods. Although the definitions of simple and complex have varied, many studies report evidence that simple methods are often as accurate as – and occasionally more accurate than – their more complex counterparts.

Now, the two articles in our section on Forecasting Principles and Methods help us to better understand why simplicity in method choice can be a virtue.

In our first piece, Bias-Variance Trade-offs in Demand Forecasting, Konstantinos Katsikopoulos and Aris Syntetos begin by illustrating how a forecast-accuracy metric can be decomposed into two distinct attributes: bias, or a tendency to systematically over- or under-forecast; and variance, the magnitude of fluctuation in the forecasts. They then illustrate that

…simple methods tend to have large bias but lower variance, while complex methods have the opposite tendency: small bias but large variance. So we might prefer a method with smaller variance even if it has larger bias: that is, we can reduce the (total error) by replacing an unbiased but high-variance forecast method with a biased but low-variance forecast method. More generally, we should seek the right amount of complexity.
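As a toy numerical illustration of the trade-off the authors describe (the numbers are invented; the only point is that expected squared error decomposes into bias squared plus variance, so a biased but stable method can win):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 100.0                                  # the quantity being forecast
n = 100_000                                 # many hypothetical forecast occasions

method_a = mu + rng.normal(0, 10, n)        # unbiased but high-variance forecasts
method_b = (mu - 4) + rng.normal(0, 3, n)   # biased by -4, but low-variance

for name, f in [("A (unbiased, noisy)", method_a), ("B (biased, stable)", method_b)]:
    bias2 = (f.mean() - mu) ** 2
    var = f.var()
    mse = np.mean((f - mu) ** 2)            # MSE is approximately bias^2 + variance
    print(f"{name}: bias^2 = {bias2:5.1f}, variance = {var:6.1f}, MSE = {mse:6.1f}")
```

Method B carries a bias of 4 units, yet its total squared error (roughly 16 + 9 = 25) comes in well below Method A's (roughly 0 + 100 = 100).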

Stephan Kolassa, for his article in this section, not only endorses their argument but goes a step further to show why Sometimes It’s Better to Be Simple than Correct:

…correct models—those that include all the important demand-influencing factors—can yield bad forecasts. What’s surprising is that a correct model can yield systematically worse forecasts than a simpler, incorrect model!

The bottom line is that complexity carries dangers, and so it is particularly unwise to add complexity just to eke out a small increase in model performance—a practice to which we often succumb.

Foresight has devoted much space in the past five years to descriptions and evaluations of efforts to promote Collaborative Forecasting and Planning, such as sales and operations planning and information sharing across supply-chain partners. But many firms report they’re not satisfied with the results, that the integration they seek across functional areas—forecasting, sales, marketing, operations, finance—has not occurred, and that the often expensive systems they’ve installed have not overcome the functional silos that impede achievement of company-wide objectives. Dean Sorensen draws upon his decades of experience in advising firms on integrated planning to offer an explanation for this corporate dissatisfaction. He observes that

…as (organizational) complexity rises, capability gaps are exposed in processes that are supported by separate S&OP, financial planning, budgeting, and forecasting applications. What’s missing is a planning and forecasting process that breaks down functional silos by integrating strategic, financial, and operational processes, and extending beyond manufacturing to broader supply chain, selling, general, and administrative activities.

New, integrative technologies are emerging, and Dean describes how these technology innovations provide incremental capabilities that stand-alone S&OP and financial planning, budgeting, and forecasting applications do not.

Dean discusses his experience in integrative planning in our Forecaster in the Field interview.

In our section on Forecasting Practice, sales-forecasting specialist Mark Blessington challenges the conventional systems for setting sales quotas, which are based on annual business plans with sales targets for the corporation and its divisions and territories. In Sales Quota Accuracy and Forecasting, he reports evidence that

…quotas are better set on a quarterly rather than annual basis, and quarterly exponential smoothing methods yield far more accurate quotas than traditional quota-setting methods. However, firms must anticipate implementation barriers in converting from annual to quarterly quotas, as well as the possibility that sales representatives may try to game the system by delaying sales orders to maximize future bonus payouts.

Our Strategic Forecasting section addresses forces that transcend short-term forecasts to look beyond the current planning horizons. TechCast Global principals William Halal and Owen Davies present TechCast’s Top Ten Forecasts of technological innovations and social trends over the next 15 years and beyond.

They see several disruptive technological developments:

(1) Artificially intelligent machines will take over 30% of routine mental tasks. (2) Major nations will take firm steps to limit climate-change damage. (3) Intelligent cars will make up 15% of vehicles in less than 10 years. (4) The “Internet of Things” will expand rapidly to connect 30% of human artifacts soon after 2020.

Turn to page 50 of this issue for the rest of their top ten.


New book: Business Forecasting

Announcing New Book: Business Forecasting

Just in time for the new year, Business Forecasting: Practical Problems and Solutions compiles the field's most important and thought-provoking new literature into a single comprehensive reference for the business forecaster.

So says the marketing literature.

The real story? The book does pretty much that.

Together with my co-editors Len Tashman (editor-in-chief of Foresight: The International Journal of Applied Forecasting) and Udo Sglavo (Sr. Director of Predictive Modeling R&D at SAS), I've assembled 49 articles from many of our favorite and most influential authors of the last 15 years.

As we state in the opening commentary of chapter 1:

Challenges in business forecasting, such as increasing accuracy and reducing bias, are best met through effective management of the forecasting process. Effective management, we believe, requires an understanding of the realities, limitations, and principles fundamental to the process. When management lacks a grasp of basic concepts like randomness, variation, uncertainty, and forecastability, the organization is apt to squander time and resources on expensive and unsuccessful fixes. There are few endeavors where so much money has been spent, with so little payback.

Through the articles in this collection, and the accompanying commentary, we facilitate exploration of these basic concepts, and exploration of the realities, limitations, and principles of the business forecasting process. Throughout 2016 this blog will highlight many of the included articles.

The four main sections cover:

  • Fundamental Considerations in Business Forecasting
  • Methods of Statistical Forecasting
  • Forecasting Performance Evaluation and Reporting
  • Process and Politics of Business Forecasting

The book is now available from the usual suspects (which include anywhere that Pulitzer (if not Nobel) quality literature is sold).

The publisher has graciously provided a free sample chapter, so please take a look.


Worst practices in forecasting conference (October 5-6, 2016)

In conjunction with the International Institute of Forecasters and the Institute for Advanced Analytics at North Carolina State University, the 2016 Foresight Practitioner conference will be held in Raleigh, NC (October 5-6, 2016) with the theme of:

Worst Practices in Forecasting:

Today's Mistakes to Tomorrow's Breakthroughs

This is the first ever conference dedicated entirely to the exposition of bad forecasting practices. I am co-chairing the event along with Len Tashman, editor-in-chief of Foresight. Our "worst practices" theme reflects an essential principle:

The greatest leap forward for our forecasting functions lies not in squeezing an extra trickle of accuracy from our methods and procedures, but rather in recognizing and eliminating practices that do more harm than good.

As discussed so many times in this blog, we often shoot ourselves in the foot when it comes to forecasting. We spend vast amounts of time and money building elaborate systems and processes, while almost invariably failing to achieve the level of accuracy desired. Organizational politics and personal agendas contaminate what should be an objective, dispassionate, and largely automated process.

At this conference you'll learn what bad practices to look for at your organization, and how to address them. I'll be delivering the introductory keynote: Worst Practices in Forecasting, and the rest of the speaker lineup already includes:

  • Len Tashman - Foresight Founding Editor and Director of the Center for Business Forecasting: Forecast Accuracy Measurement: Pitfalls to Avoid, Practices to Adopt.
  • Paul Goodwin, coauthor of Decision Analysis for Management Judgment and Professor Emeritus of Management Science, University of Bath: Use and Abuse of Judgmental Overrides to Statistical Forecasts.
  • Chris Gray - coauthor of Sales & Operations Planning - Best Practices: Lessons Learned and principal of Partners for Excellence: Worst Practices in S&OP and Demand Planning.
  • Steve Morlidge - coauthor of Future Ready: How to Master Business Forecasting and former Finance Director at Unilever: Assessing Forecastability and Properly Benchmarking.
  • Wallace DeMent - Demand Planning Manager at Pepsi Bottling, responsible for forecast, financial, and data analysis: Avoiding Dangers in Sales Force Input to the Forecasts.
  • Anne Robinson - Executive Director, Supply Chain Strategy and Analytics, Verizon Wireless and 2014 President of the Institute for Operations Research and the Management Sciences (INFORMS): Forecasting and Inventory Optimization: A Perfect Match.
  • Erin Marchant - Senior Analyst in Global Demand Management, Moen: Worst Practices in Forecasting Software Implementation.

Stay tuned to the Foresight website for forthcoming specifics on registration, sponsorship, and a detailed agenda.

About the Conference Hosts

Founded in 2007, the Institute for Advanced Analytics at North Carolina State University prepares practitioners of analytics for leadership roles in our digital world. Its flagship program is the Master of Science in Analytics (MSA), which covers a wide spectrum of skills including data management and quality, mathematical and statistical methods for data modeling, and techniques for visualizing data in support of enterprise-wide decision making.

Now celebrating its 10th anniversary of publication, Foresight: The International Journal of Applied Forecasting is the practitioner journal of the nonprofit International Institute of Forecasters (IIF), a cross-disciplinary institute including analysts, planners, managers, scholars, and students across business, economics, statistics, and other related fields. Through its journals and conferences, the IIF seeks to advance and improve the practice of forecasting.

 


Why forecasting is important (even if not highly accurate)

"Why forecasting is important" gets searched over 100 times monthly on Google. Search results include plenty of rah-rah articles touting the obvious benefits of an "accurate forecast," but are of little help in the real life business world where high levels of forecast accuracy are usually not achieved. This is a troubling disconnect.

When can we forecast accurately?

Why forecasting is important? So we know when to get up in the morning.

We can forecast the exact time of sunrise tomorrow, and indefinitely into the future.

Why is this? Because we have good observational data (the relative positions and velocities of the sun and planets in our solar system). And we seem to have a pretty good understanding of the laws of physics that guide the behavior being forecast.

[For some interesting background, see How is the time of sunrise calculated?]

Climate and weather are guided by the same laws of physics (and also chemistry) that we seem to have a pretty good understanding of. Yet we can't expect to forecast long-term climate (or even near-term weather) as accurately as sunrise, for a number of reasons:

  • The climate is a much more complex system. It is not nearly so easy to model as a dozen or so heavenly bodies hurtling through space.

  • Our observational data is limited to a finite number of monitors and weather stations. So there are gaps in our knowledge of the complete state of the system at any given time. (We cannot monitor every single point of land, sea, and atmosphere.)

In addition, the assumptions upon which a long-term forecast is based can change over time. Decisions or actions taken in response to the original forecast may themselves change the outcome. [Snurre Jensen examines this phenomenon in "When poor forecast accuracy is a good thing."] For example:

  • A business increases advertising when the demand forecast falls short of plan.
  • Politicians take action to reduce CO2 levels based on a forecast of warming.

[Despite the implausibility of the second example, I want to commend Nate Silver's The Signal and the Noise for its extremely informative chapter on climate forecasting (and on forecasting in general).]

Why forecasting is important

Highly accurate forecasting, while always desirable, is rarely necessary. In fact, if your organizational processes NEED highly accurate forecasts to function properly, I suspect they don't function very well.

As long as forecasting can get you "in the ballpark," and thereby improve your decision making, it has demonstrated its value. Remember the objective:

To generate forecasts as accurate and unbiased as can reasonably be expected -- and to do this as efficiently as possible.

Why forecasting is important: because it at least gives us a chance at a better future.


Know your forecastability (and maybe save your job)

Larry Lapide receives 2012 Lifetime Achievement Award from Anish Jain of the IBF

Journal of Business Forecasting columnist Larry Lapide is a longtime favorite of mine. First as an industry analyst at AMR, and more recently as an MIT Research Affiliate, Larry has made his quarterly column a perpetual source of guidance for the practicing business forecaster. No wonder he received IBF's 2012 Lifetime Achievement in Business Forecasting award.

In the Fall 2015 issue, Larry takes another look at the hot topic of "forecastability" -- something he first touched on in his Winter 1998/99 column "Forecasting is About Understanding Variations."

In the earlier article, Larry introduced the metric Percent of Variation Explained (PVE), where

PVE = 100 x (1 - MAPE/MAPV)

In this formula, the Mean Absolute Percent Variation (MAPV) is simply the MAPE that would have been achieved had you forecast the mean demand every period. So when you compare the MAPE of your real forecasts to the MAPV, this provides an indication of whether you have "added value" by forecasting better than just using the mean.
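As a rough illustration of the calculation (the weekly demand and forecast numbers are made up; note that the mean-demand benchmark can only be computed after the fact, which is exactly the limitation discussed below):

```python
import numpy as np

demand   = np.array([100, 120, 90, 110, 130, 95, 105, 115])   # hypothetical weekly demand
forecast = np.array([105, 115, 95, 105, 125, 100, 100, 110])  # your real weekly forecasts

mape = 100 * np.mean(np.abs(demand - forecast) / demand)       # MAPE of your forecasts
mapv = 100 * np.mean(np.abs(demand - demand.mean()) / demand)  # MAPE had you forecast the mean every week
pve  = 100 * (1 - mape / mapv)                                 # PVE = 100 x (1 - MAPE/MAPV)

print(f"MAPE = {mape:.1f}%, MAPV = {mapv:.1f}%, PVE = {pve:.1f}%")  # PVE > 0 means you beat the mean
```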

In spirit this approach is very similar to Forecast Value Added (FVA) analysis (which compares your forecasting performance to a naive model). As I discussed in The Business Forecasting Deal (the book), PVE is analogous to conducting FVA analysis over some time frame and using the mean demand over that time frame as the naive or "placebo" forecast.

The benefit of Lapide's approach is that it provides a quick and easy way to answer a question like "Would we have forecasted better [last year] by just using the year's average weekly demand as our forecast each week?" However, this method is not meant to replace full and proper FVA analysis because it does not make a fair comparison; the forecaster does not know in advance what mean demand for the year is going to be. Mean demand is not an appropriate placebo forecast for FVA analysis because we don't know until the year is over what mean demand turns out to be. This violates a principle for selection of the placebo, that it should be a legitimate forecasting method that the organization could use. (p. 108)

In short, we could never use the mean demand for the year as our real-life operating forecast, because we don't know what the mean demand is until the year is over!

Larry's Fall 2015 JBF column looks at how forecastability can be used to segment an organization's product portfolio, and guide the efforts of the forecaster or planner. A number of different segmentation schemes have been proposed.

SAS forecasting customers may also wish to view the webinar Time Series Segmentation by Jessica Curtis of SAS, to see how to segment time series using SAS Forecast Server Client.

Why Knowing Forecastability Might Save Your Job

Management is fond of handing out performance goals, and inappropriate goals can get a forecaster in a lot of trouble. So it is essential for the forecaster to understand what forecast accuracy is reasonable to expect for any given demand pattern, and be able to push back when necessary.

Lapide argues that "sustained credibility" is the most important part of a forecaster's job review. This means management is willing to trust your analysis and judgment, that you are delivering the most accurate forecast that can reasonably be expected, even if the accuracy is not as high as they would like.

Being able to explain what is reasonable to expect -- even if it is not what management wants to hear -- can establish that credibility.

(For more information, see 5 Steps to Setting Forecasting Performance Objectives.)

 

 


FOBFD Greg Fishel makes headlines on climate change

WRAL Chief Meteorologist (and Friend Of the Business Forecasting Deal) Greg Fishel garnered national attention recently with a thoughtful (yet to some, provocative) blog post on climate change.

Greg Fishel with Fan Club on 2014 visit to SAS

In the post, Fishel chronicled his evolving thought on the subject. He argued for an end to the political partisanship that stifles meaningful discussion. And he appealed for us to approach the issue through science rather than ideology.

(May I have an "Amen"?)

Attention to the post blew up when the Washington Post Capital Weather Gang covered it. He has since been featured on the cover of the local Indy Week newspaper and interviewed on NPR.

The Fallout

Fishel's follow-up Facebook posting garnered some choice comments, as might be expected when one takes a stand against dogma and fanaticism -- my favorite of which accuses him of holding "Communist views."

(Readers, I know Greg Fishel. I have eaten lunch with Greg Fishel. Greg Fishel is a friend of mine. Greg Fishel is no Communist.)

Yet, encouragingly, there were far more words of support for reason, science, and the pursuit of truth. (Some even called for Fishel himself to enter politics -- although, in the opinion of this blogger, that would be a considerable step down from his current position at WRAL.)

When it comes to understanding complex systems, like the earth's climate -- or the impact of a promotion on the demand for our products -- the reality is we may never know for certain. So with science, and critical thought, and by taking an analytical approach to our problems, we are forced into a position of humility (instead of a position of OxyContin-induced bellicosity).

With science (in contrast to religion or partisan politics), no supposition can be taken as fact. Instead, we must constantly refine, test, and assess our beliefs. That's how we make progress. And that's how, through science, we enjoy such wonderful technological advances as the Avocado Saver and the Car Exhaust Grill. (The latter of which, by the way, makes practical use of an automobile's greenhouse gas emissions.)


Highlights from IBF Orlando

Last week I had the pleasure of attending (with six of my SAS colleagues) the IBF's Best Practices Forecasting Conference in Orlando. Some of the highlights:

In addition to the SupplyChainBrain interviews, Charlie and I hosted roundtable discussions, and delivered regular sessions (mine co-presented with Erin Marchant of Moen).

  • The "worst practices" roundtable had participants confessing their forecasting sins -- or at least reporting ones committed by "friends" or colleagues.

Erin Marchant did a fabulous job at our "Applying Forecast Value Added at Moen" presentation. While they are still early in their FVA efforts, she shared some extremely valuable insights from Moen's journey:

  • FVA analysis takes the emotion out of forecast discussions
    • Data now supports the conversation about how to improve the process
  • Sometimes you can't do better than the naive forecast
    • FVA helps you focus your forecast improvement efforts in the right places
  • FVA is a measure of your forecast process
    • NOT an "us vs. them" proposition
    • NOT a catapult to throw others "under the bus"
    • NOT an excuse to create the sales forecast in a silo
  • FVA starts a conversation that improves your forecasting process!

Erin also noted that because of FVA:

  • We are able to better pinpoint the causes of forecast error and start dialogues about those parts of our process.
  • We are able to begin conversations about our supply chain structure that could lead to more accurate demand signals and better service for our customers.

Learn more in Erin's prior interview on the IBF blog, and in her forthcoming SupplyChainBrain interview.

Special Thanks

Thanks to Eric Wilson of Tempur Sealy and the folks at Arkieva, who contributed the Tempur-Cloud® Breeze Dual Cooling Pillow that I won in the raffle at Eric's session.

My long-term forecast is for good sleeping.

And as always, IBF's Director of Strategic Relationships, Partnerships, Memberships, & Events, Stephanie Murray, gave SAS the royal treatment. For me personally, this included a stash of M&Ms (brown ones removed per contract) and mini-Tabasco sauce with every meal.

Stephanie's energy and behind-the-scenes organization are a huge (and under-appreciated) part of the success of these events.

 


Probabilistic load forecasting competition

Dr. Tao Hong

So you think you know how to forecast?

Now is your chance to prove it, by participating in a probabilistic load forecasting competition run by my friend (and former SAS colleague), Dr. Tao Hong.

Currently a professor at UNC Charlotte and director of the Big Data Energy Analytics Laboratory (BigDEAL), Tao is opening his class competition to students and professionals outside of his Energy Analytics course. See his October 12 Energy Forecasting blog post for information, including these competition rules:

  • The competition will start on 10/22/2015 and end on 11/25/2015.
  • The historical data will be released on 10/22/2015.
  • The year-ahead hourly probabilistic load forecast is due at 11:45am ET each Wednesday, starting from 10/28/2015.
  • The exam is individual effort. Each student forms a single-person team. No collaboration is allowed.
  • Students cannot use any data other than what's provided by Dr. Tao Hong and the U.S. federal holidays.
  • The pinball loss function is the error measure in this competition (see the sketch below).
  • The benchmark will be provided by Dr. Tao Hong. A student who neither beats the benchmark nor ranks in the top 6 of the class receives no credit.
  • No late submissions are allowed.
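For readers unfamiliar with the error measure, here is a minimal sketch of the pinball (quantile) loss for a probabilistic forecast expressed as 99 quantiles -- a common setup in energy forecasting competitions (the function and the toy numbers are mine, not part of the competition materials):

```python
import numpy as np

def pinball_loss(actual, quantile_forecasts, quantiles):
    """Average pinball loss of a set of quantile forecasts for one actual value.

    For quantile q and forecast f: loss = q*(actual - f) if actual >= f,
    otherwise (1 - q)*(f - actual). Lower is better.
    """
    q = np.asarray(quantiles)
    f = np.asarray(quantile_forecasts)
    loss = np.where(actual >= f, q * (actual - f), (1 - q) * (f - actual))
    return loss.mean()

# Toy example: quantile forecasts 0.01 .. 0.99 for one hour's load (in MW)
quantiles = np.linspace(0.01, 0.99, 99)
forecasts = 1000 + 200 * quantiles        # a crude, hypothetical predictive distribution
print(pinball_loss(1080.0, forecasts, quantiles))
```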

If you are interested in joining the competition, please email Dr. Tao Hong (at hongtao01@gmail.com) for more details.

  • About this blog

    Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.

    Mike is also the author of The Business Forecasting Deal, and co-editor of Business Forecasting: Practical Problems and Solutions. He also edits the Forecasting Practice section of Foresight: The International Journal of Applied Forecasting.