Brilliant forecasting article from 1957!!! (Part 3)

This isn't a brilliant article because we learn something new from it -- we really don't. But it is amazing to find, from someone writing in 1957, such a clear discussion of forecasting issues that still plague us today. If you can get past some of the Mad Men era words and phrasing, the article is wonderfully written and a fun read -- full of sarcastic digs at the forecasting practice.

In this final installment we'll look at Lorie's handling of forecasting performance evaluation.

Problem 2: The Evaluation of Forecasts

Lorie states there are two main problems in evaluating forecasts:

  1. Determining accuracy.
  2. Determining economic usefulness.

To solve these, he suggests three principles:

A. The Superiority of Written Forecasts

When forecasts are not recorded, the usual consequence is that they "seem to become more and more accurate as they recede into the past where memory is inexact and usually comforting." But even when written down, there is danger of ambiguity.

Lorie takes special aim at financial analysts and economic forecasters, who find it "distressingly easy" to use broad designations like "markets" or "business activity" or "sales." Of course, without a rigorous operational definition of such terms, the accuracy of the forecasts cannot be judged. "Their usefulness, however, can; their usefulness is negligible."

Lorie's position is largely in line with Nate Silver's recent critique of economic forecasting as an "almost complete failure."

In addition to recording forecasts in a way specific enough to be measured (typically product, location, time period, units), Lorie argues for recording the method used to generate the forecast:

The absence of a record of the forecasting method makes it extremely difficult to judge what has been successful and what unsuccessful among the techniques for peering into the future.

By method, I will interpret this to mean, at a high level, what forecasting process was used. For example,

STATISTICAL FORECAST ==> ANALYST OVERRIDE ==> CONSENSUS OVERRIDE

Over time we can determine whether these individual steps are making the forecast any better (or worse) than using a simple naïve model.
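To make this concrete, here is a minimal sketch (in Python; the field names are my own invention, not Lorie's) of a forecast record that satisfies both requirements -- specific enough to be scored later, and preserving the value produced at each step of the process:

    # One record per forecast: what/where/when/units, plus the forecast
    # produced at each step of the process, so each step can be evaluated.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ForecastRecord:
        product: str                    # what
        location: str                   # where
        period: str                     # when, e.g. "2015-03"
        units: str                      # unit of measure (cases, dollars, ...)
        statistical: float              # output of the statistical model
        analyst: float                  # after analyst override
        consensus: float                # after consensus override
        actual: Optional[float] = None  # filled in once the period closes

    rec = ForecastRecord("widget-A", "Chicago DC", "2015-03", "cases",
                         statistical=1200, analyst=1350, consensus=1300)

Once the actuals arrive, each column can be scored against a naïve benchmark -- the comparison taken up in the next section.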

B. The Statistical Evaluation of Forecasting Techniques

Today there is growing recognition that relative metrics of forecasting performance are much more relevant and useful than the traditional accuracy or error metric by itself.

For example, to be told "MAPE=30%" is only mildly interesting. By itself, MAPE gives no indication of how easy or difficult a series is to forecast. It doesn't tell us what error would be reasonable to expect for the given series, and consequently, does not tell us whether our forecasting efforts were good or bad.

It is only by viewing the MAPE in comparison to some baseline of performance (e.g., the MAPE of a naïve forecast), that we can determine the "value added" by our forecasting efforts. This is what relative metrics such as FVA let you do.
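As a minimal illustration (my own code and made-up numbers, using the common convention that FVA = naïve error minus process error):

    # MAPE for a process forecast vs. a naive (random walk) benchmark;
    # positive FVA means the process added value over the naive model.
    def mape(actuals, forecasts):
        return 100 * sum(abs(a - f) / a
                         for a, f in zip(actuals, forecasts)) / len(actuals)

    actuals = [100, 120, 90, 110]
    naive   = [95, 100, 120, 90]    # previous period's actual
    process = [105, 115, 100, 100]  # what our forecasting process produced

    fva = mape(actuals, naive) - mape(actuals, process)
    print(f"naive={mape(actuals, naive):.1f}%  "
          f"process={mape(actuals, process):.1f}%  "
          f"FVA={fva:.1f} points")   # naive=18.3%  process=7.3%  FVA=11.0 points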

Lorie gives an example: Each day the weather forecaster in St. Petersburg, Florida can forecast the following day's weather to be clear and sunny, and by doing nothing will be correct 95% of the time. The forecaster in Chicago, even using the latest technology and most sophisticated methods, will only get the following day's forecast right 80% of the time. So does this mean the St. Petersburg forecaster is more skilled at his profession than the Chicago forecaster? Of course not!

If there is a point to the preceding example, it is that the statistical evaluation of forecasting techniques must take account of the variability of the series being forecast...the forecasting task in Chicago is much more difficult.

What is desired is measurement of the "marginal" contribution of the forecasting technique. What is desired is an indication of the extent to which one can forecast better because of the use of the forecasting technique than would be possible by sole reliance on some simple, cheap, and objective forecasting device.

Lorie has provided an almost perfect description of FVA analysis. In essence, it is nothing more than the application of basic scientific method to the evaluation of a forecasting process.
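One simple way to put the two weather forecasters on equal footing (my own normalization, not Lorie's) is a skill score that measures improvement over the do-nothing baseline:

    # Skill relative to a no-skill baseline: 0 = no better than doing nothing.
    # The 50% Chicago baseline is an assumed number, just for illustration.
    def skill(accuracy, baseline):
        return (accuracy - baseline) / (1 - baseline)

    print(skill(0.95, 0.95))  # St. Petersburg: 0.0 -- no value added
    print(skill(0.80, 0.50))  # Chicago: 0.6 -- real marginal contribution

By this yardstick the Chicago forecaster, despite the lower hit rate, is the one adding value.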

C. The Economic Evaluation of Forecasts

There can clearly be asymmetry in the costs of our business decisions. For example, it makes sense to carry excess inventory on an item which costs us little to make and hold, yet yields huge revenue when sold. (Carrying too little inventory might save us a little on cost, yet we'd miss a lot of revenue on lost sales.)
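A quick sketch of that arithmetic (all numbers made up: $1 unit cost, $10 price, demand roughly normal around the unbiased forecast of 100):

    # Expected profit of a stocking decision, via simulation: with cheap
    # units and high margin, stocking above the unbiased forecast pays off.
    import random
    random.seed(1)

    def expected_profit(stock, cost=1.0, price=10.0, n=100_000):
        total = 0.0
        for _ in range(n):
            demand = max(random.gauss(100, 20), 0)  # assumed demand distribution
            total += price * min(stock, demand) - cost * stock
        return total / n

    print(round(expected_profit(100)))  # stock at the forecast: ~820
    print(round(expected_profit(120)))  # stock above the forecast: ~863 -- better

Note the bias is in the stocking decision, not in the forecast itself -- which is exactly the distinction drawn below.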

Lorie asserts:

A forecasting technique is judged to be superior to alternatives according to an economic evaluation if the consequences of decisions based upon it are more profitable than decisions based upon the alternatives.

This seems to be saying that it is ok to bias your forecasts in a direction that is more economically favorable, but I disagree. While it is appropriate to bias your plans and actions in a way that will provide a more favorable economic outcome (as in the example above), I would contend that the forecast should remain an "unbiased best guess" at what is really going to happen in the future.

I'm not convinced there can be an economic evaluation of the forecast. (Evaluating a forecast solely on accuracy, bias, and FVA may be sufficient). However, there should be an economic evaluation of the decision that was made.


Brilliant forecasting article from 1957!!! (Part 2)

Combining Statistical Analysis with Subjective Judgment (continued)

After summarily dismissing regression analysis and correlation analysis as panaceas for the business forecasting problem, Lorie turns next to "salesmen's forecasts."* He first echoes the assumption that we still hear today:

This technique of sales forecasting has much to commend it. It is based upon a systematic collection and analysis of the opinions of men who, among all the company's employees, are in closest contact with dealers and ultimate consumers.

But Lorie points out the "inherent deficiencies" of relying solely on sales force input, "for which it may be impossible to devise effective remedies." These are:

  • Unreasonably assumes that sales people have the "breadth and depth of understanding" of the pervasive influences on demand. (Do they have any skill at forecasting?)
  • Sales jobs turn over frequently, so sales people providing forecasts are often inexperienced, and we don't have enough data to determine their biases. (Will they give us an honest answer?)
  • Does not incorporate "competent statistical analysis" of historical sales data which could be combined with the sales force inputs.

Lorie also disses the use of consumer surveys as costly, impractical, and unproven to be of value except in limited circumstances.

Two Solutions

The message is not all negative. Lorie provides two solutions for combining statistics with judgment, the filter technique and the skeptic's technique. I'm not as much interested in the specific techniques as in his overall approach to the problem -- which in the filter technique is to focus on economy of process. Start "with an extremely simple and cheap process to which additional time and money are devoted only up to the point at which the process becomes satisfactory."

...the process provides an objective record of both sales forecasts and the methods by which they are made so that study of this record can be a means for continual improvement in the forecasting process.

(You can find details about the filter technique in the article.)

The skeptic's technique applies process control ideas, akin to Joseph & Finney's "Using Process Behaviour Charts to Improve Forecasting and Decision-Making" (Foresight 31 (Fall 2013), pp. 41-48). Starting with "limited faith" in the persistence of historical forces that affect sales:

  • Project future sales with a simple trend line.
  • Compute two standard deviations on each side of the line to create a range within which future sales should fall the vast majority of time (if historical forces continue to work in the same way).

Lorie points out that this work could be done by statistical clerks "whose rate of pay is substantially less than that of barbers or plumbers."

  • The forecaster then solicits company experts (who, "incidentally, usually receive substantially more than barbers or even plumbers").
  • If the expert's forecast falls within the range limits of the statistical forecast, it is accepted. If it is outside the limits, even after reconsideration (asking "the gods for another omen"), the forecaster has to decide what to do.

Lorie wryly points out that making a decision is something the forecaster has avoided up to this point.

For expert forecasts outside the statistical forecast limits, Lorie states:

...experience has indicated that the forecast in a vast majority of cases would have been more accurate if the experts' forecast had arbitrarily been moved to the nearest control limit provided by the statistical clerk rather than being accepted as it was.
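Here is a minimal sketch of the skeptic's technique as described above (the trend fit, the two-sigma limits, and the clamping follow the text; the code and data are mine):

    # Fit a simple trend line, build +/- 2 standard deviation limits around
    # the one-step-ahead projection, and clamp the expert's forecast to the
    # nearest limit if it falls outside.
    import statistics

    def skeptics_forecast(history, expert_forecast):
        n = len(history)
        xbar, ybar = (n - 1) / 2, statistics.mean(history)
        slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(history))
                 / sum((i - xbar) ** 2 for i in range(n)))
        intercept = ybar - slope * xbar
        projection = intercept + slope * n   # one step beyond the history
        sd = statistics.pstdev([y - (intercept + slope * i)
                                for i, y in enumerate(history)])
        lo, hi = projection - 2 * sd, projection + 2 * sd
        return min(max(expert_forecast, lo), hi), (lo, hi)

    history = [100, 104, 103, 108, 111, 113, 118, 117]
    print(skeptics_forecast(history, expert_forecast=150))
    # the expert's 150 is pulled back to the upper limit (~123.8)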

In Part 3 we'll look at Lorie's remarks on the evaluation of forecasts -- and his 1957 precursor to what we now call FVA!

---------------

*The role of the sales force in forecasting is the subject of my recent Foresight article (Fall 2014), and of a forthcoming presentation at the International Symposium on Forecasting (Riverside, CA, June 24-27).


Guest blogger: Len Tashman previews Winter 2015 issue of Foresight

*** We interrupt discussion of James H. Lorie's 1957 article with this important announcement ***

Len Tashman

Hot off the wire, here is editor Len Tashman's preview of the Winter 2015 issue of Foresight:

Foresight kicks off its 10th year with the publication of a new survey of business forecasters: Improving Forecast Quality in Practice. This ongoing survey, designed at the Lancaster Centre for Forecasting in the UK, seeks to gain insights on where the emphasis should be put to further upgrade the quality of our forecasting practices. Initial survey results, presented by Robert Fildes, Director of the Lancaster Centre, and Fotios Petropoulos, former member of the Centre, examine these key aspects of forecasting practice: organizational constraints, the flow of information, forecasting software, organizational resources, forecasting techniques employed, and the monitoring and evaluation of forecast accuracy.

The survey is an important update to that conducted more than a decade ago by Mark Moon, Tom Mentzer, and Carlo Smith of the University of Tennessee. In his Commentary on the Lancaster survey, Mark Moon applauds the broad focus of the survey but raises the issue of whether the "practicing forecasters" surveyed are "developers" or "customers" of the forecasts.

We often find a significant difference in perception between those who are responsible for creating a forecast and those who use the forecast to create business plans.

In our section on Collaborative Forecasting and Planning, Foresight S&OP Editor John Mello writes that S&OP can not only improve collaboration within an organization, but also “change the company’s operational culture from one that is internally focused to one that better understands the potential benefits of working with other companies in the supply chain.” His article, Internal and External Collaboration: The Keys to Demand-Supply Integration, identifies and compares several promising avenues of external collaboration, including vendor-managed inventory (VMI); collaborative planning, forecasting, and replenishment (CPFR); retail-event collaboration; and various stock-replenishment methods currently in use by major manufacturers and retailers. The critical factor, John finds, is trust:

These processes all require the sharing of information between companies, joint agreement on the responsibilities of the individual companies, and a good deal of trust between the parties, since the responsibility for integrating supply and demand is often delegated to the supplier.

In a Commentary on the Mello article, Ram Ganeshan and Tonya Boone point out that the challenges of external collaboration arrangements are much greater when we consider their Extension Beyond Fast-Moving Consumer Goods, especially those goods with short life cycles. For these products, they argue, a different mind-set is required to achieve demand-supply integration.

Financial Forecasting Editor Roy Batchelor distills the lessons forecasters should learn from the failures to predict and control our recent global financial meltdown. A 2014 International Monetary Fund (IMF) report, Financial Crises: Causes, Consequences, and Policy Responses, examined the world economies' 2007-09 financial crises to establish their causes and impacts, as well as the initiatives governments and central banks undertook to deal with them. The overall impression from this report, Roy writes in his review, entitled Financial Crises and Forecasting Failures, is that the authorities could have been speedier and more imaginative in their interventions in the financial sector, and that our forecasting models could have given a clearer picture of how economies might emerge from these crises. Roy probes into why the models didn't see the crisis coming, and what upgrades to the models' financial sectors might improve predictive performance in the future.

Jeffrey Mishlove’s Commentary on Roy’s review article argues that the real problem did not emanate from predictive failures, but rather from the inclination toward austerity that pervaded economic thinking, especially in Western Europe. Jeff says that, while he can’t argue with Roy’s conclusions that refinements in the scientific method and the gathering of empirical data are appropriate responses to financial crises, forecasts will always be vulnerable to confounding influences from unanticipated variables – no matter how much we refine and improve our methodologies.

Seasonality – intra-year patterns that repeat year after year – is a dominant and pervasive contributor to variations in our economy. But, as Roy Pearson writes in Giving Due Respect to Seasonality in Monthly Forecasting, the seasonal adjustments we make to economic data are poorly understood and lead to confusion in interpreting sales changes. Improved accounting for seasonality in monthly forecasts over 12-24 months can lead to better understanding of the forces behind sales forecasts, and very likely to some reduction in forecast errors.



Brilliant forecasting article from 1957!!!

Brilliant, humorous, and obscure. Those words could describe two of my favorite comedians, Emo Philips* and the late Dennis Wolfberg.

They could also describe, with the addition of "exceedingly" brilliant, "scathingly" humorous, and "apparently totally" obscure, a 1957 article, "Two Important Problems in Sales Forecasting" by James H. Lorie (The Journal of Business Vol. 30, No. 3 (July 1957), pp. 172-179).

Lorie is not an unknown. When the article appeared, he was Associate Dean at the University of Chicago School of Business. He is credited with creating the first database of stock exchange prices, allowing the type of stock analysis we take for granted today.

Yet according to Google Scholar, the article has been cited just 11 times (none since 1991), and never in any of the familiar forecasting journals or texts. I didn't find it last year while researching an article on the role of the sales force in forecasting (Foresight 35 (Fall 2014), pp. 8-13), and only came across it last week cited as a reference -- within a reference -- to Igor Gusakov's "Data-Cube Forecasting for the Forecasting Support System" (pp. 25-32 in the same issue of Foresight).

Problem 1: Combining Statistical Analysis with Subjective Judgment

Lorie first addresses the (still unresolved) challenge of "combining the wisdom of experienced businessmen with statistical analysis...in order to achieve better forecasts." (There is no mention of businesswomen, who apparently didn't exist until Peggy Olson on Mad Men.)

Lorie reviews and critiques the common statistical forecasting methods of the time: regression and correlation. (Recall that R.G. Brown's Exponential Smoothing for Predicting Demand was published just the year before.) Of the former,

Perhaps a more fundamental objection to regression analysis as a means for forecasting is that it merely transforms the forecasting problem from the dependent variable to the independent variables. It requires that the analyst forecast the levels of the independent variables such as national income or industry sales rather than the level of the dependent variable, sales of a particular company's product. There is certainly very little reason to believe that forecasters have been markedly more successful in forecasting the kinds of variables which are typically considered to be independent in forecasting equations than they have been in forecasting the variables which are considered dependent.

And of the latter,

In spite of the grave limitations of correlation analysis, it will undoubtedly continue to be widely used. One of the reasons is that it is one of the very few techniques which can be readily learned by people receiving low wages and which has the comforting -- albeit superficial -- appearance of "scientific" precision.

Lorie also notes, as is now accepted in many quarters, that

...it is unreasonable to expect that more complicated massaging of numbers according to conventional statistical techniques is likely to produce very much more successful results in the future.

A similar sentiment appeared in my favorite forecasting article of the 21st century (Makridakis & Taleb, "Living in a World of Low Levels of Predictability," International Journal of Forecasting Vol. 25, No. 4 (Oct-Dec 2009), pp. 840-844):

  • Statistically sophisticated, or complex, models fit past data well, but do not necessarily predict the future accurately...
  • "Simple" models do not necessarily fit past data well, but predict the future better than complex or sophisticated statistical models.

We'll continue the Lorie synopsis in the next post...

------------

*Philips is not so obscure among learned forecasters, as he was quoted in a 2013 Foresight article by Roy Batchelor: "A computer once beat me at chess. But it was no match for me at kickboxing." However, I have yet to find academic citations for Wolfberg's "The Bris" or "The Rigid Sigmoidoscopy."


ATM Replenishment: Forecasting + Optimization

Why do people steal ATMs? Because that's where the money is!!!

While the old "smash-n-grab" remains a favorite modus operandi of would-be ATM thieves, the biggest brains on the planet typically aren't engaged in such endeavors (see Thieves Steal Empty ATM, Chain Breaks Dragging Stolen ATM, An A for Effort).

And of course, as we learned in Breaking Bad, successfully stealing an ATM (but then insulting your crime partner), can have unfortunate mind-numbing consequences.

The ATM Replenishment Problem

Suppose you operate hundreds of ATMs, processing millions of customer transactions a month. You want to keep your customers happy (no out-of-cash or other down time situations), yet minimize the cost of restocking the machines.

It turns out that managing ATMs is even more difficult than stealing one, and this was the challenge faced by DBS Bank in Singapore. With a network of 1,100 ATMs, there is an ever-present threat of inconveniencing customers any time an ATM runs out of cash, or is otherwise out of service. Replenishment trips are costly (can you imagine the gas mileage on those armored trucks, even with oil under $50/barrel?). And when you reload an ATM that isn't running low on cash, you lose in two ways (wasting resources on an unnecessary trip, and temporarily making the ATM unavailable to customers while it is being reloaded).

Fortunately there are bigger brains than the criminals thinking about the ATM replenishment problem. With the help of my colleagues from SAS Advanced Analytics R&D, DBS solved their problem and received top honors from the Singapore government for Most Innovative Use of Infocomm Technology. (See this write-up from Analytics magazine.)

Forecasting + Optimization

ATM replenishment is a perfect example of combining two areas of advanced analytics, forecasting and optimization. For DBS Bank, the first step was to understand withdrawal activity. Withdrawal rate is impacted by many factors, such as location, day of week, day of month, and time of day, and can be dramatically impacted by holidays or other special events.

Once you have a reasonably reliable forecast of customer activity at each ATM location, the next step (which helped DBS win the honors) is to convert the forecast into a daily execution plan for optimal reloading at just the right time. Since implementing the solution, DBS has been able to reduce cash-outs by 90%, reduce the number of customers impacted by the reloading process by 350,000 versus prior year, reduce the amount of returned cash (that was leftover in the ATM when it was reloaded) by 30%, and reduce the number of costly replenishment trips by 10%!
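DBS's actual models are proprietary, but a toy version of the forecast-then-optimize idea might look like this (a simple threshold policy; all names and numbers below are mine):

    # Given forecast daily withdrawals, schedule a reload whenever the
    # projected balance would breach a safety buffer.
    def reload_days(forecast, opening_balance, capacity, safety=10_000):
        balance, trips = opening_balance, []
        for day, withdrawals in enumerate(forecast):
            if balance - withdrawals < safety:  # buffer breached today
                balance = capacity              # reload before opening
                trips.append(day)
            balance -= withdrawals
        return trips

    forecast = [8_000, 12_000, 9_000, 25_000, 7_000, 30_000, 11_000]
    print(reload_days(forecast, opening_balance=40_000, capacity=60_000))  # [3, 5]

A real solution would optimize across the whole ATM network and the truck routes at once, but the division of labor is the same: the forecast supplies the demand picture, and the optimization turns it into an execution plan.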

There are plenty of applications of forecasting + optimization outside ATM replenishment. For example, any company operating multiple production or distribution sites (or considering opening new ones) could benefit from a similar approach. First, get a good understanding of the timing and geographical location of customer demand. Then, optimize the placement of facilities or production lines. Revenue management, used by airlines and hotels to dynamically adjust pricing, is another example.



FVA Interview with Jonathon Karelse

Jonathon Karelse

In December the Institute of Business Forecasting published the first of a new blog series on Forecast Value Added. Each month I will be interviewing an industry forecasting practitioner (or consultant/vendor) about their use of FVA analysis.

The December interview featured Jonathon Karelse, co-founder of NorthFind Partners. Among his key points:

  • Utilize metrics weighted by profit. With limited time and resources, this focuses your attention on actions that impact earnings.

You may not always have reliable data on margin / profit. But if you can trust the numbers, this is a great way to direct your improvement efforts to those products that make the most difference. (Extremely low volume / low revenue / low margin items may not be worth spending any effort on.)

  • Don't overlook measuring forecast bias.

Company forecasts are often overly optimistic. But Jonathon points out the situation where chronic supply shortages have led Sales forecasters to chronically under-forecast (not wanting their targets tied to numbers they don't believe can actually be supplied). This can potentially perpetuate the shortages.

  • Compare performance to a naïve model.

The traditional random walk may be considered too simplistic, so Jonathon suggests using a seasonal random walk, simple exponential smoothing, or a moving average. While I suggest always utilizing the random walk as the ultimate point of comparison, I agree that other extremely simple models are appropriate to use for comparison (and they often forecast reasonably well). Early in my career, in a very stable low-growth business, we compared our forecasts to a 52-week moving average. (Minimal sketches of these benchmark models appear after this list.)

  • The FVA metric resonates with management.

FVA is easy to understand, and can be a key metric for root cause analysis and corrective action.

Jonathon uses a deseasonalized CV to reduce the risk of false positives. (While high CV generally implies lower forecastability, a highly seasonal item will have high CV but can be quite forecastable if its patterns are consistent and repeating.)

  • Discourage arbitrary performance goals (such as MAPE < 20%).

Focus on improvement. (Here's how to set performance objectives.)
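For reference, here are minimal one-step-ahead versions of the benchmark models mentioned above (my own code, not Jonathon's):

    # Four simple benchmarks: if your process can't beat these,
    # it is subtracting value.
    def random_walk(history):
        return history[-1]                 # tomorrow = today

    def seasonal_random_walk(history, season_length=12):
        return history[-season_length]     # same period last cycle

    def moving_average(history, window=52):
        window = min(window, len(history))
        return sum(history[-window:]) / window

    def simple_exp_smoothing(history, alpha=0.2):
        level = history[0]
        for y in history[1:]:
            level = alpha * y + (1 - alpha) * level
        return level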

Find the full interview at www.demand-planning.com, and don't miss the money quote:

FVA is easy! If you aren’t using it, you are missing a critical indicator of your organization’s forecasting performance.

Meet Jonathon Karelse at IBF Conference

You can meet Jonathon in person next month, February 22-24, at IBF's Supply Chain Forecasting Conference in Scottsdale, AZ. He will be co-presenting The Art and Science of Forecasting: When to Use Judgment and Forecast Value Add (FVA) Analysis with his client Finning South America.

Coming soon, IBF will post the January FVA interview with Shaun Snapp of SCM Focus.


Forecasting research project ideas

There are some things every company should know about the nature of its business. Yet many organizations don't know these fundamentals -- either because they are short on resources, or their resources don't have the analytical skills to do the work.

The summer research projects offered by the Lancaster Centre for Forecasting offer a cost-effective way to get yourself some answers.

Project Ideas

If you haven't done these things already, here are a few of my personal favorite projects to get started:

  • Compare your last year of forecasting performance to a naïve model.

This is the start of any forecasting improvement endeavor -- find out how you are doing today. Don't compare your performance to industry benchmarks; those are irrelevant. Find out whether your process performs at least as well as a simple method, such as a random walk or moving average forecast. (And don't be surprised to learn you are forecasting worse!)

  • Evaluate the volatility of demand for your products or services.

The Coefficient of Variation is a crude and imperfect, yet still useful, indicator of the "forecastability" of your demand patterns. Low CV implies that you should be able to forecast fairly accurately with simple methods. High CV implies that you probably can't expect to forecast as accurately -- although some high CV patterns (e.g. something with lots of seasonality but stable, repeating patterns) can be forecast well. (A computational sketch of these first projects appears after this list.)

  • Create the "comet chart" relating volatility to forecast accuracy.

Get a visual summary of your forecasting challenges by seeing how volatility and forecast accuracy are related. Use this as motivation to find ways to reduce the volatility of demand patterns.

  • Map out your forecasting process, and review the last year of forecasts at each step of the process (e.g. statistical forecast, analyst override, consensus override, executive approved forecast).

If, like many companies, you aren't recording the data at each step, then start doing so. Use FVA to determine which steps and participants in the forecasting process are tending to make it better. And eliminate those process steps that are just making it worse. (For more information, view the Foresight/SAS Webinar, "FVA: A Reality Check on Forecasting Practices.")

  • Replicate Steve Morlidge's analyses of forecast quality.

In a series of articles published in Foresight, Morlidge defined the "avoidability" of forecast error, and illustrated the value of a RAE (relative absolute error) metric for evaluating performance. Read the Foresight articles, find discussion of Morlidge's methodology in several earlier BFD posts (such as this one), and view his recording from the Foresight/SAS Webinar series, "Avoidability of Forecast Error".
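To show how little code these first projects require, here is a sketch on a toy dataset (all code and numbers are mine; RAE here follows Morlidge's idea of forecast error relative to random-walk error):

    # Per-item CV, MAPE, and RAE vs. a random-walk naive forecast. Plotting
    # each item's (CV, MAPE) pair gives the comet chart. For seasonal items,
    # deseasonalize before computing CV to avoid overstating volatility.
    import statistics

    def cv(series):
        return statistics.pstdev(series) / statistics.mean(series)

    def mape(actuals, forecasts):
        return 100 * sum(abs(a - f) / a
                         for a, f in zip(actuals, forecasts)) / len(actuals)

    def rae(actuals, forecasts, last_known):
        naive = [last_known] + actuals[:-1]    # random walk, one step ahead
        mae = lambda fs: sum(abs(a - f) for a, f in zip(actuals, fs)) / len(actuals)
        return mae(forecasts) / mae(naive)     # below 1.0 means value added

    history   = [100, 98, 105, 110, 102, 99]
    actuals   = [104, 108, 101]
    forecasts = [103, 105, 106]
    print(f"CV={cv(history):.2f}  MAPE={mape(actuals, forecasts):.1f}%  "
          f"RAE={rae(actuals, forecasts, history[-1]):.2f}")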

Doing these will give you a good foundation on which to do further research...


Do you have a forecasting research project?

The Lancaster Centre for Forecasting is seeking Master's student projects in Forecasting, Data Mining, or Analytics for the summer of 2015.

Projects normally run from mid-May to mid-August, with reports issued a few weeks after. These projects are a cost-efficient way for a company to carry out analytical work by Master of Science candidates who are formally trained in forecasting. Many of the students have additional skills in areas like marketing analytics, logistics and supply chain, operations research and optimization, and simulation.

Costs are GBP 2,900 (about $4,525 USD) for a single MSc student for four months, plus travel expenses. Discounts are available for additional students, non-profits, and SMEs.

Students will normally work on-site at your organization, under the joint supervision of a project leader at your company and a forecasting expert from the Forecasting Centre. For a well-structured problem, the student may remain in Lancaster with no or only occasional site visits.

For more information, and to discuss potential topics, contact:

Dr. Sven F. Crone

Assistant Professor in Management Science (Lecturer) & Director, Lancaster Research Centre for Forecasting

Lancaster University Management School, Room A53a, Lancaster, LA1 4YX  | T: +44 (0)1524 5-92991 | F: +44 (0)1524 844885 | M: +49 (0)171 4910100 | W: www.lums.lancs.ac.uk/forecasting | E: s.crone@lancaster.ac.uk


Alfred Hitchcock and a classic forecasting scam

The Forecasting Savant

Suppose you received an email from a self-proclaimed forecasting savant, advising you of a big upset in the upcoming mayoral election...and it turns out to be correct.

You then get an email picking the underdog in the next championship boxing match...which is right again.

Over the course of a few weeks you receive four more such emails, predicting outcomes that turn out to be true.

After six correct picks in a row, this savant has proven his ability, wouldn't you think? And he certainly deserves a cut of earnings on the wagers you've started placing on these forecasts, as well as a healthy gratuity for the forthcoming 7th prediction, a stock pick which is promised to make you rich.

But has the savant established his skills as a forecaster, and does he deserve a big advance for his 7th prediction? Does it make sense to take a large position in the recommended stock?

What Would Alfred Hitchcock Do?

The above scenario (substituting mail for email) plays out in Season 3, Episode 2 of the 1950s television show "Alfred Hitchcock Presents." In the episode, entitled "Mail Order Prophet," E.G. Marshall plays the gullible (but so far money-winning) recipient of the prophet's prophecies, while Jack Klugman plays the skeptical office mate.

Klugman warns Marshall early on, "Don't be an idiot! Nobody can predict the future -- it's impossible." But after failing to act on the first two letters, Marshall is ahead over $1,000 placing wagers on the last four, winning them all. He seeks out Klugman's advice on the latest letter, in which the supposed savant requests remuneration for his correct predictions, and promises his next tip is for a stock that will grow 10-fold.

After commenting on Marshall's natty new suit ("Seersucker no doubt? For every seer there's gotta be a sucker") Klugman continues his rant against forecasting, "Predicting the future is a scientific impossibility!" and warns that this is some kind of scam. Yet Marshall remains convinced of his savant's infallible prescience, and "borrows" $15,000 of company funds to buy $30,000 worth of Athabasca Mines on margin.

How You Can Become a Forecasting Savant

Have you figured it out (or seen this scam before)? It goes like this:

1) Send out thousands of emails projecting one outcome to half the recipients, the opposite outcome to the other half. (E.g., incumbent wins or challenger wins.)

2) To the half who received the correct prediction, send another email, again projecting one outcome to half the recipients, and the opposite outcome to the other half.

3) Repeat...each time you'll shrink your pool of "suckers" by half, but those who remain will be all the more convinced of your future-seeing powers, having seen you be correct every time.

4) Monetize the process, requesting a gratuity for all the correct predictions you've made so far, with the offer of an additional forecast that will make them huge money.

5) Pocket your gratuities and flee.
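The arithmetic makes the scam's economics clear. Starting from a made-up pool of 64,000 recipients:

    # Each round halves the pool; after six picks, the survivors have seen
    # nothing but correct predictions.
    pool = 64_000
    for pick in range(1, 7):
        pool //= 2
        print(f"after pick {pick}: {pool:,} recipients have a perfect record")
    # after pick 6: 1,000 recipients -- each one a potential E.G. Marshall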

Jack Klugman's Critique of Forecasting

While I'm sympathetic to Klugman's critique ("Any intelligent man knows you can't predict the future"), it needs to be tempered just a bit. As forecasters we cannot expect to predict the future with 100% accuracy. But we can still deliver value to our organizations by assessing the uncertainty of the future, and providing guidance that can lead to improved decision making.

Paul Goodwin's recent Foresight/SAS Webinar provides many good ways of expressing forecast uncertainty, including ranges, prediction intervals, fan charts, and probability density charts. The recorded webinar "Getting Real About Uncertainty," and all previous webinars in the Foresight/SAS Webinar Series, are available for on-demand review.



SAS forecasting and econometrics "tip of the week"

SAS Support Communities provide a forum in which to engage and share with your fellow SAS experts, now in over 20 topic areas including Forecasting and Econometrics. This is the go-to website for your hard core modeling questions, such as "Holdouts in PROC ARIMA," "2-stage Heckman (1979) procedure," and the perennially popular "Moving average forecast."

Ken Sanford explaining something complicated

My colleague Ken Sanford just announced a new "tip of the week" series for SAS forecasting and econometrics users on the Communities page. Ken provided a tip on controlling your output in the SAS Output Delivery System (ODS).

This week Bobby Gutierrez of the SAS/ETS development team provided a tip on Fixed vs. Random Effects in Panel Data.

If you get the idea that SAS software can help you build just about any kind of model you want, with data as big as you can get, you are correct.

But if you don't have the time, inclination, or skill set to create your own custom models, let the software do it for you. SAS Forecast Server provides large-scale automatic forecasting, and can combine with SAS Grid Manager to spread the workload and handle the biggest forecast challenges. (See how Wyndham Exchange & Rentals utilizes grid computing to generate 1.4 billion forecasts.)

For businesses operating on a more modest scale, SAS Forecasting for Desktop provides a cost-effective entry point to automatic forecasting.

Here is a 5-minute demonstration of the user interface shared by Forecast Server and Forecasting for Desktop.

If you are already a SAS forecasting or econometrics user, you are encouraged to join the community, post your questions, and share your expertise. And if you aren't already a SAS user, come see what you've been missing.

About this blog

Michael Gilliland is a longtime business forecasting practitioner and currently Product Marketing Manager for SAS Forecasting. He initiated The Business Forecasting Deal to help expose the seamy underbelly of the forecasting practice, and to provide practical solutions to its most vexing problems.