Coyotes, Cougars, and An Operational Definition of "Demand"


Sorry about not getting a post out last week, but I spent a good part of it cowering under my desk in fear. The SAS Security office issued a warning that there were wild coyotes roaming the campus, and I was having post-traumatic flashbacks to a painful encounter I once had with a cougar during my late teens. While coyotes can be wily and ill-tempered, I now realize they aren’t nearly as aggressive as cougars, so there was little to fear. (Fortunately I don’t have to fear cougars anymore either, because at my age they no longer consider me an appetizing target.)

That brings us to today’s topic, creating an operational definition of “demand.” Those of us in forecasting use the word demand every day, and we don’t think much about it – it seems pretty straightforward. Demand is commonly characterized as “what the customers want, and when they want it,” sometimes with the added proviso “at a price they are willing to pay, along with any other products they want at that time.” Sounds good to me.

When we refer to demand, we really mean “unconstrained” or “true” demand, because it takes no account of our ability to fulfill it. We use the phrase “constrained demand” to describe what’s left after incorporating any limitations on our ability to provide the product or service demanded. Thus, constrained demand ≤ demand. So far so good -- do I really need to devote a blog entry to something so self-evident?

This definition of “demand” is not problematic until we try to operationalize it, that is, when we start to describe the specific, systematic way to measure it. This is kind of important if we ever expect to measure the accuracy of our “demand forecast.” We need to know what actual demand really was!

Let’s work through an example I originally wrote about in the Summer 2003 issue of the Journal of Business Forecasting, dealing with this issue at a manufacturer.

If customers place orders to express their “demand,” and if the company services its customers perfectly by filling all orders in full and on time, then we have our operational definition. In this case, demand = orders = shipments. If both order and shipment data are readily available in the company’s system, then we have the historical demand data, which we can use to feed our statistical forecasting models and measure our forecasting performance.

Unfortunately, few organizations service their customers perfectly, and when customer service is less than perfect, orders become subject to all kinds of gamesmanship. As a result, orders are not a faithful reflection of true demand. Here are a few examples:

1. An unfilled order may be rolled ahead (carried over) to a future time bucket.
2. If shortages are anticipated, customers may artificially inflate their orders to capture a larger share of an allocation.
3. If shortages are anticipated, customers may withhold orders, or direct their demand to alternative products or suppliers.

In the first example, demand (the order) appears in a time bucket later than when it was really wanted by the customer. Rolling unfilled orders causes demand to be overstated -- the orders appear in the original time bucket, and again in future buckets until the demand is filled or the order is cancelled.

In the second example, the savvy customer (or sales rep) has advanced knowledge that product is scarce and will be allocated. If the allocation is based on some criterion such as “fill all orders at X%,” the customer simply over-orders and ultimately may receive what it really wanted in the first place.
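As a quick illustration of that arithmetic (the 80% fill rate and the quantities here are invented, not from the article), a customer who knows the allocation rule can back into exactly the order size needed:

```python
# Hypothetical numbers: an announced allocation fills all orders at 80%.
fill_rate = 0.80

truly_wanted = 100                         # the customer's true demand
inflated_order = truly_wanted / fill_rate  # order 125 units instead of 100
received = inflated_order * fill_rate      # allocation nets the 100 truly wanted

print(inflated_order)  # 125.0
print(received)        # 100.0 -- yet the order history records "demand" of 125
```

The customer walks away whole, while the historical order data overstates true demand by 25%.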

The third example not only contaminates the use of orders to reflect true demand, but it can also cause significant financial harm to your business. If you are in a situation of chronic supply shortages (due to either supply problems or much higher than anticipated demand), customers may simply go elsewhere. Customers may truly want your product (so there is real demand), but it won’t be reflected in your historical data because no orders were placed. While orders are often perceived as “equal to or greater than” true demand, this third example shows that what is ordered may also be less than true demand.

As with orders, the use of shipments to represent demand has a number of potential problems. Shipments are often perceived as “equal to or less than” true demand. Thus, shipments and orders are thought to represent true demand’s lower and upper bounds. But, as we see in example 3, orders can be lower than the true demand. Furthermore, by example 1, shipments can actually be greater than true demand. (This would occur when an unfilled order is rolled ahead into a future time bucket and then filled. In this situation the shipment occurs later than the true demand, and inflates demand in the time bucket in which it is finally shipped.)
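Both distortions show up in a small simulation. The weekly figures below are hypothetical, invented purely for illustration: rolled orders reappear as "new" orders and inflate the recorded order history, while a backlog filled late makes shipments in one bucket exceed that bucket's true demand.

```python
# Hypothetical weekly data: true demand vs. what the order and
# shipment history would actually record when unfilled orders roll ahead.

true_demand = [100, 50, 70]    # what customers really wanted, per week
capacity    = [60, 80, 100]    # what we were able to ship, per week

recorded_orders, shipments = [], []
backlog = 0
for wanted, cap in zip(true_demand, capacity):
    placed = wanted + backlog        # rolled orders reappear as "new" orders
    recorded_orders.append(placed)
    shipped = min(placed, cap)
    shipments.append(shipped)
    backlog = placed - shipped       # the unfilled portion rolls ahead

print(recorded_orders)  # [100, 90, 80] -- total 270 vs. true demand of 220
print(shipments)        # [60, 80, 80]  -- week 3 ships 80 vs. true demand of 70
```

Note that because the backlog clears by week 3, total shipments (220) happen to match total true demand, but the timing is wrong: week 3's shipments overstate that bucket's true demand.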

The planning process starts with a compilation of unconstrained or “true” demand history. We feed that into our statistical software and generate an “unconstrained forecast.” In order to measure our forecasting performance, we need to know the actual demand that really occurred. Herein lies the problem. There may be no way of knowing what true demand really was. The manufacturer knew what orders it received, and knew what shipments it made. But neither orders, nor shipments, nor any other readily available data element is the same as true demand. Therefore, we cannot measure the accuracy of our unconstrained forecast.

The lesson is to always report forecasting performance against the constrained forecast. The constrained forecast represents what the manufacturer actually expects to ship, what the retailer actually expects to sell, and what the service organization actually expects to provide. The constrained forecast is our best guess at what is really going to happen, and we can measure what truly does happen. While we can measure the shipments, the sales, or the services provided, we cannot with certainty measure what was truly demanded.
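For instance, accuracy against the constrained forecast can be computed directly from shipment actuals. Here is a minimal sketch using MAPE (the forecast and shipment figures are hypothetical):

```python
# Hypothetical figures: constrained forecast (what we expected to ship)
# versus actual shipments (which, unlike true demand, we can measure).

constrained_forecast = [60, 75, 85]
actual_shipments     = [60, 80, 80]

# Mean Absolute Percent Error across the three periods
mape = sum(abs(f - a) / a
           for f, a in zip(constrained_forecast, actual_shipments)) / 3
print(f"MAPE: {mape:.1%}")  # MAPE: 4.2%
```

The same calculation against "true demand" is impossible, because there is no data element to plug in for the actuals.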

About Author

Mike Gilliland

Product Marketing Manager

Michael Gilliland is a longtime business forecasting practitioner and formerly a Product Marketing Manager for SAS Forecasting. He is on the Board of Directors of the International Institute of Forecasters, and is Associate Editor of their practitioner journal Foresight: The International Journal of Applied Forecasting. Mike is author of The Business Forecasting Deal (Wiley, 2010) and former editor of the free e-book Forecasting with SAS: Special Collection (SAS Press, 2020). He is principal editor of Business Forecasting: Practical Problems and Solutions (Wiley, 2015) and Business Forecasting: The Emerging Role of Artificial Intelligence and Machine Learning (Wiley, 2021). In 2017 Mike received the Institute of Business Forecasting's Lifetime Achievement Award. In 2021 his paper "FVA: A Reality Check on Forecasting Practices" was inducted into the Foresight Hall of Fame. Mike initiated The Business Forecasting Deal blog in 2009 to help expose the seamy underbelly of forecasting practice, and to provide practical solutions to its most vexing problems.
