Getting started with Supply Chain Segmentation

“All unsuccessful segmented supply chains are alike; each successful supply chain is successful in its own way.” ― Leo Tolstoy Sadovy

Segmentation is the new big thing in supply chain management, or at least it’s an old big thing made new again.  It was the keynote topic at last month’s IE Group Supply Chain Summit in Chicago, and is typically addressed by at least a couple of speakers at every supply chain conference I’ve seen lately.

The complexity of customer expectations and service levels, your product portfolio, the global supply chain and varied distribution channels, coupled with the internet and social media, makes moving from an undifferentiated to a segmented supply chain almost an imperative, even though doing so adds a layer of complexity that many manufacturing companies are not ready for.  To read the recent literature on the topic, when you start trying to combine segmentation based on your products with segmentation based on your customers, it goes from merely complicated to overly complex in a heartbeat.

Here’s a short list of just a few of the various segmentation strategies and permutations to consider:

  • Product-driven segmentation:
    • Large volume, long production runs, standardized operations
    • Limited editions, fluctuating demand
    • Made-to-order, low volume, short runs, high margin (high cost-to-serve?)
  • A volume / variability 2x2 matrix
    • High volume commodities
    • High volume seasonal or promotional items
    • Low volume, predictable
    • Low volume specialty or custom orders
  • A typical three-segment retail-oriented model:
    • Regular replenishment
    • Seasonal, but predictable demand (swimwear, lawn fertilizer)
    • Volatile, one-off demand (fashion, new products, promotions)
  • Customer-based segmentation – many ways to do this:
    • Standard, higher quality, or premium service / customization
    • By channel
    • By lead-time service level (build-to-stock, configure-to-order, build-to-order)
    • By customer size, volume or value
    • Other customer characteristics, such as vendor managed inventory, level of data and forecast/POS integration / collaboration, SLA penalties or geography
  • Risk-oriented segmentation, based on political, environmental or economic risk/disruption factors, and on product lifecycle stage considerations
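
To make the 2x2 volume/variability matrix above concrete, here is a minimal sketch of how SKUs might be bucketed into its four segments. The thresholds, field names and sample SKUs are all invented for illustration – one possible starting point, not a prescribed method:

```python
# Hypothetical sketch: bucket SKUs into the volume/variability 2x2 matrix.
# The thresholds and segment names are illustrative assumptions.
def classify_sku(annual_volume, demand_cv,
                 volume_threshold=10_000, cv_threshold=0.5):
    """Assign a segment from annual volume and demand coefficient of variation."""
    high_volume = annual_volume >= volume_threshold
    stable = demand_cv < cv_threshold
    if high_volume and stable:
        return "high-volume commodity"
    if high_volume:
        return "high-volume seasonal/promotional"
    if stable:
        return "low-volume predictable"
    return "low-volume specialty/custom"

# (annual volume, coefficient of variation) per SKU -- invented sample data
skus = {
    "SKU-001": (250_000, 0.12),  # staple item
    "SKU-002": (40_000, 1.10),   # promotional item
    "SKU-003": (1_200, 0.25),    # steady niche product
    "SKU-004": (300, 1.60),      # custom order
}
segments = {sku: classify_sku(vol, cv) for sku, (vol, cv) in skus.items()}
```

In practice the thresholds would come from your own demand history, not round numbers, but even this crude cut gives each segment a distinct planning policy to hang decisions on.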

I am a practical sort, concerned primarily with execution.  I want to make Pareto’s Law work for me and go after the low-hanging 80% that only requires 20% of the effort, and I want that first demonstrable success.  Lastly, I would be well advised to dust off the old adage – keep it simple, stupid – and that list of possible segmentation models above looks anything but simple.

The conference keynote case study mentioned above concerned a multinational alcoholic beverage company that was trying to balance the production needs of large volume, stable, established brands with the flexibility needed in a surprisingly innovative market that sees several hundred new products introduced every year.  Their big breakthrough was to move from a one-plant/one-brand, one-line/one-product practice (largely inherited via multiple acquisitions over the years) to an agile approach where each line in each plant could handle any combination of product, bottle, label or packaging.  For example, before the changeover, there were some labels that had to be spun on clockwise, and other labels counterclockwise, which just by itself cut the number of available production lines in half.

With that in mind, and based on the success stories and key takeaways I’ve seen presented or in print, I think I’d approach my first supply chain segmentation project in the following manner:

  1. Get a good understanding of my cost-to-serve.
  2. Employ analytic forecasting.
  3. Take a product-oriented approach to the supply chain segmentation.
  4. Deal with my customer segmentation opportunities via inventory and service policy.

Breaking these down a bit further:

  1. Cost-to-serve. Before I do anything, I want accurate product, process, customer and channel costs on which to base my decisions, informed by a cost and profitability management solution that gives me cost output I can trust.
  2. Analytic forecasting. Because it all starts with the forecast. It can only get worse from there. Start higher in order to finish higher.
  3. Product-oriented approach. Yes, it’s inside-out thinking, but it seems to be where all the successful segmentation projects started from. It’s easier to understand and control than either working back from the customer or trying to bite off the entire holistic supply chain in one go.
  4. I’m still going to have to deal with customer and channel differences. What if a high-value customer wants a low-value product? We all know how that story ends – Lola gets what Lola wants. I need to accommodate my premium customers through some post-production combination of inventory policy, customer service/care, and order allocation/commitment process.

I can, however, imagine several scenarios where I might have to start from the customer and work backwards, such as having the federal government as a customer (where mil-spec products might necessitate a holistic supply chain approach all the way back to the farthest tier-n supplier), or when you have significantly different classes of customers who buy through distinctly separate channels. But for all practical purposes, you aren’t going to get one specific segmentation scheme that meets both all of your operational priorities and all your high-priority customer needs (and mitigates all your major supply chain risks).

One final bit of advice from the experts can be summed up as:  One physical supply chain with multiple virtual segmented supply chains running against it.  These virtual supply chains are distinguished by policy, not by brick-and-mortar – inventory, sourcing, production, fulfillment, logistics and service policies.  Because it’s easier to change policy than to change concrete and steel.

As nearly every supply chain expert stresses, one size does not fit all.  You need to select a segmentation strategy that’s right for your business.  But please do select just one appropriate strategy, not some unworkable hybrid. Unsuccessful supply chains are alike in that they tend to be more complex than they have to be.

Post a Comment

Big Silos: The dark side of Big Data

The bigness of your data is likely not its most important characteristic. In fact, it probably doesn’t even rank among the Top 3 most important data issues you have to deal with.  Data quality, the integration of data silos, and handling and extracting value from unstructured data are still the most fertile fields for making your data work for you.  [And if I were to list a fourth data management priority it would be, as I described in this previous post (“External data: Radar for your business”), the integration of external data sources into your business decision support process.]

Data Quality:  The bigger the data, the bigger the garbage-in problem, which scales linearly with data volume.  Before you can extract value from the bigness of the data, you need to address the quality of the data itself.  If you haven’t been employing robust, scalable data quality tools, now would be the time.

Have we gotten any better at data quality? My personal, one-sample survey would indicate that we have not.  With a relatively unusual last name, Sadovy, although only six letters, I’ve seen it misspelled over two dozen different ways in my life, and I thought I’d seen them all by my mid-40s.  But once my three children became college-aged and started receiving daily credit card offers in the mail, several new ways to misspell my name came to light, a credit to the creativity of today’s automated processing systems.  Even being a Smith/Smythe or Jones/Joens doesn’t leave you immune to a misplaced bit or byte.

Without a focus on data quality, big data just gives you that many more customer names to get wrong.
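
To illustrate what one small piece of a data quality toolkit might look like, here is a sketch that flags incoming customer names as probable misspellings of names already on a master list, using fuzzy string matching. The 0.7 similarity cutoff and the sample names are assumptions for demonstration; production-grade data quality tools do far more:

```python
# Illustrative sketch: flag incoming customer names that are probably
# misspellings of names already on the master list. The 0.7 cutoff and
# the sample names are assumptions for demonstration only.
from difflib import SequenceMatcher

def similarity(a, b):
    """Rough string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

master = ["Sadovy", "Smith", "Jones"]
incoming = ["Sadovey", "Smythe", "Joens", "Brown"]

# Map each incoming name to its closest master-list name, keeping only
# pairs that clear the similarity cutoff.
matches = {
    name: max(master, key=lambda m: similarity(name, m))
    for name in incoming
    if max(similarity(name, m) for m in master) >= 0.7
}
# "Brown" has no close match on the master list, so it is left out.
```

Smythe and Joens land close enough to Smith and Jones to be flagged for human review, which is exactly the kind of catch that keeps big data from just giving you more names to get wrong.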

Data Integration:  If you’ve got a data silo problem, and who doesn’t, then all big data contributes to the process is to make those silos bigger.  Which makes the eventual data integration exercise that much more of a challenge.

Enterprise big data comes at you from a dizzying array of directions – from mainframes and ERP systems, from transactional and BI databases, from sensors and social media, from customers and suppliers. To make matters worse, each of these various sources and applications has its own, sometimes proprietary, data model.

And we’re still not finished with the complexities of this issue yet, because enterprise data has one more endearing quality that makes integration difficult – it’s decentralized and distributed. Extracting value from its bigness by creating one humongous centralized, homogeneous data warehouse is simply out of the question.  If Sartre had been a philosopher of data science he might have said, “Integration precedes value extraction”.

Unstructured Data:  Depending on what study you prefer, it’s claimed that 70 to 90 percent of all data generated is unstructured.  This unstructured bigness doesn’t readily fit into predefined columns, rows, data entry or relational database fields.  Customer feedback, emails, contracts, Web documents, blogs, Twitter feeds, warranty claims, surveys, research studies, client notes, competitive intelligence, often in different languages and dialects … the list goes on. Who has the time to read all this, let alone find an efficient way to extract the latent value from it?

Unstructured data may be both big and bad, but again, with the right tools, it’s not unmanageable. Text mining, sentiment analysis, contextual analysis – there are automated machine learning and natural language processing techniques available today to deal with the volume and ferret out the insights.
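
As a toy example of the kind of automated processing involved, here is a bare-bones lexicon-based sentiment score. Real text mining relies on trained NLP models; the word lists and feedback strings below are invented purely to show the idea:

```python
# Bare-bones lexicon-based sentiment scoring -- a toy stand-in for the
# trained NLP models real deployments use. Word lists are invented.
POSITIVE = {"great", "love", "excellent", "reliable"}
NEGATIVE = {"broken", "late", "poor", "refund"}

def sentiment_score(text):
    """Positive-word count minus negative-word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    "Great product, love the new packaging",
    "Shipment arrived late and the unit was broken",
]
scores = [sentiment_score(f) for f in feedback]
```

The point is not the scoring rule, which is deliberately crude, but that even free-form text can be reduced to a signal a machine can rank, route and act on at volume.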

‘Big Data’ is of course a relative term, but when I think ‘big data’ one of the following three data categories seems to be in play:

  • High transaction volumes: Millions of customers, billions of transactions (i.e. ATMs or POS), or tens of thousands of SKUs crossed with other attributes such as retail locations, cost and/or service levels.
  • Temporally dense: Sensor data, audio.
  • Spatially dense: Video, satellite imagery.

The business issue becomes – what do you want to do with all this data? And the place to start is not with the data, or with its bigness, but with the business problems you want to solve, the business insights you want to gain, and the business decisions you want to support.  Starting from there and working backwards to the data means running squarely into the issues of data quality, data integration and unstructured text analytics.  It’s only after you get a handle on this trio of capabilities that you can begin to effectively tap the big data spigots.

Extracting tangible value and insights from high-quality, integrated data, no matter its volume, velocity or variety, is where the payoff lies. Getting to this payoff in an environment where your data is growing exponentially in all dimensions requires an investment in robust data management tools. The consumers of this data, the business users, don’t know or care about its bigness – they just want the right data applicable to their particular business problem, and they want to be able to trust that data. Trust, access and insights – it’s got “quality” and “integration” and “analytics” written all over it.

Post a Comment

Customer Relations by Walking Around

Perhaps nowhere is the saying “time is money” more true than in the construction industry.  There is no better indicator of project cost and budget over/underrun than the number of days on-site.  Reducing that number has a near 1:1 relationship with cost cutting, so it’s no wonder that days on-site is the most watched project metric.

Further complicating matters, the construction industry is well behind the 3D adoption curve, still relying primarily on 2D blueprints when most other manufacturers have long since moved to 3D CAD-CAM design and production systems, despite the obvious benefits of applying 3D systems to the construction of 3D physical structures.

Stepping into this breach is Nancy Novak, Vice President of Operations for Balfour Beatty Construction Services, a speaker at the IE Group's Manufacturing Analytics Summit earlier this year.  Nancy specializes in applying off-site manufacturing (OSM) techniques to large commercial and industrial projects – one of the most innovative processes to emerge in this industry in recent years.

Or maybe I shouldn’t call it a process so much as a mindset: OSM’s intent is to productize the construction industry, allowing it to standardize and reap the benefits of common manufacturing techniques and processes that have been around for decades.  The benefits of OSM include:

  • Faster – Fewer days on-site with a more predictable schedule  (i.e. lower cost)
  • Safer – Less on-site labor, better site logistics
  • Better quality, with a more predictable product

This is the story that Nancy brings to her potential clients.  With each project, she explores with the construction team the possibilities for modular systems that can be assembled off-site and then integrated into the larger structure on-site, whole and in working order, such as bathrooms, kitchen facilities, elevators and staircases, office space, HVAC, interior and exterior walls, and even entire living suites for apartment complexes.

Perhaps the most surprising aspect of her work is how often the client informs her that she is the first person ever to propose such an approach to them, how often she is the first to suggest that they take a walk through a current project to assess what improvements might be incorporated into the next one.  Not so much management-by-walking-around as sales, or customer relations, by walking around.

This is an easy lesson to apply to your own highly-competitive manufacturing business.  Are you tired of the price wars?  Are you looking for a differentiator other than features, functions and performance in a largely mature market?  Are you interested in taking need-based, consultative selling to the next logical level?  Then instead of making the focus of your next customer visit your own products and services, simply ask for the opportunity to walk around their business environment and ask “what if”?

Many of your customers will of course have well-defined problems with straightforward solutions, where the only obstacle is budget, but it’s more likely that their needs and problems are much more nebulous or even completely hidden.  As Henry Ford once famously quipped, “If I had asked people what they wanted, they would have said faster horses.”  Often they are looking for you to be the expert, or, as we often say here in the world of SAS analytics, “tell me something I don’t know”.  To get to the answer, first you need the insight.

I couldn’t possibly list here all the insights you and your customer might uncover, but just to give you a flavor for the types of questions to ask:

  • What can we do regarding custom packaging / logistics that would better suit how you use our product?
  • What services might better be provided on-site or mid-stream rather than all before or after delivery?
  • What if we could manufacture the product in multiple components (or singularly) for easier installation / service?
  • What integration could we be doing with your other suppliers before our product ever gets to you?

If all goes well, this inevitably leads to a discussion around where BOTH parties are making changes to their products and processes to reduce the total overall cost and/or to otherwise make the total end product more competitive. Not just, “What can I do for you?”, but “What can we do together?” The proverbial yet rarely seen "win/win".  Getting to this level of conversation is the best differentiator you could ever have.  You are no longer just a vendor, nor even a ‘strategic supplier’, but a real business partner.  You are no longer competing on price against a dozen other contenders, but are now critical to making your client more competitive in THEIR market.

So what are you waiting for?  Go for a walk – it will be good for you, … and your customer, …and their customer.

Post a Comment

Analogies, mind-mapping and New Product Forecasting

There are two ways you can react to a “Hey – that was my idea” situation.  The first would be to throw a pity party and lament about how unfair life is – if only the car hadn’t broken down and I didn’t have grass to mow and laundry to do I could have filed a patent and been a millionaire by now.  The other is to recognize that you were never going to do anything of the sort under any conditions anyhow and simply take the experience as confirmation bias of how brilliant you are.

When I came across Pulitzer Prize-winning author Douglas Hofstadter’s latest work, “Surfaces and Essences: Analogy as the Fuel and Fire of Thinking”, I chose the latter course.  His core theme is that analogies lie at the heart of how we develop concepts, how we construct language, how we understand the world, how we think – something I not only heartily agree with, but a concept I had considered myself decades before Hofstadter’s book.

Among other things, I fancy myself an amateur student of language.  You see, as a parent, by necessity you become an amateur student of a whole host of subjects that previously may have never interested you.  For example, I find that parents are three times more likely than non-parents to know that Michael Crichton got it wrong: T-Rex is from the Cretaceous, not the Jurassic, and I have the short video, “Cretaceous Park”, to prove it, created by my then five-year-old son, who produced it in order to set the record straight among his kindergarten classmates.

Likewise, as a parent you also quickly become an expert in the field of linguistics as you watch in amazement the literal explosion of language once your children have mastered a basic vocabulary.  They start with what they have in their linguistic toolkit and build on it, making telltale mistakes along the way that show how their minds are working, such as undressing a banana, or cooking water, or breaking their book, before they’ve learned the verbs peel, boil and tear.

Metaphors come next and get incorporated into the very meanings of words: tables have legs, bellies have buttons, and airplanes get tails.  The analogies get more complex over time, as we encounter windows of opportunity, haunting melodies and watertight reasoning.  These later develop into idioms where sometimes the analogy is still clear, as in ‘bend over backwards’, ‘between a rock and a hard place’ or ‘stacked the deck’, and others where the root is barely discernible, such as ‘kick the bucket’, ‘egg someone on’ or to ‘give someone short shrift’.

One thing I instinctively knew about myself at a young age was that my preferred learning style was by analogy and via storytelling.  Rather than feverishly trying to scribe every single detail into my notes as the teacher or professor spoke, I saved those for later (an especially useful approach in today's Google-age) and focused on relating the main and secondary concepts with each other and with what I already knew, working them into my existing knowledge framework and creating a new, expanded or more complex story about the subject for myself.  I was mind mapping, or concept mapping as I thought of it, way before it became a thing.

This concept of analogies is what lies behind SAS’ New Product Forecasting solution.  New product forecasting (NPF) can be a recurring challenge for consumer goods and other manufacturers and retailers. The lack of product history or an uncertain product life cycle can dampen the hopes of getting an accurate statistically-based forecast.  Here are some of the NPF situations you might encounter:

  • Entirely new types of products.
  • New markets for existing products (such as expanding a regional brand nationally or globally).
  • Refinements of existing products (such as new and improved versions or packaging changes).

SAS’ patent-pending structured judgment methodology helps you automate the evaluation and selection of candidate analogous products, facilitates the review and clustering of previous new product introductions, and generates statistical forecasts. This structured judgment approach uses product attributes from prior and new products, along with historical sales, to create analogies.

The use of analogies is a common NPF practice. You can see it, for example, in the real estate market, where an agent will prepare a list of “comps” – similar houses in the area that are on the market or have recently sold – and use this to suggest a selling price.

The structured analogy approach requires two types of data – product attributes (for prior and new products) and historical sales (for prior products). Product attributes can include:

  • Product type (toy, music, clothing, shirts, etc.).
  • Season of introduction (summer item, winter item, etc.).
  • Financial (target price, competitor price, etc.).
  • Target market demographic (gender, age, income, postal code etc.).
  • Physical characteristics (style, color, size, etc.).

The statistical forecast is then built using a structured process based on defining and selecting candidate surrogate products and models.  Furthermore, you can combine this with data visualization to study previous new product introductions to gain a better sense of the associated risks and uncertainties.
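
To give a feel for the analogy idea – and only that; this is a deliberate oversimplification, not SAS’ patent-pending structured judgment methodology – here is a sketch that ranks prior products by attribute overlap with the new product and averages the sales curves of the closest matches. All product names, attributes and sales figures are made up:

```python
# Deliberately oversimplified analogy-based forecast: score prior products
# by attribute overlap with the new product, then average the sales
# histories of the closest matches. All data here is invented.
def attribute_similarity(a, b):
    """Fraction of shared attributes on which two products agree."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys)

def analogy_forecast(new_product, prior_products, sales_history, k=2):
    """Average the sales curves of the k most similar prior products."""
    ranked = sorted(prior_products,
                    key=lambda p: attribute_similarity(new_product, prior_products[p]),
                    reverse=True)
    analogs = ranked[:k]
    horizon = min(len(sales_history[p]) for p in analogs)
    return [sum(sales_history[p][t] for p in analogs) / k
            for t in range(horizon)]

prior_products = {
    "polo-classic": {"type": "shirt", "season": "summer", "price_band": "mid"},
    "parka-alpine": {"type": "coat", "season": "winter", "price_band": "high"},
    "tee-basic":    {"type": "shirt", "season": "summer", "price_band": "low"},
}
sales_history = {  # monthly unit sales after each product's launch
    "polo-classic": [120, 180, 150],
    "parka-alpine": [40, 90, 200],
    "tee-basic":    [300, 420, 380],
}
new_product = {"type": "shirt", "season": "summer", "price_band": "mid"}
forecast = analogy_forecast(new_product, prior_products, sales_history)
```

Here the polo and the tee, both summer shirts, become the surrogates for the new shirt’s forecast, just as a real estate agent’s “comps” become the surrogates for a selling price.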

Using analogies to improve your forecasting should not seem at all foreign – you’ve been using analogies since you were a toddler to expand your knowledge base by connecting and building on what you already know.  To find out more, check out this white paper, “Combining Analytics and Structured Judgment: A Step-By-Step Guide for New Product Forecasting”, and learn the details of getting from A to B, of getting from the product history you know to the new product forecast you don’t.  You might call it mind-mapping for your new product forecast; See - analogies are everywhere!

Post a Comment

CaaS – Crime-as-a-Service: Murder on the Internet of Things

Europol, the law enforcement agency of the European Union, in its recently released 2014 Internet Organized Crime Threat Assessment (iOCTA), cited a report by U.S. security firm IID that predicts that the first “online murder” will occur by year end, based on the number of computer security system flaws discovered by hackers.

While there have been no reported cases of hacking-related deaths so far, former vice president Dick Cheney has had the wireless function on his implanted defibrillator disabled in order to prevent potential hackers from remotely accessing his device. Just such a scenario was played out fictionally in the political TV thriller Homeland, in which his counterpart was murdered by terrorists who were able to hack into the (fictional) vice president’s pacemaker.  In an interview last year, Cheney said, “I was aware of the danger that existed and found it credible – [the scene in Homeland] was an accurate portrayal of what was possible.”

Cheney’s fears are not unfounded in the least.  2012 research from security vendor IOActive regarding the security shortcomings in the 4.5+ million pacemakers sold between 2006 and 2011 in the U.S. turned up the following:

  • Until recently, pacemakers were reprogrammed by medical staff using a wand that had to pass within a couple of meters of a patient, which flips a software switch that allows it to accept new instructions.
  • But the trend is to go wireless. Several medical manufacturers are now selling bedside transmitters that replace the wand, with ranges of 30 to 50 feet.  With such a range, remote attacks become more feasible.  For example, devices have been found to cough up their serial number and model number with a special command, making it possible to reprogram the firmware of a pacemaker in a person's body.  Other problems with the devices include the fact that they often contain personal data about patients, such as their name and their doctor. The devices also have "backdoors," or ways that programmers can get access to them without the standard authentication - backdoors available for more nefarious uses.
  • Just as your laptop scans the local environment searching for available WiFi networks, there is software out there that allows a user to scan for medical devices within range. A list will appear, and a user can select a device, such as a pacemaker, which can then be shut off or configured to deliver a shock if direct access can be obtained.
  • As if this wasn't bad enough, it is possible to upload specially-crafted firmware to a company's servers that would infect multiple pacemakers, spreading through their systems like a real virus - we are potentially looking at a worm with the ability to commit mass murder.

Can it get worse?  By now you’ve heard of SaaS (software-as-a-service) and PaaS (platform-as-a-service), but how about CaaS – Crime-as-a-Service?  From the Europol report: “A service-based criminal industry is developing, in which specialists in the virtual underground economy develop products and services for use by other criminals. This 'Crime-as-a-Service' business model drives innovation and sophistication, and provides access to a wide range of services that facilitate almost any type of cybercrime. As a consequence, entry barriers into cybercrime are being lowered, allowing those lacking technical expertise - including traditional organized crime groups - to venture into cybercrime by purchasing the skills and tools they lack.”

Just take a moment to let that sink in: Lowering barriers to entry, criminal innovation, CaaS as a business model.  It really shouldn’t surprise us, though - criminal enterprises have been adapting the principles of sound business management since the early days of organized crime.  Did you know that the illegal drug market is a $2.5 trillion industry? Not merely a billion-dollar industry, it’s a TRILLION-dollar industry, employing standard business school tactics such as quality control, freemium pricing models, upselling, risk management and branding, not to mention the ever-changing supply chain and logistics challenges.

Cyber criminals are at least, if not more, sophisticated than the typical drug trade.

Lest you think that cybersecurity is primarily the province of the big banks and retailers, considering how your products will integrate with the Internet of Things (IoT) should make you think twice.

I went over twenty years with the same credit card number - now I have to get a new one pretty much every year because someone got hacked, and I’m guessing that your experience hasn’t been much different.  And remember, these breaches are occurring at large enterprises already employing a significantly sized staff of cybersecurity experts.

If your device is going to be on the internet, security will need to be baked into the design from the very beginning.  Again, from the Europol report:  “The Internet of Things represents a whole new attack vector that we believe criminals will already be looking for ways to exploit. The IoT is inevitable. We must expect a rapidly growing number of devices to be rendered 'smart' and thence to become interconnected. Unfortunately, we feel that it is equally inevitable that many of these devices will leave vulnerabilities via which access to networks can be gained by criminals.”

Rod Rasmussen, the president of IID - the source of the murder prediction mentioned at the beginning of this post - had this to say: "There's already this huge quasi-underground market where you can buy and sell vulnerabilities that have been discovered. Although the first ever reported internet murder is yet to happen, ‘death by internet’ is already a reality as seen from a number of suicides linked to personally-targeted online attacks.”

While it’s unlikely that anyone will die from a stolen credit card number, that’s not going to be the case for many of the tens of billions of devices attached to the internet, from medical devices to wearables to the connected car.  As a manufacturer of current and potential IoT devices, you may not be aware of SAS’ dominant presence in the fraud detection/prevention and cybersecurity field.  When you get a call from your bank freezing your credit card and questioning that $3,500 purchase at a shopping mall in Altoona, it was likely SAS analytics behind the scenes that identified and flagged the fraud.

If CaaS is going to be part of the criminal elements’ business model, cybersecurity will need to be part of your product design and IoT business model, and SAS can help.  While your brand may survive a variety of production quality problems, it won't survive a murder on the IoT.


[You can also learn more about cybersecurity from Ray Boisvert, CEO and founder of I-Sec Integrated Strategies, at his presentation, “The Threat Landscape: Cyber Tools and Methods Transforming the Business Environment,” at SAS' Premier Business Leadership Series, Wednesday, Oct. 22, from 2:15 to 3 p.m. Boisvert sees cybersecurity as a task for analytics that can help organizations tease out the proverbial signal from the massive internet “noise” around serious threats. The challenge is to identify the right threat vector related to the most valued elements an organization holds dear. The organization will only be successful if it has technology to quickly digest huge streams of data, in real time, so that it may begin to see patterns that can thwart further attacks.]

Post a Comment

East meets North: Integrated Business Planning for both efficiency and alignment

Sales and Operations Planning (S&OP) started out with big aspirations.  As initially conceived, S&OP was to cover the entire domain now called Integrated Business Planning (IBP).  As S&OP process implementations rolled out during the 1980s, this broad scope turned out to be a bit much to attempt in one bite.  S&OP instead settled effectively into a more focused and limited role, and it would be another decade before a new attempt, and a new name, IBP, re-emerged to tackle the larger picture.

So what is the difference between the two, and does it matter?

Briefly, S&OP is the balancing act between supply and demand.  It sets the production plan for the upcoming period based on the unconstrained sales/demand forecast but informed and adjusted for other supply chain constraints such as capacity, material supply, inventory levels, logistics, and customer lead-time requirements.
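
As a simplified sketch of that balancing act, the following toy plan starts from the unconstrained forecast, caps production at capacity, and rolls unmet demand forward as backlog. All figures are invented for illustration:

```python
# Toy sketch of the supply/demand balancing act: start from the
# unconstrained demand forecast, cap production at capacity, and carry
# unmet demand forward as backlog. All figures are invented.
def production_plan(forecast, capacity, opening_inventory=0):
    plan, inventory, backlog = [], opening_inventory, 0
    for demand, cap in zip(forecast, capacity):
        required = demand + backlog - inventory  # net requirement this period
        produced = max(0, min(required, cap))    # constrained by line capacity
        available = inventory + produced
        shipped = min(available, demand + backlog)
        backlog = demand + backlog - shipped     # unmet demand rolls forward
        inventory = available - shipped
        plan.append(produced)
    return plan, backlog

plan, ending_backlog = production_plan(
    forecast=[100, 150, 90], capacity=[120, 120, 120], opening_inventory=20)
```

Period two’s shortfall (demand of 150 against capacity of 120) shows up as backlog that period three’s production then clears – the kind of trade-off an S&OP review makes visible and explicit.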


S&OP is a process focused on the EFFICIENCY of the production process.  In terms of Treacy and Wiersema’s three Value Disciplines, S&OP is concerned with the efficiency and effectiveness of the horizontal OPERATIONAL EXCELLENCE value discipline.   Note that while the prescription of the Value Disciplines is that organizations should focus on achieving excellence in primarily one of the three, all three are always present, so even if your chosen value discipline is Customer Intimacy or Innovation, you still have an operational aspect to your business that needs to be optimized.

What does IBP bring to the party that S&OP lacks?  Alignment.  Financial and strategic alignment.

A couple of years ago I wrote this post (“The Nine-Foot Aviator”) about what first steps to take when attempting to institute an IBP process.  I had to admit then that, despite the brilliant description of east-west versus north-south processes by Gartner’s Noha Tohamy, I and much of the audience that heard her presentation still seemed confused over definitions and boundaries and roles and functions when it came to differentiating S&OP from IBP.  It was all a bit fuzzy, although you could see hints of the resolution in the IBP calendar shared at the IE Group conference by Verso Paper’s Michael Partridge – review sessions that included finance, risk and general management in addition to the usual production, demand and supply suspects.

However, if you approach the two processes from the perspective of the Value Disciplines, the distinction becomes obvious (see "The Sound and the Fury" for the connection between the value disciplines and your value-creation business processes).  As I mentioned above, S&OP operates primarily within the horizontal, east-west, operational discipline, with the aim of improving the efficiency and effectiveness of that discipline.  IBP, however, takes a broader, north-south perspective – the ALIGNMENT of S&OP and the Operational Excellence discipline with the organization’s financial and strategic objectives.  What is optimal for the horizontal production and supply chain process might not be optimal for the business as a whole.  Examples include:

  • Does the S&OP plan meet cash flow, earnings, revenue and margin objectives?
  • Does it meet the company’s risk appetite / profile, and are the identified risk mitigation plans acceptable?
  • Does it comply with safety and sustainability requirements?
  • Does it appropriately support marketing, new product/territory expansion and commercial initiatives?
  • Does it align with other strategic objectives, such as quality or customer retention?

In other words, IBP does two things that S&OP does not:

  • It aligns one value discipline, operations, with the other two – innovation and customer intimacy.
  • It aligns the Operational Excellence value discipline with the broader, high-level strategic objectives of the organization as a whole.

In order to fulfill the promise of IBP, the key takeaway is to move beyond just the alignment of S&OP and Operations and generalize the intent and scope of IBP to include all three value disciplines.  In most companies the product development and customer relationship value disciplines have their own internal efficiency and effectiveness processes, maybe not as complex as S&OP but every bit as critical, especially if one of them is the organization’s chosen strategic focus.  It should not be sacrilegious to expect R&D to regularly check its alignment with the company’s marketing direction or customer service performance, nor for marketing to likewise understand product development roadmaps, and for sales to proactively stay aware of the ever-changing array of production and development constraints that could impact client relations.

IBP as a concept has wider applications than just policing S&OP, and you can use the process as applied broadly to manage your business holistically, complementing horizontal business-process efficiency with vertical and strategic alignment.  Via S&OP, East has already met West – with IBP, East gets introduced to North and South as well.


The Cloud and other forces – Climate change, or just the weather?

I’ve been having trouble getting a handle on the relationships between the nexus of forces / third platform themes of social media, mobility, big data, analytics, and the cloud, and it made me feel better that someone like Geoffrey Moore, world-renowned author of “Crossing the Chasm”, seems to be in the same boat.  If you haven’t run across it yet, Geoffrey Moore is an official LinkedIn “Influencer”, and you can ‘follow’ him on LinkedIn and read up on some of his recent insights on his author page.  Moore has been tackling these topics for several years now, and you can watch especially how his opinion of the Cloud has evolved over time.

In sorting these themes out, I wanted to assess them outside the influence of those parties who have a stake in how the subject gets framed, and also from the point of view of, “So what? – What’s in it for me?”

‘Mobility’ seems to be the easiest to classify, but the repercussions are going to be monumental.  No need for a ‘third platform’: mobility fits neatly into the client-server model, though perhaps it should now be called ‘device-server’.  Or rather, there is a range of devices in the client role running the continuum from thick client to sensor.  In between are smart devices of all flavors, such as phones, in-car GPS and home appliances.  Far from being just an IT problem (i.e. BYOD), mobility impacts everyone from marketing to operations, as everything from people to products goes mobile.

‘Big Data’ is of course a relative term, but when I think ‘big data’, one of the following three data categories seems to be in play:

  • High transaction volumes: Millions of customers, billions of transactions (i.e. ATMs or POS), or tens of thousands of SKUs crossed with other attributes such as retail locations, cost and/or service levels.
  • Temporally dense:  Sensor data, audio.
  • Spatially dense:  Video, satellite imagery.

The business issue becomes – what do you want to do with all this data?  Is it just a matter of storage, in which case Hadoop might be called for, or does the value come from real-time event stream processing, or does the data serve as the foundation for the further application of analytics and the extraction of metadata?  But does Big Data constitute a fundamental building block for a new computing platform?  By itself, I don’t think so – evolutionary rather than revolutionary.

‘Analytics’ always has the potential for revolution, because it is in the unique position of being able to respond to the requirement, “Tell me something I don’t know”.  How much risk is in that forecast?  What’s the optimal product mix given certain production constraints?  What’s the next best offer to make to that customer in the store or on the web?  What are our customers saying about us on social media?  Insights like that are more than a system, platform or architecture.

The Cloud.  Is it just an outsourcing model / just another business model, or is it going to be as disruptive as its proponents advocate?  For the time being, the answer seems to depend on which side of the equation you find yourself.  If you are in the IT business, the potential for disruption, either by yourself or your competitors, is keeping you awake at night.  Those on the receiving end, however, currently seem to view the Cloud primarily from a cost basis, from the motivation to cut the costs and risks of hardware acquisition, maintenance, and software migrations.

While this might be a good foot in the door (or sticking your head in the cloud?), it would be best not to dismiss too quickly what the cloud visionaries have envisioned.  Two potentially important cloud applications to keep in mind are:

  • SaaS as a way to acquire capabilities you could never support in-house, especially niche applications that could add considerable value but don’t currently generate enough internal critical mass.  Watch this space – SaaS as a business model will enable a plethora of new applications that were previously barely imaginable.
  • PaaS as an internal IT business model.  You won’t be outsourcing everything, there will still be mission critical enterprise apps that you manage in-house, but in order to meet your internal business client needs you are likely going to have to be more cost competitive and more flexible with your IT resources.  As an effective business model, PaaS needn’t remain the sole domain of the big boys.

That leaves us with Social Media.  Honestly, I don’t know how to classify it.  It’s a game changer, most certainly.  As I’ve mentioned in a previous post, I’m quite the fan of Marshall McLuhan and his observations on the media.  If “the medium is the message”, what is the message of Social Media?  The implications of “social” as both a medium and a message are likely to be both subtle and far reaching.  This isn’t about “digital” at all – this is about brand reputations and networks and experiences and influencers and chaos.  Your current struggles with big data or with BYOD security are just a taste of what’s to come in the social arena.

If I had to put a label on them, Big Data feels like weather, like a cold front passing through.  Mobility is the storm itself.  For now, the Cloud is like the seasons – winter if you are in one hemisphere but summer if you are in the other.  Analytics is a longer trend, an El Niño with global consequences.  And social media? Climate change for sure – but whether it’s a runaway greenhouse or the next ice age remains to be seen.


The tragedy of overcapacity

Capital investment in production capability is the weakest link in the business value chain.  It always has been and likely always will be.  It’s the driving force behind the tendency towards cartels, collusion and monopolies.  While it can make the first entrant into a brand new market, in the long run it usually breaks everyone, playing no favorites between the initial innovators and the late comers.  It’s the problem of supply and demand in its purest form.

To illustrate the conundrum, imagine a market where each player has 20% of the total market production capacity.  The first into the market enjoys the premium pricing benefits of a green field with no competition.  Others soon join the party.  With just four participants, everything is still peachy – only 80% of the demand can be met, so there is still plenty of margin to go around.  Things tighten up with the entry of the fifth player – margins are still acceptable, but growth must now come at the cost of someone else’s market share.

But it’s that sixth entrant that ruins the fun for everyone.  The industry is now over capacity.  Prices get driven downward to the level of variable costs, but the elasticity of demand never seems to respond proportionally – after all, I only need one refrigerator and one washing machine, and I can only make use of so many TVs or even automobiles.  It’s essentially Garrett Hardin’s famous “Tragedy of the Commons”, where everything is fine with 100 cattle on the pasture, but where the 101st leads to overgrazing and certain environmental / economic collapse.
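The arithmetic behind that tipping point can be played out in a toy model – every number below (premium price, variable cost, capacity share) is illustrative, not empirical:

```python
# Toy model of the overcapacity dynamic: each entrant adds 20% of total
# market demand in capacity.  While capacity is at or below demand, price
# holds at the premium level; once capacity exceeds demand, the glut
# drives price down toward variable cost.

def market_price(entrants, premium=100.0, variable_cost=60.0,
                 capacity_per_entrant=0.20):
    """Illustrative unit price as a function of the number of entrants."""
    total_capacity = entrants * capacity_per_entrant
    if total_capacity <= 1.0:      # demand normalized to 1.0
        return premium             # scarcity supports premium pricing
    # Overcapacity: price erodes toward variable cost as the glut grows
    glut = total_capacity - 1.0
    return max(variable_cost,
               premium - glut * (premium - variable_cost) / capacity_per_entrant)

for n in range(4, 7):
    print(n, "entrants ->", market_price(n))
```

With five entrants the toy market is exactly at capacity and price still holds; the sixth entrant pushes price all the way down to variable cost, which is the whole tragedy in three lines of arithmetic.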

If you are an executive in manufacturing, you have seen this effect many times over, and it has become the bane of your existence. The opposite, lack of capacity leading to pricing discretion, is typically a euphoric but short-lived phenomenon that might happen once in your career if you are lucky.

Excluding bankruptcy or getting out of the market altogether, there are seven principal ways in which most businesses attempt to address this iron law of economics.

  • Driving cost out of the production process and out of the supply chain.  As most efficiency gains are readily copied and adapted across the industry, this is seldom a long term differentiator, but it does become table stakes – if you can’t quickly improve on costs and efficiency, you’ll be the first to fold.
  • Increasing value throughput.  This approach gives rise to the familiar feature / function / performance arms race.  But unless one of the market players cannot keep up with the innovation, this doesn’t do much for the capitalist other than set the variable cost bar higher.
  • A focus on quality.  History and economics tend to look favorably on this strategy.  Although at first glance an investment and focus on quality would seem to be no different than investment in feature / function / performance, the famous Faster-Cheaper-Better triad shows that "better" (i.e. quality) has long been differentiated from "faster", with its own value proposition, especially as it relates to brand loyalty and equity.
  • De-risking via outsourcing of production.  Huge segments of the manufacturing industry have spent the last couple of decades doing just that.  If your core competency and primary differentiator is product innovation, this might make sense.  Why bother with the risk of building a factory if production efficiency is not your strength?  What’s lost in the bargain is operating leverage and some control over your product quality and availability.  Should your product become the biggest thing since sliced bread, without the operating leverage you make the same profit on your billionth shipment as on your first.
  • Consolidation, coopetition, and secondary markets.  Excess and unproductive capacity is often the mother of invention.  Getting your facilities back up to 100% utilization can require some creative thinking on your part, from running unrelated products through your plant to developing secondary markets for your primary products.
  • The flexible factory.  Now we’re talking.  You are no longer stuck with being able to run only one product through the production line.  Your industrial engineers set you up for long term success with a flexible factory floor design, underpinned by robotics, programmable or soft automation, analytics, and a flexible approach to IT and data management.  The key to making this work, however, is “knowing when to fold them”, having the discipline to regularly get out of the commoditized, low margin products in favor of the innovations your design team is bringing forth.
  • A focus on the customer, and the customer experience.  While you can’t control how much excess production capacity comes onto the market, you can do a lot about your downstream access to the customer – another approach that seems to be highly rewarded by the market.  It’s one thing to build it, or even to build it cheaply, and quite another to be able to match that capacity with a paying customer on the other end.  This in turn requires knowing your customers, the market, and the distribution channels via customer analytics, understanding the broader market via demand forecasting, and paying attention to the service and aftermarket aspects of your total offering.

Regardless of what the strategic or value discipline approach of any particular firm might be, in the end, if product is going to be made then the capacity has got to be built, and the capital investment committed to.  We can’t outsource manufacturing capacity off the planet.  Someone with a core competency in production is going to take the risk, and then take the necessary steps to manage and mitigate that risk.  This will likely result in higher outsourced versus insourced product costs (once we get past this current phase of global labor arbitrage).  While this may be an acceptable outcome for those businesses specializing in only product design / innovation, most of the manufacturing industry will find itself concentrating on some combination of those other key differentiating characteristics mentioned earlier:  flexibility, quality, service, information management, analytics and the customer.


Why analytic forecasting?

Because you are already halfway there, and you should want the entire process to be data-driven, not just the historical reporting and analysis.  You are making decisions and using data to support those decisions, but you are leaving value on the table if the analytics don't carry through to forecasting.  In the parlance of the domain: don't stop with descriptive analytics while neglecting the power of predictive and prescriptive analytics.

Descriptive analytics relies on the reporting and analysis of historical data to answer questions up until a particular moment in time.  Using basic statistics such as mean, frequency and standard deviation, it can tell you what happened, how many, how often and where.  With the application of additional statistical techniques such as classification, correlation and clustering, you end up with an explanatory power that can sometimes even tell you ‘why’.
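To make that concrete, here is a minimal descriptive pass over a toy weekly-demand series – the numbers are invented for illustration, and only the Python standard library is used:

```python
# Descriptive analytics in miniature: central tendency, dispersion,
# and a crude frequency/classification of outlier weeks.
from statistics import mean, stdev
from collections import Counter

weekly_demand = [120, 130, 125, 400, 118, 122, 390, 127]  # two promo spikes

avg = mean(weekly_demand)                  # what happened, on average
sd = stdev(weekly_demand)                  # how much it varied
spikes = [d for d in weekly_demand if d > avg + sd]   # crude outlier screen
freq = Counter("spike" if d > avg + sd else "baseline" for d in weekly_demand)

print(round(avg, 1), round(sd, 1), spikes, dict(freq))
```

Even this crude screen surfaces the two promotional spikes, and flagging *why* those weeks differ (a promotion, a stock-out at a competitor) is exactly the explanatory step the paragraph above describes.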

In the terminology I proposed in this earlier post, “The Skeptical CFO”, descriptive analytics covers the first two of my four points:  “Where am I right now”, and “What is my ability to execute”, the latter typically surfaced through a BI capability that computes and displays the historical data in the form of metrics for ease of standardization, comparison and visualization.

FS model vs FAW model
But why stop there?  Why stop your data-driven approach to decision making at the halfway point, at the vertical bar in the above graphic?

Your decisions are always about the future – what direction to take, where to invest, what course corrections to make, what markets to expand into, what and how much to produce, who to hire and where to put them.  In other words, a forecast – the third of my four points – with the fourth being perhaps the most important of the lot: a confidence level or uncertainty measurement about that forecast.  These last two come from the realm of predictive analytics.

Even if you’re not comfortable using the statistical forecast straight out of the box, don’t you at least want to know what it indicates?  What data-driven trends and seasonality it has on offer?  And wouldn’t you appreciate having a ballpark estimate of the risk and the variability that is likely inherent in any forecasting decision?

Getting back to that “straight out of the box” issue, the truth is, for roughly 80% of your detailed forecasting needs (depending of course on the quality of the data and the inherent forecastability of the item in question – see: “The beatings will continue until forecast accuracy improves”), the machine is going to be as accurate as you, or more so, and much, MUCH faster at it.  The forecast analyst workbench listed below can generate incredibly high-volume forecasts at the detailed level (i.e. SKU, size, color, style, packaging, store, expense line item, cost center …) in short order, leaving the forecast analyst free to spend the bulk of their time improving on those hard-to-forecast exceptions.
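As a miniature sketch of what the automated piece does – a real workbench selects among many model families per SKU, but simple exponential smoothing over invented SKU histories shows the mechanic:

```python
# High-volume automated forecasting, sketched: loop over SKUs and
# produce a one-step-ahead forecast with simple exponential smoothing.
# (A production workbench would also pick the model and tune alpha.)

def ses_forecast(history, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast."""
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

sku_history = {
    "SKU-001": [100, 102, 98, 101, 103],   # stable demand: SES does fine
    "SKU-002": [50, 55, 60, 65, 70],       # trending: SES will lag here
}
forecasts = {sku: round(ses_forecast(h), 1) for sku, h in sku_history.items()}
print(forecasts)
```

The trending SKU is exactly the kind of hard-to-forecast exception the paragraph above says the analyst should spend time on: the smoothed forecast (about 61) visibly lags a series heading for 75.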

Lest you doubt the veracity of my 80% (+/-) claim above, the collaborative planning workbench (below), in addition to facilitating the consensus forecast you would expect from its name, also includes a Forecast Value Add capability to identify and eliminate those touch points that are not adding value.  You would be surprised at how many reviewers, approvers, adjustments, tweaks and overrides actually make the forecast worse instead of better (then again, maybe you wouldn’t).
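The core of a Forecast Value Add analysis is simple enough to sketch – compare the accuracy of each touch point against a naive baseline, and any step that fails to beat naive is a candidate for elimination.  All figures below are hypothetical, and MAPE stands in for whatever accuracy metric your process uses:

```python
# Forecast Value Add (FVA), sketched: a touch point "adds value" only if
# its error is lower than the naive forecast's error.

def mape(actuals, forecasts):
    """Mean absolute percentage error (assumes nonzero actuals)."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals) * 100

actuals  = [100, 110, 105, 95]
naive    = [98, 100, 110, 105]    # e.g. last period's actual, the baseline
stat     = [101, 108, 104, 97]    # statistical forecast
override = [110, 120, 115, 105]   # manual override, biased upward

for name, fc in [("naive", naive), ("statistical", stat), ("override", override)]:
    print(name, round(mape(actuals, fc), 1))
# FVA of a step = naive MAPE minus that step's MAPE (positive = value added)
```

In this toy run the statistical forecast comfortably beats naive, while the optimistic manual override is worse than doing nothing at all – the very pattern the paragraph above warns about.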

Can you be data-driven when it comes to new product forecasting?  If you’ve got the structured judgment / analogy capability of the new product forecasting workbench then the answer is yes.  It uses statistically determined candidate analogies or existing surrogate products with similar attributes to provide an objective basis for predicting new product demand.
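The analogy mechanic can be sketched roughly like this – the products, attributes and launch curves below are all hypothetical, and a real workbench would use statistical attribute weighting rather than simple matching:

```python
# New product forecasting by analogy, sketched: score existing products
# by attribute similarity to the new item, then average the launch
# curves of the closest surrogates.

existing = {
    # product: (attributes, first four periods of launch demand)
    "A": ({"category": "snack", "price_tier": "mid", "channel": "grocery"},
          [500, 650, 700, 720]),
    "B": ({"category": "snack", "price_tier": "mid", "channel": "online"},
          [480, 600, 660, 700]),
    "C": ({"category": "beverage", "price_tier": "high", "channel": "grocery"},
          [200, 260, 300, 310]),
}
new_item = {"category": "snack", "price_tier": "mid", "channel": "grocery"}

def similarity(attrs):
    """Count of attributes matching the new item (crude similarity score)."""
    return sum(attrs[k] == v for k, v in new_item.items())

# Rank by similarity and average the launch curves of the top two surrogates
ranked = sorted(existing.items(), key=lambda kv: similarity(kv[1][0]), reverse=True)
surrogates = ranked[:2]
launch_forecast = [round(sum(week) / len(surrogates))
                   for week in zip(*(curve for _, (_, curve) in surrogates))]
print([name for name, _ in surrogates], launch_forecast)
```

The point is not the particular similarity score but the structure: the surrogate selection is data-driven and reproducible, which gives the judgment step an objective starting point.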

Beyond predictive analytics, which provides answers to 'What if these trends continue' and 'What will happen next', lies prescriptive analytics – what SHOULD I do; what’s the best, or optimal, outcome?  The inventory optimization workbench optimizes inventory levels across a multi-echelon distribution chain based on constraining factors such as lead times, costs, and/or service levels.  And just as with the forecasting component, 80% of the optimization can be automated, again leaving the inventory analyst free to focus on hard-to-plan or incomplete orders.
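A classic single-echelon safety-stock calculation gives the flavor of that prescriptive step – the multi-echelon case is far more involved, and the demand figures, lead time and z-values below are illustrative:

```python
# Prescriptive step, sketched: turn a forecast, its error, a lead time,
# and a target service level into safety stock and a reorder point.
from math import sqrt

Z = {0.90: 1.282, 0.95: 1.645, 0.99: 2.326}   # one-sided normal quantiles

def reorder_point(mean_daily_demand, demand_std, lead_time_days, service_level):
    """Reorder point = lead-time demand + safety stock (normal demand assumed)."""
    safety_stock = Z[service_level] * demand_std * sqrt(lead_time_days)
    return round(mean_daily_demand * lead_time_days + safety_stock)

# Raising the service level from 95% to 99% raises the inventory required:
print(reorder_point(100, 20, 9, 0.95))
print(reorder_point(100, 20, 9, 0.99))
```

Note how the forecast (mean demand) and its uncertainty (demand standard deviation) are both inputs: this is why the confidence measurement around the forecast, the fourth of the four points, matters as much as the forecast itself.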

When you hear the word “optimization” in this context, think of two elements: a forecast, and a corresponding set of constraints.  Knowing that context, you can see why SAS has taken this integrated workbench approach to demand-driven planning.  A common foundation and data repository enables the consensus forecast, as well as collaboration between the forecast and inventory analysts.  Even the purely descriptive component, the demand signal analytics workbench, is completely integrated with the same demand signal repository that will eventually build the forecast and the inventory plan.


When it comes to decision support, don’t settle for halfway.  Because half of the value-add lies to the right of that vertical line. It all starts with the forecast, which drives the integrated business planning (IBP) process and is its largest source of variation and uncertainty. Improving the forecast will affect everything downstream. And, it can have a multiplier effect as it travels along the IBP process. Even slight forecasting improvements can have a larger proportional effect on revenue, costs, profit, customer satisfaction and working capital than any other factor – financial, supply-oriented, or otherwise.

Get the forecast right, and good things will follow.


Impending Crisis: Analytics for the top-line

I’m sitting here staring at a book on my shelf entitled, “Impending Crisis”.  Even knowing the copyright date, 2003, it could still be about any one of several possible crises: healthcare, financial, energy, education, environment.  But no, in this case the impending crisis in question is provided by the subtitle: “Too many jobs, Too few people.”  A perfect storm of demographics, education and technology that was supposed to hit the Western economies by the end of the decade, a crisis ultimately stillborn, upstaged and derailed by its antithesis – The Great Recession, with its concomitant double-digit unemployment.

But still, it was there on my bookshelf for a reason.  If the derivative-driven economic implosion of 2008-’09 had never happened, the book’s thesis represented the most likely case.  At the time, the US Bureau of Labor Statistics was predicting an overall workforce shortage in the US of about 10 million workers by 2010.  A decade has now passed since its initial publication, so besides the Great Recession, what else has changed?

The demographics are what they are, but with everyone now placing an additional ten candles on their birthday cake.  The state of education may be even worse, with No Child Left Behind turning into No Child Left Untested.  The cost of higher education is the fastest increasing segment in the national economy, outpacing even healthcare, as the ratio of full-time faculty to management and staff declined from about 2:1 twenty years ago to roughly parity today.

Technology seems to be the big unknown.  For a thorough perspective, I highly recommend this study from the Pew Research Center: “AI, Robotics and the Future of Jobs”.  To illustrate what a challenge this subject is, the nearly 2,000 respondents were roughly evenly divided on the question of the future of jobs, with 52% taking the non-Luddite view that there is nothing as constant as change and that in the end more jobs will be created than lost.  The other 48% would likely find themselves more in agreement with Bruce Springsteen, who wrote in “My Hometown”: “These jobs are going boys and they ain't coming back” – their main supporting point being, ‘You want evidence?  Just look around, it’s happening now, it’s happening everywhere, it’s been happening since at least 1990 if not before.’

One more datum to add to the mix:  $6 trillion.  Or make that $35 trillion if you are thinking globally.  That’s the annual labor cost in the US / World respectively – representing 40% of US GDP, 50% at the global level.  So when you impact labor productivity by more than a few percentage points, you’re likewise talking trillions (for comparison, energy costs run about 10% of GDP).

Stepping into this ill-defined, undiscovered country from which perhaps no job returns, is strategic human capital expert Jac Fitz-enz and his co-author, John Mattox, with their new book, “Predictive Analytics for Human Resources”.  What makes this such a worthwhile read for anyone interested in applied analytics is the authors’ broad general business experience.  If you want, you can take their analytic approach completely out of its HR context and drop it wherever you are facing an analytic need.  Whether it’s the chapter on ‘Getting Started’, or ‘Data Issues’, or ‘Analytics in Action’, or my nomination for best-in-show, ‘Developing an Analytic Culture’, this is the analytics primer you’ve been searching for, no matter whether your business problem is quality, customers, process or people.

While the obvious application of analytics to human capital (see my prior post, “Strategic Workforce Planning”) is the cost impact, hence the $6 trillion reference above and its prominence throughout the book, I want to direct your attention to the issue from the other direction, the top-line versus the bottom line, and the mixed realities of the post-recession employment picture.  Moreover, I want to tie all of this into another important business paradigm – Treacy and Wiersema’s ‘Value Disciplines’.

Official unemployment in the US currently stands at a tad over 6%, with the unofficial rate, which counts those who have stopped looking for work, at 14%.  The comparable figure for the EU is slightly over 10%, with the extremes running from 5% in Germany to over 25% for Spain.  With that many people out of work, who needs workforce analytics?  Just run the ads and take the lowest bidder, right?

Not so fast.  If your chosen Value Discipline is Operational Excellence, then you most likely aren’t hiring in the Western economies anyhow; you moved those jobs offshore long ago.  On the other hand, if your Value Discipline is Innovation or Customer Intimacy, cost is not your primary concern (a truism whether your specific business problem is workforce or something else like quality, innovation, service or retention, and a truism your approach to analytics should reflect).

What should concern you is the shortage of STEM and skilled workers – the lingering high unemployment rate being a rather asymmetrical affair, primarily affecting the lower-skilled job classifications.  Besides, you’re not looking for the cheapest engineer, scientist, cyber-security specialist, nurse or marketer.  There are multiple stories making the rounds of manufacturers in rural, low-wage regions of the country with 100 applicants for each shop floor position, but unable to find and attract the design and manufacturing engineers and the management to run the place.  Silicon Valley’s recently uncovered anti-poaching cartel is proof enough of the reality and seriousness of the issue.

The benefits of using an analytical approach to addressing STEM and skilled workforce management issues will show up in the revenues, not just on the bottom line, of those companies that depend on innovation, quality and customer service as the foundation of their business model, and who need the right people, not just the least expensive, to make that business model work.  As the saying goes, you can’t just save your way to prosperity, eventually you need to put the emphasis on growth.

Lest you think this STEM shortage is straightforward and one-dimensional, let me scare you to death with this series of posts on LinkedIn by Heather McGowan – “Jobs are Over: The Future is Income Generation” (the link is to Part 2 of the four-part series, the part where I became truly frightened for my children’s future).  And I won’t even get into the unnerving picture that Fitz-enz paints at the end of Chapter 7 of his book – I’ll leave that for you to discover.  Just don’t be taking any three-day weekends.

Here’s McGowan in her own words: “The era of using education to get a job, to build a pension, and to then retire is over. Not only is the world flat, but this is the end of employment as we once knew it. The future is one of life-long learning, serial short-term employment engagements, and the creation of a portfolio of passive and active income generation through monetization of excess capacity and marketable talents.”

Let that sink in for a bit.  A future that some might call entrepreneurship, but others might label ‘gigs’, everyone a temporary contract worker, no benefits, competing to create monetized portfolios (how would you have fared in your twenties or thirties, trying to start a family, under such conditions?).  Is your business ready to address a workforce strictly defined by contractual short-term gigs and monetized marketable talents, whatever that might mean?   While you might initially think you’ve got the upper hand when it comes to employment negotiations with such relatively insecure ‘income generation seekers’, focusing on cost, as I mentioned before, would be missing the point for most organizations.

I’m not saying McGowan is right (and I’m hoping she’s wrong), but I do have to admit that the trends she identifies are already here.  The ‘market economy’ is becoming the ‘market society’, with little indication that this socio-economic movement is slowing down, let alone running into obstacles that might halt it.  In such an environment, and without a far-sighted, disciplined and analytic approach to workforce planning and management, you’ll end up with a top-line going nowhere fast, and a bottom-line spelled I-R-R-E-L-E-V-A-N-T.
