Charles (Dickens and Darwin) and continuous improvement

Whales"You show me a successful complex system, and I will show you a system that has evolved through trial and error."  ~ Tim Harford

TED Talk link:  http://www.ted.com/talks/tim_harford


Karl Marx died thinking that the first communist revolution would occur in Great Britain, driven by the long hours and unsafe / unhealthy conditions in the factories, and the rampant urban squalor and poverty so memorably illustrated by Charles Dickens.  Pre-industrial, agrarian, peasant Russia would never have even made his list of potential candidates.

With hindsight and a more robust economic theory to guide us, it seems pretty clear now that pre-War England was economically complex beyond the point of no return for anyone to have seriously entertained installing a centrally-controlled economy, and probably had been so for more than a century, ever since Robert Walpole, Great Britain’s first Prime Minister, single-handedly invented the modern state financial system.

On the other hand, his megalomaniacal psychopathology aside, it’s easy to see how Stalin, with no history of having lived or worked within a developed, industrial economy, could have imagined it entirely possible to centrally control his newly/barely industrialized, still largely agrarian post-War economy, hence his succession of failed five-year plans.

The same can likely be said for the large, modern corporation – that it too is largely past the point of no return when it comes to centralized control.  I made a point in this previous post (“Metrics for the Subconscious Organization”) that your business “functions day-after-day, minute-by-minute, without your active control or even your conscious knowledge – this is an organization that has long since learned what to do and pretty much runs itself.”

How does a large commercial organization manage to coordinate itself so well?  The market gets by with a single mechanism, price, whereas the larger society within which markets operate has a more complex set of values (e.g. safety, health, education, civil rights) and thus requires a broader toolkit, including laws, regulations and policy.  In this sense, a large commercial business is more like a society than a market, coordinating itself with a wide array of policies, incentives, metrics, strategic objectives, values, mission statements, stories and leadership.

That’s all well and good for day-to-day operations, staying on an even keel, maintaining stasis.  But what about when you want to bring about change to your organization?  In this post (“Changing corporate culture is like losing weight”), I addressed the big hurdle encountered when attempting to make big changes – “The feedback loop, the thermostat that exists in every organization to maintain normalcy and stasis against a changing environment.  You’re trying to enact change against organizational processes that have evolved to specifically minimize change.”

That post discussed making big changes, where, against the tide of homeostasis you push hard and go long and hope that the initial result lands somewhere close to your goal.  But what about small changes and continuous improvement?  How can you hope to get incremental change to stick when all your basic organizational processes are programmed to resist and expel the invading virus of change?

Through the process of evolution.

Evolution through natural selection has two required elements.  The first is variation, a diverse assortment of characteristics and processes that can be acted upon by selective pressure.  That’s your INNOVATION, which I leave to you (and it is why innovation is so important for all organizations – even if you don’t see yourself specifically as a product innovator, you still need internal process innovation, or you will go extinct).  The second element is environmental pressure, something to do the selecting among the variations/innovations, something that rewards fitness.

These environmental factors are nothing more than my list of levers for losing cultural weight, but employed now in the service of continuous improvement:

  • Organizational structure and design
  • Rewards, incentives, recognition and performance management / metrics
  • Tools, resources, systems, data and processes
  • Hiring / selection / training / orientation
  • Leadership / stories / heroes / values / communication

In order to support continuous improvement, the idea would be to internally develop not just a one-time set of levers to be utilized against a single, big strategic objective, but to establish something like an Office of Environmental Pressure with the objective of identifying the targets, levers and incentives across the organization that you want subject to continuous improvement.

For just this once I won’t excoriate you for navel gazing and lack of external data and benchmarks, because the data you need to support continuous improvement is already in-house. You are of course going to want to run some analytics against all that data, explore it visually, and see where the insights, connections and correlations are. Likely targets might be:

  • What functions or processes are ripe for improvement? (an activity-based approach wouldn’t hurt here)
  • What factors are correlated with Quality (or time-to-market, or cycle time, or service level, etc …)?
  • Instead of measuring the same thing three different ways, what’s the single best metric?
  • Which levers are best associated with which behaviors?
  • Which incentives / rewards are most effective?
  • As with your homeostatic Subconscious Metrics, which metrics are the ones everyone (by function / role) should be monitoring for Continuous Improvement?

As a nation’s leadership and policy come from the top, so too do corporate strategy and vision.  But just as Stalin’s Central Party Committee could not will tractors into existence against the reality of the market, neither can the office of the CEO micro-manage the continuous improvement of their organization.  If instead they direct the evolution of the business via selective pressure in the desired direction, progress can be made.

The most important point Tim Harford makes in his TED talk (above) is about the success of a complex system.  Just as most mutations are detrimental to an organism, a heavy-handed, top-down approach to change is more likely to cause damage than improvement.  The law of unintended consequences: fix one problem only to have that fix create three more.  Incent one group to improve their performance and they’ll do it at the cost of overall organizational efficiency and fitness.  But by allowing the organization to steadily evolve under pressure, it can work out for itself the myriad of interconnected kinks and links among processes and functions, and emerge holistically more fit to compete and perform than before.


Standard Cyborg: 3D printing on the brink

3D printing stands at the threshold of crossing the chasm, with Standard Cyborg a likely exemplar of how it may soon go mainstream.  Standard Cyborg is the second company to be founded by Jeff Huber, friend and roommate of my oldest son when they were undergraduates.  (Jeff’s first company, Knowit, an online education company, failed to catch on because, in Jeff’s words, "We overestimated the percentage of autodidacts in the United States.")

Currently on a semester break from his graduate program, my son Garik boarded a plane to San Francisco last week to assist Jeff with his startup, where, as employee #2, and depending on how successful their efforts are, he plans to stay for either two weeks, four weeks, eight weeks, or two years, with that last one to be interpreted as, ‘It’s been so wildly successful that I’m putting off my PhD research for a year’.

While you can read more about Jeff’s company here (“Standard Cyborg”) and about Jeff personally and professionally here (‘Fast Company article’), this is the story in a nutshell.  Jeff is an amputee, having lost his left leg below the knee as an infant.  Jeff’s “walking leg” is a technological masterpiece that comes with a price tag of $20-$25K, not something you want damaged by water (“running legs” can run two to four times as much).  But neither do you want to give up on otherwise regular activities like walking on the beach with your children, or standing by yourself in the shower (try standing in the shower on one leg while shampooing your hair some time – Jeff manages by wedging himself into a corner).

There are over 2 million amputees in the United States and over 10 million worldwide, with 70% of these being lower limb amputees.  The two primary causes in the developed countries are vascular diseases / complications, and diabetes, with the latter steadily increasing as an overweight and over-sugared population ages.  Globally, other factors come into play, including war and land mines.

Jeff’s approach to this need, an affordable ($499), custom 3D printed composite “water leg” as he calls it, is exactly the type of promise 3D printing is meant to fulfill.  It is a big market; it is a finished-product market, not just the prototyping that has dominated the field up until now; it exploits materials (composites) beyond basic thermoplastics; it further exploits the customizable aspect of 3D printing necessary for a comfortable and functional fit for each particular amputee; and it can go mainstream more quickly than the still highly technical niche of 3D bioprinting (e.g. skin, ears, heart valves, livers).  Standard Cyborg with its water leg represents 3D printing’s best shot yet at crossing the chasm.

Before Garik left we discussed what he’d specifically be doing, which, other than “everything”, will be primarily focused on the design elements of structural integrity that his materials science engineering education has eminently equipped him for, and marketing, for which he is totally unprepared.

My marketing advice was for him to keep it simple at this point.  The best model to follow is probably the five stages of 1) Awareness, 2) Research, 3) Evaluation, 4) Closing the sale, and 5) Post-sale service, retention and repeat business.

The web site should focus on stages 2 and 3.  Nowadays, whether it’s B2B or B2C, something like 60% of your potential customers have already done their research and evaluation online before they ever contact you or you ever contact them.  For companies larger than Jeff’s, this means shifting the focus and content of your marketing automation from simple stage-4 coupons to a programmed series of communications aimed at their research and evaluation needs as part of their overall digital experience.

For Standard Cyborg, getting those crucial, initial customer testimonials and stories will be worth their weight in 3D printed composite materials.  Whether you are small, large or a transnational behemoth, it’s always more effective to get your customers to tell their, and your, story for you than for you to be left quoting yourself.

Stage 4 for a small business is about making yourself easy to do business with, whether that be ordering, contracts, payment or delivery.  How easy are you to do business with?  Or shouldn’t I ask?

Stage 5 service and retention is going social and online even more so than pre-sales research.  It’s where multichannel engagement of the individual customer often first comes into play, and therefore where your data integration efforts pay off by keeping the process seamless and silo-free from the customer perspective.  Stage 5 is also where contextual analysis of channel and social media data can yield important insights regarding quality, after-market service, design, loyalty and retention.

I have deliberately skipped over commenting on stage 1, awareness and discovery, as it is clearly the most difficult aspect of marketing for any startup, but like its counterpart in sales, the cold call, it is nonetheless indispensable.  The internet makes stages 2 and 3 easier for everyone, startups included, as it means you don’t have to invest in one-to-one resources for the task – if you make it easy for your customers to find what they need, they’ll do the rest for you.

If anything, however, the internet with its inherent “noise”, actually makes attracting the attention of your potential market harder.  Every day you are bombarded with hundreds of emails and thousands of advertising impressions and Tweets.  Analytical awareness and the value it can add (hence the title, "Value Alley") is what drives me to write this blog, but some days I think I must be nuts - there are 150 million bloggers out there!

Rather than shouting louder amidst the crowd, segment and target your awareness campaigns just as carefully as your research and evaluation messages.  For a startup that can't initially afford Google's search advertising, and even for those that can, the key of course is leveraging the base of an inverted pyramid of influencers and innovators.  Easier said than done, but if Standard Cyborg, or you, can execute, you may find yourself leading an entire industry segment across the chasm.


Creating value on the IoT – It ain’t about you

The Internet of Things is going to be driven by innovative business models as much as by innovative technology.  In order to ground the following discussion, I found it helpful to create this visual depiction of the IoT that defines and distinguishes the key elements that enter into these business models.  My simplified definition includes these six elements:

  1. The network backbone
  2. A server
  3. Smart devices, which I define as configurable, IP addressable devices permitting two-way communication
  4. Sensors, which although IP addressable, are not significantly configurable and allow for only one-way traffic back to the server
  5. That data generated by these elements which travels over the network
  6. Third party / cloud connections to the network; in other words - everything else

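
To make these definitions concrete, here is a minimal sketch of the six elements in Python; the class and method names are my own illustrative inventions, not any IoT standard or product API:

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    """Element 4: IP-addressable but not configurable; one-way traffic to the server."""
    ip: str

    def read(self) -> dict:
        return {"ip": self.ip, "value": 42.0}  # stub measurement

@dataclass
class SmartDevice(Sensor):
    """Element 3: configurable and two-way; the server can push new settings."""
    config: dict = field(default_factory=dict)

    def configure(self, settings: dict) -> None:
        self.config.update(settings)

@dataclass
class Server:
    """Element 2: collects element 5, the data flowing over the network (element 1)."""
    devices: list = field(default_factory=list)

    def poll(self) -> list:
        return [d.read() for d in self.devices]

# Minimal usage: one sensor, one smart device, one poll cycle.
server = Server(devices=[Sensor("10.0.0.7"), SmartDevice("10.0.0.8")])
print(server.poll())
```

Element 6, the third party / cloud connections, is everything that would plug into this picture from outside, which is where the rest of this post goes.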

Business models involve both value creation and value extraction, and it is important to at least recognize that there will inevitably exist a category of “rent seeking” business models that create no reciprocal value.  These are largely the infrastructure components whose real value is primarily defined by ‘capacity’, such as network hubs / platforms, network pipe and switches, and the Last Mile, where they all seek to extract value from the IoT by virtue of their position as chokepoints.   While these may initially pass as viable business models, I expect most to eventually succumb to market and regulatory forces.

Having gotten that unpleasantness out of the way, let’s turn our attention to the business models that create value via their “Things”.   The fundamental case that kicks everything off is of course that of providing and owning the server, a device, and the data generated between them; a straightforward, one-to-one relationship.  After that, everything else flows through the Third Party / Cloud component:

  • How do I add value to the device, or to the server?
  • How do I add value to the data (i.e. Analytics)?
  • Can I connect additional devices that add value to the existing device / server?
  • How can this data add value to some third party business process?

That’s pretty much all there is to the IoT.  Piece of cake, right?  There’s a lot more detail to be explored beneath each of these aspects, of course, but this simple framework should at least provide you with a starting point for brainstorming where you might want to play in the future of the IoT.

One obvious consideration is your ability to access the data and devices.  Can you get access to the data, and at what cost?  Can you get access to a configurable device, and if so, can you voluntarily reconfigure it?  The flip side of this is security: if you are a device/server/data owner, can you protect your data and your smart devices from involuntary reconfiguration (i.e. hacking)?

Beyond that, the salient fact that should jump out at you is that there are infinitely more ways to add value via the network / cloud / third parties / connections / additional devices than through the direct device-to-server connection.  I flirted with this point in this previous post, “The Value is in the Network”, and I would reinforce that the devices are not the endgame, the IoT is not the endgame, even the customers are not the endgame - the Ecosystem is the endgame.

My emphasis in that previous post was around monitoring the network and enhancing your data management / integration / exchange capabilities across that network. The IoT raises the bar from simply monitoring to “managing” your network: actively managing your ecosystem, cultivating partners whose devices, servers and data add value to your own, and vice-versa.   On the IoT, the sum is greater than the parts.  If in your business model 1+1+1 only equals 3, you are quickly going to find yourself pushed aside by an ecosystem where the sum comes to 4 or 5.

For better or for worse, the smartphone is becoming our remote control for life.  But it’s just a remote.  The value is in the content, and the content is coming from all corners.  If you are an IoT player, it isn’t even 'remotely' about you anymore. But it is about you AND your friends.  Successful IoT business models will come down to playing well with others.  Rather than hunkering down behind your IoT firewall, get out there and make friends, starting with making it easy for potential friends to play with you.


Diagnosis: Your data is not “normal”

“Let’s assume a normal distribution …”  Ugh!  That was your first mistake.  Why do we make this assumption?  It can’t be because we want to be able to mentally compute standard deviations, because we can’t and don’t do it that way in practice.  No, we assume a normal distribution to simplify our decision making process – with it we can pretend to ignore the outliers and extremes, we can pretend that nothing significant happens very far from the mean.

Big mistake.

There are well over a hundred different statistical distributions other than “normal” available to characterize your data.  Let’s look at a few of those other major categories that describe much of the physical, biological, economic, social and psychological data that we may encounter as part of our business decision and management process.

The big one when it comes to its business impact is what is commonly known as the “fat tail” (or sometimes, “long tail”).  These are Nassim Taleb’s “Black Swans”.  In the real world, unlikely events don’t necessarily tail off quickly to a near-zero probability, but remain significant even in the extreme, and as Taleb points out, become not just likely over the longer term, but practically inevitable.  It is these fat tail events that leave us scratching our heads when our 95% confident plans go awry.
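
A quick simulation makes the difference tangible.  This is a minimal sketch using a Student’s t distribution as a stand-in for a fat-tailed process (the distribution choice, seed and sample size are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

normal = rng.standard_normal(n)
heavy = rng.standard_t(df=3, size=n)  # Student's t with few degrees of freedom: fat tails

for label, x in [("normal", normal), ("fat-tailed t(3)", heavy)]:
    # How often does an observation land more than 5 standard deviations out?
    p = np.mean(np.abs(x) > 5 * x.std())
    print(f"{label:16s} P(|x| > 5 sigma) ~ {p:.1e}")
```

Under normality, a 5-sigma event shows up a handful of times at most in a million observations; in the fat-tailed sample it shows up thousands of times – the difference between “can’t happen” in the plan and “will happen” in the business.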

Next up are the bounded, or skewed, distributions. Some things are more likely to happen in one direction than in the other.  Unlike with a normal distribution, the mode, median and mean of a skewed distribution are three different values.  ZERO represents a common left-hand bound, where variables cannot take on negative values.  Many production and quality issues have this bounded characteristic, where oversize is less common than undersize because you can always remove material but you can’t put it back on (additive manufacturing excepted).  Too large a part will sometimes simply not fit into the tool / jig, but you can grind that piece down to nothing if you’re not paying attention (I have a story about that best saved for another post).
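
A quick way to see the mode/median/mean split is the lognormal, a common model for zero-bounded quantities.  A minimal sketch (parameters chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 0.75
x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)  # bounded below by zero

print(f"mean               {x.mean():.2f}")             # dragged right by the long tail
print(f"median             {np.median(x):.2f}")         # exp(0) = 1.0 in theory
print(f"mode (theoretical) {np.exp(-sigma**2):.2f}")    # exp(mu - sigma^2) for a lognormal
```

Mode below median, median below mean: three different “centers”, and a decision based on the wrong one will be biased.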

Discrete or step-wise functions might describe a number of our business processes.  We make a lot of yes/no, binary, or all-or-nothing decisions in business, where the outcome becomes either A or B but not a lot in between.  In these cases, having a good handle on the limited range over which an assumption of normality holds becomes important.


Poisson distributions.  These describe common fixed-time interval events such as the frequency of customers walking in the door, calls coming into the call center, or trucks arriving at the loading dock.  Understanding this behavior is critical to efficient resource allocation, otherwise you may either overstaff, influenced by the infrequent peaks, or understaff without the requisite flexibility to bring additional resources to bear when needed.
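
Here is a minimal staffing sketch against simulated Poisson arrivals (the arrival rate and thresholds are illustrative assumptions, not benchmarks):

```python
import numpy as np

rng = np.random.default_rng(3)
mean_calls = 20                                   # average calls per hour
hours = rng.poisson(lam=mean_calls, size=10_000)  # 10,000 simulated hours

# Staffing to the mean leaves you short roughly 4 hours in 10;
# staffing to the rare peak wastes capacity most of the time.
print("P(load > mean)       :", np.mean(hours > mean_calls))
print("P(load > mean + 25%) :", np.mean(hours > 1.25 * mean_calls))
print("95th percentile load :", np.percentile(hours, 95))
```

The 95th percentile, not the mean, is the number that tells you how much flexible capacity you need on call.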


Power laws.  Would you think that the population of stars in the galaxy follows a normal distribution, with sort of an average sized star being the most common?  Not even close.  Small red and brown dwarfs are thousands of times more common than Sun-sized stars, which are tens of thousands of times more common than blue and red giants like Rigel and Betelgeuse.  Thank goodness things like earthquakes and tornadoes follow this pattern, known as a “power law”.

Much of the natural world is governed by power laws, which look nothing at all like a normal distribution.  Smaller events are orders of magnitude more likely to occur than medium sized events, which in turn are orders of magnitude more likely than large ones.  On linear axes a power law shoots up in hockey stick fashion, but it is typically displayed on a logarithmic scale, which converts the hockey stick into a straight line. Don’t let the linearity fool you, though – that vertical scale is growing by a factor of ten with each tick mark.
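
A minimal sketch of that scale behavior, sampling from a Pareto distribution (a textbook power law; the exponent and bin edges are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
events = rng.pareto(a=1.5, size=1_000_000) + 1  # event "sizes" following a power law

# Count events per decade of magnitude: each decade shrinks by a roughly
# constant factor, which is exactly the straight line on a log-log plot.
edges = [1, 10, 100, 1_000, 10_000]
counts, _ = np.histogram(events, bins=edges)
for lo, hi, c in zip(edges, edges[1:], counts):
    print(f"{lo:>6}-{hi:<6}: {c:,}")
```

Each decade of event size holds roughly thirty times fewer events than the one before it – and yet the rare giant events still occur, and still matter.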

Consider a chart of stock price data – could you tell without the axis labels whether it showed monthly, hourly or per-minute prices?  Or, it could just as easily be your network traffic, again measured by the second or by the day.  This type of pattern is known as fractal, with the key property of self-similarity: it looks the same no matter what scale it is observed at.  Fractals conform to power laws, and therefore there are statistical approaches for dealing with them.

One piece of good news: when it comes to forecasting, you don’t have to worry about normality – forecasting techniques do not depend on that assumption. Knowing how to handle outliers, however, is crucial to forecast accuracy.  In some cases they can be thrown out as true aberrations / bad data, but in other cases they really do represent the normal flow of business and you ignore them at your peril.  In forecasting, outliers often represent discrete events, such as holidays or extreme weather, which can be isolated from the underlying pattern to improve the baseline forecast, and then deliberately reintroduced when appropriate.
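
As a minimal sketch of that isolate-then-reintroduce idea, here is a robust outlier flag built on the median absolute deviation (the threshold and the toy sales series are my own illustrative choices):

```python
import numpy as np

def flag_outliers(series, k=3.5):
    """Flag points far from the median, scaled by the median absolute
    deviation (MAD) -- more robust than mean/std when tails are fat."""
    med = np.median(series)
    mad = np.median(np.abs(series - med))
    return np.abs(series - med) > k * 1.4826 * mad  # 1.4826 makes MAD comparable to std

daily_units = np.array([102, 98, 105, 99, 101, 340, 97, 103])  # 340 = promotion day
mask = flag_outliers(daily_units)

baseline = daily_units[~mask]  # fit the underlying pattern on this
events = daily_units[mask]     # model promos / holidays separately, then add back
print("baseline days:", baseline, " event days:", events)
```

The point is not to discard the 340, but to model it as what it is – a promotion – instead of letting it distort the baseline.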

What we’ve just discussed above is called data characterization, and it is standard operating procedure for your data analysts and scientists.  Analytics is a discipline: one of the first things your data experts will do is run statistics on the data to characterize it – tell us something about its underlying properties and behavior – and analyze the outliers.

Economists like to assume the “rational economic man” – it permits them to sound as if they know what they are talking about.  Likewise, assuming a “rational consumer” (customer data is going to comprise a huge chunk of your Big Data) who behaves in a “normal” fashion is pushing things beyond the breaking point.  While plenty of data sets are normal (there are no humans ten times the average height, let alone even twice), don’t assume normality in your data or your business processes where it’s not warranted.

Soon enough we’ll probably drop the “big” from Big Data and just get on with it, but still, your future is going to have a LOT of data in it, and properly characterizing that data using descriptive analytics in order to effectively extract its latent value and insights will keep your Big Data exercise from turning into Big Trouble.


Transformations – Personal and organizational

A new year, and with it comes reflection and resolutions.  While few resolutions are actually kept, change comes anyhow.

I was reminded recently of a conversation I once had with a high school classmate who I had hardly seen since graduation.  We were discussing a third person, and my friend’s comment to me was: “I didn’t know him very well.  And for that matter, I can’t say I know you very well now, either.”  She was of course making the point that, with time, we all change.

And thank goodness is all I can say.  Not only am I not the person she once knew when we were both 18 and on our way to college (that Leo had, shall we say, some developmental opportunities ahead of him), by my reckoning I am currently working on Leo version 7.0, counting from my first, stable, young adult personality at age 15, and am still a work in progress.

My first four versions came in fairly quick succession between the ages of 15 and 28, followed later by longer, more stable periods.  If I had to summarize my experience of these transformations, it would be:

  • A series of relatively impactful events and environmental changes occur (A, B, C, D, E, F, …)
  • Followed by a specific trigger event “X”.
  • The trigger event highlights certain previous life events and gives them significance. While Trigger event X might spotlight events A, B and C, a different Trigger Y would perhaps have selected events D, E and F as being the important precursors.
  • The transformation is not a single moment, but encompasses a period of time on either side of the Trigger, and is often not apparent until some time has passed for reflection and assessment.
  • The transformation is a response to environmental stress, and enhances your physical, psychological and financial competencies for survival in light of that stress.
  • The transformation requires facing fears and taking risks.
  • In retrospect, the transformation looks like a typical story/plot outline, starring you as the protagonist.

Over a period of several months I continually revised my assessment of my transformations. It took me a while to settle on not just seven, but these particular seven, relegating some previous Triggers to mere events while recognizing other events as being the true Triggers and accordingly shifting the time periods in question.

The criterion I settled on for defining a transformation was:  Would I now (or my previous personalities) be willing to go back to being that person?  For example, while I would have little consternation going back to the Leo I was four years ago, that is not the case for the self I was twelve years ago – too much has been learned since then to voluntarily give it back, no matter the price I may have paid for it.  (Not all transformations can be considered positive, but I’m going to leave retrograde motion out of this discussion).

While I might like to be able to claim that I reinvented myself six times over, that would not only be stretching the truth, but more like misremembering and misrepresenting the past.  While I did get better over time at re-engineering each new version, none of the seven Triggers or transformations were deliberate on my part, but merely reactions to changes in myself and my environment.  Life was forcing my hand, not the other way around.

This leads to my first proposal:   We all need to more proactively manage our lives and transformations, and to that end, a life or career coach or mentor is probably not a bad idea.  Someone objective, someone with a broader perspective on the world than we might have, someone to occasionally shake us out of our comfort zone, but as part of a proactive plan instead of a reactionary Trigger.  Considering the increasing pace of technological and cultural change, this is more necessary today than ever.

I had a coach early in my career, but I think her contribution was more in the direction of stability than transformation, which as a new parent was probably exactly what I and the new family needed at the time.  However, I do wish now that I had continued to work with her – there was no need for that fifth transformation to have waited 14 years to commence.

My second proposal is that, following this model, organizations are probably in a better position to proactively trigger transformations than are individuals.  Organizations are much better suited to develop and compartmentalize the capability to objectively analyze themselves, and then provide the incitement to change.  If not internally, this capability can also be readily acquired externally via change management consultants.

An entirely reasonable organizational approach to change would be to replicate the individual process by deliberately creating the preparatory, foundational precursor events A, B and C (the ‘rising action’), then instigating a Trigger (the ‘crisis’), followed by events D, E and F (the ‘denouement’) which complete the story of the transformation and become the new context in which the organization understands itself and its mission.

Two factors are primarily responsible for the lack of both organizational and personal transformation.  The first is the lack of a vision, the lack of the transformative storyline / myth / context that I proposed above.  In an organization this is the job of the CEO; as for an individual – this is why the use of a career/life coach or mentor can be so beneficial.

The second factor is fear and risk.  For an individual the risk is typically emotional or financial.  For an organization not in financial straits, the analog to the individual’s psychological risk would be the lack of a well-defined strategy.  You know you need to be on the opposite river bank, and that the only bridge is weak and deteriorating and won’t be there much longer, but you hesitate because the other side is unknown territory.

One approach some organizations take is to spin off their fearless, agile component and let them lead the way without the baggage of the larger organization.  Another approach is to hire a CEO or other talent with experience on the other side.  Or, you could scout the new territory, often with the help of outside consultants who have experience in that terrain, or utilize insights gleaned from your current business intelligence database.

Lastly there is the approach I discussed some time ago (“Having a strategy versus being strategic”) of simply making the commitment, crossing that river first and allowing your strategy to develop over time once you’re there and can make refinements based on real data rather than speculation.  As I admitted in that previous post, I am not necessarily comfortable with the idea of strategy as simply the sum of my tactics, but sometimes that approach may be just what’s called for.  If your future is on the other side, whether that be the love of your life and future spouse, or because technology is making your industry / market / business model rapidly obsolete, sometimes you just need to face your fears and make the leap.  On a personal level this is similar to the behaviorist approach of inverting the "Beliefs ---> Attitudes ---> Behaviors" model, and simply changing your behavior and letting your beliefs and attitudes catch up later.

Regardless of how you get there, personally or organizationally, eventually you ARE going to end up on the other side of that river, with many more rivers to cross in your future after that.  The question is:  Will you cross unwillingly and unexpectedly because the bridge is burning or the ground you’re standing on has given way, or will your transformation be a more deliberate affair, part of a purposeful journey or quest rather than a flight of necessity?


Getting started with Supply Chain Segmentation

“All unsuccessful segmented supply chains are alike; each successful supply chain is successful in its own way.” ― Leo Tolstoy Sadovy

Segmentation is the new big thing in supply chain management, or at least it’s an old big thing made new again.  It was the keynote topic at last month’s IE Group Supply Chain Summit in Chicago, and is typically addressed by at least a couple of speakers at every supply chain conference I’ve seen lately.

The complexity of customer expectations and service levels, your product portfolio, the global supply chain and varied distribution channels, coupled with the internet and social media, makes moving from an undifferentiated to a segmented supply chain almost an imperative, even though doing so adds a layer of complexity that many manufacturing companies are not ready for.  To read the recent literature on the topic, when you start trying to combine segmentation based on your products with segmentation based on your customers, it goes from merely complicated to overly complex in a heartbeat.

Here’s a short list of just a few of the various segmentation strategies and permutations to consider:

  • Product-driven segmentation:
    • Large volume, long production runs, standardized operations
    • Limited editions, fluctuating demand
    • Made-to-order, low volume, short runs, high margin (high cost-to-serve?)
  • A volume / variability 2x2 matrix
    • High volume commodities
    • High volume seasonal or promotional items
    • Low volume, predictable
    • Low volume specialty or custom orders
  • A typical three-segment retail-oriented model:
    • Regular replenishment
    • Seasonal, but predictable demand (swimwear, lawn fertilizer)
    • Volatile, one-off demand (fashion, new products, promotions)
  • Customer-based segmentation – many ways to do this:
    • Standard, higher quality, or premium service / customization
    • By channel
    • By lead-time service level (build-to-stock, configure-to-order, build-to-order)
    • By customer size, volume or value
    • Other customer characteristics, such as vendor managed inventory, level of data and forecast/POS integration / collaboration, SLA penalties or geography
  • Risk-oriented segmentation, based on political, environmental or economic risk/disruption factors, and on product lifecycle stage considerations

I am a practical sort, concerned primarily with execution.  I want to make Pareto’s Law work for me and go after the low-hanging 80% that only requires 20% of the effort, and I want that first demonstrable success.  Lastly, I would be well advised to dust off the old adage – keep it simple, stupid – and that list of possible segmentation models above looks anything but simple.
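
In that spirit, here is a minimal Pareto sketch – toy SKU volumes standing in for your own volume or cost-to-serve data:

```python
import numpy as np

# Hypothetical SKU-level annual volumes -- stand-ins for your own data.
volumes = np.array([5400, 3100, 2200, 900, 400, 250, 120, 80, 40, 10])

# Cumulative share of total volume, largest SKUs first.
share = np.cumsum(np.sort(volumes)[::-1]) / volumes.sum()

# How many SKUs does it take to cover 80% of total volume?
n80 = int(np.searchsorted(share, 0.80)) + 1
print(f"{n80} of {len(volumes)} SKUs cover 80% of volume")  # -> 3 of 10
```

Those few high-coverage SKUs are where your first, simple segment – and your first demonstrable success – most likely lives.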

The conference keynote case study mentioned above concerned a multinational alcoholic beverage company that was trying to balance the production needs of large volume, stable, established brands with the flexibility needed in a surprisingly innovative market that sees several hundred new products introduced every year.  Their big breakthrough was to move from a one-plant/one-brand, one-line/one-product practice (largely inherited via multiple acquisitions over the years) to an agile approach where each line in each plant could handle any combination of product, bottle, label or packaging.  For example, before the changeover, there were some labels that had to be spun on clockwise, and other labels counterclockwise, which just by itself cut the number of available production lines in half.

With that in mind, and based on the success stories and key takeaways I’ve seen presented or in print, I think I’d approach my first supply chain segmentation project in the following manner:

  1. Get a good understanding of my cost-to-serve.
  2. Employ analytic forecasting.
  3. Take a product-oriented approach to the supply chain segmentation.
  4. Deal with my customer segmentation opportunities via inventory and service policy.

Breaking these down a bit further:

  1. Cost-to-serve. Before I do anything, I want accurate product, process, customer and channel costs on which to base my decisions, informed by a cost and profitability management solution that gives me cost output I can trust.
  2. Analytic forecasting. Because it all starts with the forecast. It can only get worse from there. Start higher in order to finish higher.
  3. Product-oriented approach. Yes, it’s inside-out thinking, but it seems to be where all the successful segmentation projects started from. It’s easier to understand and control than either working back from the customer or trying to bite off the entire holistic supply chain in one go.
  4. I’m still going to have to deal with customer and channel differences. What if a high-value customer wants a low-value product? We all know how that story ends – Lola gets what Lola wants. I need to accommodate my premium customers through some post-production combination of inventory policy, customer service/care, and order allocation/commitment process.

I can, however, imagine several scenarios where I might have to start from the customer and work backwards, such as having the federal government as a customer (where mil-spec products might necessitate a holistic supply chain approach all the way back to the farthest tier-n supplier), or when you have significantly different classes of customers who buy through distinctly separate channels. But for all practical purposes, you aren’t going to get one specific segmentation scheme that meets both all of your operational priorities and all your high-priority customer needs (and mitigates all your major supply chain risks).

One final bit of advice from the experts can be summed up as:  One physical supply chain with multiple virtual segmented supply chains running against it.  These virtual supply chains are distinguished by policy, not by brick-and-mortar – inventory, sourcing, production, fulfillment, logistics and service policies.  Because it’s easier to change policy than to change concrete and steel.

As nearly every supply chain expert stresses, one size does not fit all.  You need to select a segmentation strategy that’s right for your business.  But please do select just one appropriate strategy, not some unworkable hybrid. Unsuccessful supply chains are alike in that they tend to be more complex than they have to be.


Big Silos: The dark side of Big Data

The bigness of your data is likely not its most important characteristic. In fact, it probably doesn’t even rank among the Top 3 most important data issues you have to deal with.  Data quality, the integration of data silos, and handling and extracting value from unstructured data are still the most fertile fields for making your data work for you.  [And if I were to list a fourth data management priority it would be, as I described in this previous post (“External data: Radar for your business”), the integration of external data sources into your business decision support process]

Data Quality:  The bigger the data, the bigger the garbage-in problem, which scales linearly with data volume.  Before you can extract value from the bigness of the data, you need to address the quality of the data itself.  If you haven’t been employing robust, scalable data quality tools, now would be the time.

Have we gotten any better at data quality? My personal, one sample survey would indicate that we have not.  With a relatively unusual last name, Sadovy, although only six letters, I’ve seen it misspelled over two dozen different ways in my life, and I thought I’d seen them all by my mid 40’s.  But once my three children became college-aged and started receiving daily credit card offers in the mail, several new ways to misspell my name came to light, a credit to the creativity of today’s automated processing systems.  Even being a Smith/Smythe or Jones/Joens doesn’t leave you immune to a misplaced bit or byte.

Without a focus on data quality, big data just gives you that many more customer names to get wrong.
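
Name matching is a classic piece of this.  A minimal sketch using Python’s standard-library difflib (the names other than mine and the 0.8 threshold are invented for illustration – real entity-resolution tools tune thresholds against known match/non-match pairs):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1]; production data quality tools add
    phonetic codes, edit distance and address context, not just this ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

canonical = "Sadovy"
candidates = ["Sadovy", "Sadovey", "Sadovi", "Sandoval", "Smith"]

for name in candidates:
    score = similarity(canonical, name)
    flag = "<- probable same customer" if score >= 0.8 else ""
    print(f"{name:10s} {score:.2f} {flag}")
```

Run across millions of records, this kind of scoring is what keeps one customer from becoming five.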

Data Integration:  If you’ve got a data silo problem, and who doesn’t, then all that big data contributes to the process is to make those silos bigger.  Which makes the eventual data integration exercise that much more of a challenge.

Enterprise big data comes at you from a dizzying array of directions – from mainframes and ERP systems, from transactional and BI databases, from sensors and social media, from customers and suppliers. To make matters worse, each of these various sources and applications has its own, sometimes proprietary, data model.

And we’re still not finished with the complexities of this issue yet, because enterprise data has one more endearing quality that makes integration difficult – it’s decentralized and distributed. Extracting value from its bigness by creating one humungous centralized, homogeneous data warehouse is simply out of the question.  If Sartre had been a philosopher of data science he might have said, “Integration precedes value extraction”.

Unstructured Data:  Depending on what study you prefer, it’s claimed that 70 to 90 percent of all data generated is unstructured.  This unstructured bigness doesn’t readily fit into predefined columns, rows, data entry or relational database fields.  Customer feedback, emails, contracts, Web documents, blogs, Twitter feeds, warranty claims, surveys, research studies, client notes, competitive intelligence, often in different languages and dialects … the list goes on. Who has the time to read all this, let alone find an efficient way to extract the latent value from it?

Unstructured data may be both big and bad, but again, with the right tools, it’s not unmanageable. Text mining, sentiment analysis, contextual analysis – there are automated machine learning and natural language processing techniques available today to deal with the volume and ferret out the insights.
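
As a toy illustration of what those techniques automate at scale (the warranty-claim snippets here are invented):

```python
from collections import Counter
import re

claims = [
    "Pump failed after two weeks, seal was leaking badly",
    "Great product, but the seal started leaking within a month",
    "Motor noise, then the pump seized. Seal leaking too.",
]

# Crude term counting over free text; production text mining adds stemming,
# phrase detection, sentiment scoring and entity extraction on top of this.
words = Counter()
for claim in claims:
    words.update(re.findall(r"[a-z']+", claim.lower()))

print(words.most_common(4))  # the recurring terms point straight at the quality issue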

‘Big Data’ is of course a relative term, but when I think ‘big data’ one of the following three data categories seems to be in play:

  • High transaction volumes: Millions of customers, billions of transactions (e.g. ATMs or POS), or tens of thousands of SKUs crossed with other attributes such as retail locations, cost and/or service levels.
  • Temporally dense: Sensor data, audio.
  • Spatially dense: Video, satellite imagery.

The business issue becomes – what do you want to do with all this data? And the place to start is not with the data, or with its bigness, but with the business problems you want to solve, the business insights you want to gain, and the business decisions you want to support.  Starting from there and working backwards to the data means running squarely into the issues of data quality, data integration and unstructured text analytics.  It’s only after you get a handle on this trio of capabilities that you can begin to effectively tap the big data spigots.

Extracting tangible value and insights from high-quality, integrated data, no matter its volume, velocity or variety, is where the payoff lies. Getting to this payoff in an environment where your data is growing exponentially in all dimensions requires an investment in robust data management tools. The consumers of this data, the business users, don’t know or care about its bigness – they just want the right data applicable to their particular business problem, and they want to be able to trust that data. Trust, access and insights – it’s got “quality” and “integration” and “analytics” written all over it.


Customer Relations by Walking Around

Perhaps nowhere is the saying “time is money” more true than in the construction industry.  There is no better indicator of project cost and budget over/underrun than the number of days on-site.  Reducing that number has a near 1:1 relationship with cost cutting, so it’s no wonder that days on-site is the most watched project metric.

Further complicating matters, the construction industry is well behind the 3D adoption curve, still relying primarily on 2D blueprints when most other manufacturers have long since moved to 3D CAD-CAM design and production systems, despite the obvious benefits of applying 3D systems to the construction of 3D physical structures.

Stepping into this breach is Nancy Novak, Vice President of Operations for Balfour Beatty Construction services, a speaker at the IE Group's Manufacturing Analytics Summit earlier this year.  Nancy specializes in applying off-site manufacturing (OSM) techniques to large commercial and industrial projects – one of the most innovative processes to recently emerge in this industry.

Or maybe I shouldn’t call it a process, as much as OSM’s intent is to productize the construction industry, to allow it to standardize and reap the benefits of common manufacturing techniques and processes that have been around for decades.  The benefits of OSM include:

  • Faster – Fewer days on-site with a more predictable schedule  (i.e. lower cost)
  • Safer – Less on-site labor, better site logistics
  • Better quality, with a more predictable product

This is the story that Nancy brings to her potential clients.  With each project, she explores with the construction team the possibilities for modular systems that can be assembled off-site and then integrated into the larger structure on-site, whole and in working order, such as bathrooms, kitchen facilities, elevators and staircases, office space, HVAC, interior and exterior walls, and even entire living suites for apartment complexes.

Perhaps the most surprising aspect of her work is how often the client informs her that she is the first person who ever proposed such an approach to them, how often she is the first person to suggest that they take a walk through a current project to assess what improvements might be able to be incorporated into the next one.  Not so much management-by-walking-around as sales, or customer relationships, by walking around.

This is an easy lesson to apply to your own highly-competitive manufacturing business.  Are you tired of the price wars?  Are you looking for a differentiator other than features, functions and performance in a largely mature market?  Are you interested in taking need-based, consultative selling to the next logical level?  Then instead of making the focus of your next customer visit your own products and services, simply ask for the opportunity to walk around their business environment and ask “what if”?

Many of your customers will of course have well-defined problems with straightforward solutions, where the only obstacle is budget, but it’s more likely that their needs and problems are much more nebulous or even completely hidden.  As Henry Ford once famously quipped, “If I had asked people what they wanted, they would have said faster horses.”  Often they are looking for you to be the expert, or, as we often say here in the world of SAS analytics, “tell me something I don’t know”.  To get to the answer, first you need the insight.

I couldn’t possibly list here all the insights you and your customer might uncover, but just to give you a flavor for the types of questions to ask:

  • What can we do regarding custom packaging / logistics that would better suit how you use our product?
  • What services might better be provided on-site or mid-stream rather than all before or after delivery?
  • What if we could manufacture the product in multiple components (or singularly) for easier installation / service?
  • What integration could we be doing with your other suppliers before our product ever gets to you?

If all goes well, this inevitably leads to a discussion around where BOTH parties are making changes to their products and processes to reduce the total overall cost and/or to otherwise make the total end product more competitive. Not just, “What can I do for you?”, but “What can we do together?” The proverbial yet rarely seen "win/win".  Getting to this level of conversation is the best differentiator you could ever have.  You are no longer just a vendor, nor even a ‘strategic supplier’, but a real business partner.  You are no longer competing on price against a dozen other contenders, but are now critical to making your client more competitive in THEIR market.

So what are you waiting for?  Go for a walk – it will be good for you, … and your customer, …and their customer.


Analogies, mind-mapping and New Product Forecasting

There are two ways you can react to a “Hey – that was my idea” situation.  The first would be to throw a pity party and lament about how unfair life is – if only the car hadn’t broken down and I didn’t have grass to mow and laundry to do I could have filed a patent and been a millionaire by now.  The other is to recognize that you were never going to do anything of the sort under any conditions anyhow and simply take the experience as confirmation bias of how brilliant you are.

When I came across Pulitzer Prize-winning author Douglas Hofstadter’s latest work, “Surfaces and Essences: Analogy as the Fuel and Fire of Thinking”, I chose the latter course.  His core theme is that analogies lie at the heart of how we develop concepts, how we construct language, how we understand the world, how we think – something I not only heartily agree with, but a notion I had entertained myself decades before Hofstadter’s book.

Among other things, I fancy myself an amateur student of language.  You see, as a parent, by necessity you become an amateur student of a whole host of subjects that previously may have never interested you.  For example, I find that parents are three times more likely than non-parents to know that Michael Crichton got it wrong: T-Rex is from the Cretaceous, not the Jurassic, and I have the short video, “Cretaceous Park”, to prove it, created by my then five year-old son, who produced it in order to set the record straight among his kindergarten classmates.

Likewise, as a parent you also quickly become an expert in the field of linguistics as you watch in amazement the literal explosion of language once your children have mastered a basic vocabulary.  They start with what they have in their linguistic toolkit and build on it, making telltale mistakes along the way that show how their minds are working, such as undressing a banana, or cooking water, or breaking their book, before they’ve learned the verbs peel, boil and tear.

Metaphors come next and get incorporated into the very meanings of words: tables have legs, bellies have buttons, and airplanes get tails.  The analogies get more complex over time, as we encounter windows of opportunity, haunting melodies and watertight reasoning.  These later develop into idioms where sometimes the analogy is still clear, as in ‘bend over backwards’, ‘between a rock and a hard place’ or ‘stacked the deck’, and others where the root is barely discernible, such as ‘kick the bucket’, ‘egg someone on’ or to ‘give someone short shrift’.

One thing I instinctively knew about myself at a young age was that my preferred learning style was by analogy and via storytelling.  Rather than feverishly trying to scribble every single detail into my notes as the teacher or professor spoke, I saved those for later (an especially useful approach in today's Google-age) and focused on relating the main and secondary concepts with each other and with what I already knew, working them into my existing knowledge framework and creating a new, expanded or more complex story about the subject for myself.  I was mind mapping, or concept mapping as I thought of it, way before it became a thing.

This concept of analogies is what lies behind SAS’ New Product Forecasting solution.  New product forecasting (NPF) can be a recurring challenge for consumer goods and other manufacturers and retailers. The lack of product history or an uncertain product life cycle can dampen the hopes of getting an accurate statistically-based forecast.  Here are some of the NPF situations you might encounter:

  • Entirely new types of products.
  • New markets for existing products (such as expanding a regional brand nationally or globally).
  • Refinements of existing products (such as new and improved versions or packaging changes).

SAS’ patent-pending structured judgment methodology helps you automate the evaluation and selection of candidate analogous products, facilitates the review and clustering of previous new product introductions, and generates statistical forecasts. This structured judgment approach uses product attributes from prior and new products, along with historical sales, to create analogies.

The use of analogies is a common NPF practice. You can see it, for example, in the real estate market, where an agent will prepare a list of “comps” – similar houses in the area that are on the market or have recently sold – and use this to suggest a selling price.

The structured analogy approach requires two types of data – product attributes (for prior and new products) and historical sales (for prior products). Product attributes can include:

  • Product type (toy, music, clothing, shirts, etc.).
  • Season of introduction (summer item, winter item, etc.).
  • Financial (target price, competitor price, etc.).
  • Target market demographic (gender, age, income, postal code etc.).
  • Physical characteristics (style, color, size, etc.).

The statistical forecast is then built using a structured process based on defining and selecting candidate surrogate products and models.  Furthermore, you can combine this with data visualization to study previous new product introductions to gain a better sense of the associated risks and uncertainties.
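
To make the analogy idea concrete, here is a minimal sketch of similarity-weighted forecasting – emphatically not SAS’ patent-pending methodology, just the general flavor; the products, attributes and weighting scheme are all invented for illustration:

```python
import numpy as np

# Attribute vectors: [summer item?, target price / 100, apparel?]
prior_products = {
    "beach towel":  (np.array([1.0, 0.20, 1.0]), np.array([10, 40, 90, 60, 20])),
    "swim goggles": (np.array([1.0, 0.30, 0.0]), np.array([15, 50, 80, 55, 25])),
    "wool scarf":   (np.array([0.0, 0.40, 1.0]), np.array([60, 20, 5, 10, 70])),
}
new_product = np.array([1.0, 0.25, 1.0])  # e.g., a new swimsuit line

def similarity(a, b):
    """Closer attribute vectors -> larger weight (simple distance-based score)."""
    return 1.0 / (1.0 + np.linalg.norm(a - b))

# Forecast = similarity-weighted blend of the analogs' launch curves.
weights = {name: similarity(new_product, attrs)
           for name, (attrs, _) in prior_products.items()}
total = sum(weights.values())

forecast = np.zeros(5)
for name, (_, sales) in prior_products.items():
    forecast += weights[name] * sales
forecast /= total

print(np.round(forecast, 1))  # weeks 1-5 of the new product's launch curve
```

The beach towel, nearest in attributes, dominates the blend – exactly the “comps” logic of the real estate example, expressed numerically.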


Using analogies to improve your forecasting should not seem at all foreign – you’ve been using analogies since you were a toddler to expand your knowledge base by connecting and building on what you already know.  To find out more, check out this white paper, “Combining Analytics and Structured Judgment: A Step-By-Step Guide for New Product Forecasting”, and learn the details of getting from A to B, of getting from the product history you know to the new product forecast you don’t.  You might call it mind-mapping for your new product forecast; See - analogies are everywhere!


CaaS – Crime-as-a-Service: Murder on the Internet of Things

Europol, the law enforcement agency of the European Union, in its recently released 2014 Internet Organized Crime Threat Assessment (iOCTA), cited a report by U.S. security firm IID that predicts that the first “online murder” will occur by year end, based on the number of computer security system flaws discovered by hackers.

While there have been no reported cases of hacking-related deaths so far, former vice president Dick Cheney has had the wireless function on his implanted defibrillator disabled in order to prevent potential hackers from remotely accessing his device. Just such a scenario was played out fictionally in the political TV thriller Homeland, in which his fictional counterpart was murdered by terrorists who were able to hack into the vice president’s pacemaker.  In an interview last year, Cheney said, “I was aware of the danger that existed and found it credible – [the scene in Homeland] was an accurate portrayal of what was possible.”

Cheney’s fears are far from unfounded.  2012 research from security vendor IOActive regarding the security shortcomings in the 4.5+ million pacemakers sold between 2006 and 2011 in the U.S. turned up the following:

  • Until recently, pacemakers were reprogrammed by medical staff using a wand that had to pass within a couple of meters of a patient, which flips a software switch that allows it to accept new instructions.
  • But the trend is to go wireless. Several medical manufacturers are now selling bedside transmitters that replace the wand, with a range of up to 30 to 50 feet.  With such a range, remote attacks become more feasible.  For example, devices have been found to cough up their serial number and model number with a special command, making it possible to reprogram the firmware of a pacemaker in a person's body.  Other problems with the devices include the fact they often contain personal data about patients, such as their name and their doctor. The devices also have "backdoors," or ways that programmers can get access to them without the standard authentication - backdoors available for more nefarious uses.
  • Just as your laptop scans the local environment searching for available WiFi networks, there is software out there that allows a user to scan for medical devices within range. A list will appear, and a user can select a device, such as a pacemaker, which can then be shut off or configured to deliver a shock if direct access can be obtained.
  • As if this wasn't bad enough, it is possible to upload specially-crafted firmware to a company's servers that would infect multiple pacemakers, spreading through their systems like a real virus - we are potentially looking at a worm with the ability to commit mass murder.

Can it get worse?  By now you’ve heard of SaaS (software-as-a-service) and PaaS (platform-as-a-service), but how about CaaS – Crime-as-a-Service?  From the Europol report: “A service-based criminal industry is developing, in which specialists in the virtual underground economy develop products and services for use by other criminals. This 'Crime-as-a-Service' business model drives innovation and sophistication, and provides access to a wide range of services that facilitate almost any type of cybercrime. As a consequence, entry barriers into cybercrime are being lowered, allowing those lacking technical expertise - including traditional organized crime groups - to venture into cybercrime by purchasing the skills and tools they lack.”

Just take a moment to let that sink in: Lowering barriers to entry, criminal innovation, CaaS as a business model.  It really shouldn’t surprise us, though - criminal enterprises have been adapting the principles of sound business management since the early days of organized crime.  Did you know that the illegal drug market is a $2.5 trillion industry? Not merely a billion dollar industry, it’s a TRILLION dollar industry, employing standard business school tactics such as quality control, freemium pricing models, upselling, risk management and branding, not to mention the ever changing supply chain and logistics challenges.

Cyber criminals are at least, if not more, sophisticated than the typical drug trade.

Lest you think that cyber security is primarily the province of the big banks and retailers, consider how your products will integrate with the Internet of Things (IoT) – it should make you think twice.

I went over twenty years with the same credit card number - now I have to get a new one pretty much every year because someone got hacked, and I’m guessing that your experience hasn’t been much different.  And remember, these breaches are occurring at large enterprises already employing a significantly sized staff of cybersecurity experts.

If your device is going to be on the internet, security will need to be baked into the design from the very beginning.  Again, from the Europol report:  “The Internet of Things represents a whole new attack vector that we believe criminals will already be looking for ways to exploit. The IoT is inevitable. We must expect a rapidly growing number of devices to be rendered 'smart' and thence to become interconnected. Unfortunately, we feel that it is equally inevitable that many of these devices will leave vulnerabilities via which access to networks can be gained by criminals.”

Rod Rasmussen, the president of IID - the source of the murder prediction mentioned at the beginning of this post - had this to say: "There's already this huge quasi-underground market where you can buy and sell vulnerabilities that have been discovered. Although the first ever reported internet murder is yet to happen, ‘death by internet’ is already a reality as seen from a number of suicides linked to personally-targeted online attacks.”

While it’s unlikely that anyone will die from a stolen credit card number, that’s not going to be the case for many of the tens of billions of devices attached to the internet, from medical devices to wearables to the connected car.  As a manufacturer of current and potential IoT devices, you may not be aware of SAS’ dominant presence in the fraud detection/prevention and cybersecurity field.  When you get a call from your bank freezing your credit card and questioning that $3,500 purchase at a shopping mall in Altoona, it was likely SAS analytics behind the scenes that identified and flagged the fraud.

If CaaS is going to be part of the criminal elements’ business model, cybersecurity will need to be part of your product design and IoT business model, and SAS can help.  While your brand may survive a variety of production quality problems, it won't survive a murder on the IoT.


[You can also learn more about cybersecurity from Ray Boisvert, CEO and founder of I-Sec Integrated Strategies, at his presentation, “The Threat Landscape: Cyber Tools and Methods Transforming the Business Environment,” at SAS' Premier Business Leadership Series, Wednesday, Oct. 22, from 2:15 to 3 p.m. Boisvert sees cybersecurity as a task for analytics that can help organizations tease out the proverbial signal from the massive internet “noise” around serious threats. The challenge is to identify the right threat vector related to the most valued elements an organization holds dear. The organization will only be successful if it has technology to quickly digest huge streams of data, in real time, so that it may begin to see patterns that can thwart further attacks.]
