Cloud Encounters of the Fifth kind

CSETI, the Center for the Study of Extraterrestrial Intelligence, defines a “Close Encounter of the Fifth Kind” as an event that involves direct communication between aliens and humans (a “Close Encounter of the Third Kind” is one in which an animated creature is present).  So I think Spielberg misnamed his movie by two whole steps.  We most definitely had some direct communication going on there, beginning with the iconic five-tone sequence of B flat, C, A flat, (octave lower) A flat, E flat, and progressing to the point where the technician announces, “We have a translation interlock on their audio signal – We’re taking over this conversation, … NOW!”, after which the computers and the keyboard and the mothership go about their business without any further human involvement.

While it has been a couple of years since we passed the point where more than half of all Web traffic became non-human – mostly search engines, bots and spam – when it comes to the internet as a whole, video, media and gaming still hold sway at 50%+ of the transmitted bits.  The peer-to-peer (P2P) segment currently comprises about 20% of the total, largely dominated by file sharing and financial trading, but masked within it is the fastest growing component:  machine-to-machine (M2M) traffic, expected to grow by more than an order of magnitude within just the next five years.  When it comes to the Internet of Things (IOT), the future clearly belongs to the Things.

There is a well-known saying that even a stopped clock is right twice a day.  I once heard the communications philosopher Marshall McLuhan described as a clock that was only right once in a hundred years, but when he was right he was dead on (“The medium is the message” and the “global village”, from “Understanding Media” and “The Gutenberg Galaxy” respectively).  Ken Olsen, the founder and former CEO of Digital Equipment Corporation, would probably fall somewhere in between.

Olsen is today most infamously remembered for his “snake oil” comments regarding UNIX and, taken somewhat out of context, his dissing of the personal computer.  I worked for Olsen for eight years, and what gets left out of his PC comments is his vision for the future of information – “information as a utility,” he called it.  Combined with the standardization of Ethernet in the 1980s, he foresaw people plugging into the wall for information just as they might plug into an electric socket or connect a home to the water and gas utilities.  Thus the ubiquitous VT220 terminal of the times, the smartest dumb terminal ever made.

Olsen’s vision was spot on, but his timing left a little to be desired.  That time is drawing closer, step-by-step, piece-by-piece.  We’re seeing components, such as the Cloud, the Web, the IOT and Analytics at the Edge, along with mobile and social technologies, grow, mature, connect and overlap.  However, as the author William Gibson noted, “The future is already here — it's just not very evenly distributed”.

The Cloud is one of those currently unevenly distributed elements.  I think the word itself will go out of non-meteorological use within a dozen years or so, as information becomes more of a utility and cloud computing becomes commonplace.  Mentioning the Cloud will one day date you just as talk of rotary phones and punch cards (and VT220s) dates you now.

Four years ago, when I started chairing financial conferences for the IE Group, the primary concern when it came to the Cloud was data security.  Four years later, CFOs seem to have become more comfortable with the security issue, and now the reluctance comes from a different quarter.  Half of the effort and cost for any large system or process re-engineering initiative occurs on the front end: the data, documentation, admin, and policy and procedure clean-up.  You can’t just throw your mess over the wall and into the Cloud and expect it to work, and neither will your Cloud partner accept it.  What I hear from these same CFOs today is that once they’ve cleaned up their act, they rather like what they see, and then proceed by keeping the back end of the project in-house.

But in this uneven world in which we find ourselves, I think it’s going to be the security issue that actually drives businesses to deploy more rapidly to the Cloud.  The Cloud providers, of course, will all insist that since it’s your data, the data security liability is yours and yours alone.  But wouldn’t you rather be behind a firewall defended by TEAMS of experienced cyber-security professionals, each separately dedicated to one of the primary threats – those emanating from China, Russia, India or even the U.S. (“Top Ten Hacking Countries”)?  With cyber threats becoming both more numerous and more complex all the time, it’s already tough enough for most firms to fill key data security roles, let alone compete with Google, Amazon, Microsoft and the NSA for the top talent.

Before information truly becomes a utility there are still some business and content creation models to be worked through, conflicting standards to be ironed out, and turf wars over gate-keeping and rent-seeking to be fought, but it does not appear that technology will be a barrier.  Brian Arthur’s “digital economy” is already here (“The Second Economy”) with the advent of large, non-financial but also non-product enterprises such as Facebook and Google.  Arthur’s digital economy continues to build out its nervous system, with the Things of the Internet not just talking to each other, but learning as they go (“Machine Learning”) – they are taking over this conversation, and they don’t care whether it’s cloudy or not.


A Holistic view of Product Quality

While managing quality within the four walls of your own operation is all well and good – and totally necessary – both the market and your bottom line are demanding a more holistic, quality-lifecycle approach, and in support of that aim there is a treasure trove of downstream data waiting to be tapped to improve product quality and customer satisfaction.

The impact of this downstream customer quality data can be (badly) described by this less-than-perfect analogy about having a baby.  It goes like this:  With little or no incoming inspection of supplier material, the baby is conceived and spends the next nine months in production.  Typically born in a hospital, the baby will undergo a number of invasive outgoing inspection procedures before being released for shipment.  Depending on the manufacturer’s health care plan, the baby will come with an 18- to 24-month warranty, during which time the proud parents will make regular pediatric dealer visits for routine inoculation maintenance.  The warranty tends to expire before the (now) toddler begins to operate in less forgiving environments, with many of the post-warranty malfunctions being handled by the nearest urgent care or emergency room.  (I have three children, and have been to the emergency room four times during their childhood – all four times with the same child.  It would now appear, however, that the involuntary software upgrades (i.e. learning, experience) that accompanied these hardware failures have at last had their intended effect in ameliorating the culpable risk-taking behavior.)

Then they grow up, go off to college, move out of the house, and live for ANOTHER seventy years – seventy more years of additional hardware (and occasionally, software) breakdowns.  From this article, “The Eleven Most Implanted Medical Devices in America”, the top five are:  lens implants to replace cataracts, ear tubes, stents, artificial knees, and metal screws, pins, plates, and rods.  My son – the other one, the one who never went to the emergency room once despite playing lacrosse all through both high school and college – is in grad school studying biomechanical engineering.  It looks like there’s a bright future for him in either joints (knees and hips) or cardio (stents, pacemakers and defibrillators) should he so choose.

The poorly illustrated point here is, of course, that with humans, as with manufactured products, there are numerous downstream quality issues that never get reported back to the original hospital / manufacturing plant.  Just as you will likely never return to the hospital of your birth for any kind of treatment, but will be treated in a variety of specialized care centers across the country or the globe, your malfunctioning manufactured products are going to be repaired at a host of dealers, retailers and repair shops both in and out of your distribution network.  Meanwhile, back at the ranch, you are typically only going to see a fraction of all customer returns, warranty claims and product problems.  This gives a false indication of customer return rates and reasons; failure rates can differ greatly between manufacturing and customer returns.  It would be like a medical / public health system ignoring the last seventy years of a person’s life after they left the care of their pediatrician.

You can see this effect especially with consumer electronics as they get more mobile and are used in environments and for purposes that were never foreseen back in the design lab.  While some types of failures, attributed to things like poor shipping or packaging, might surface quickly and consistently enough for a reliable root cause analysis and fix, other problems with the user interface might only show up after years and years of cycles operating in previously untested environments.

As you begin to tackle this, one data management issue that will become readily apparent is the need for common failure symptom descriptions across all stages of data collection. You won’t be able to diagnose the problem if everyone is describing the same thing in five different ways.
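As a concrete (and entirely hypothetical) illustration of what a common symptom vocabulary might look like, here is a minimal sketch that maps free-text descriptions from different downstream sources onto one set of canonical symptom codes – the codes, keyword lists and sample reports are all invented:

```python
# Minimal sketch: map free-text failure descriptions from different
# downstream sources onto one canonical symptom vocabulary.
# All codes, keyword lists and sample reports are hypothetical.

CANONICAL_SYMPTOMS = {
    "PWR-01": ["won't turn on", "no power", "dead on arrival", "doesn't start"],
    "BAT-02": ["battery drains", "short battery life", "won't hold a charge"],
    "SCR-03": ["cracked screen", "display broken", "screen flickers"],
}

def normalize_symptom(free_text: str) -> str:
    """Return the canonical symptom code for a raw description, or 'UNMAPPED'."""
    text = free_text.lower()
    for code, phrases in CANONICAL_SYMPTOMS.items():
        if any(phrase in text for phrase in phrases):
            return code
    return "UNMAPPED"  # flag for review, and for extending the vocabulary

# The same underlying problem, described three different ways by three sources:
reports = [
    ("dealer",      "Customer says unit is dead on arrival"),
    ("repair shop", "No power at switch-on, suspect PSU"),
    ("call center", "Caller reports it doesn't start after the update"),
]

for source, description in reports:
    print(source, "->", normalize_symptom(description))
```

In real life the matching would be fuzzier – text analytics rather than keyword lookup – but the payoff is the same: three sources, one code, and a failure count you can actually trust.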

Collecting, processing and acting on this downstream data will become easier as the Internet of Things evolves into the Connected Consumer with every product communicating continuously with the mothership throughout its life, but awareness now, along with making the best of the data you currently have or can get to, can have a substantial impact on your total quality program.  Making the best of what’s available would most certainly include social media, sentiment and text analytics, where you can assess what’s being said about the quality of your product behind your back.

While it might be difficult today for a single business to justify providing financial inducements to downstream players to incentivize them to report their findings back to the manufacturer, depending on how the Internet gatekeepers of the future structure themselves, we might see the evolution of syndicated warranty / repair information service providers similar to those that operate on the POS side.  And if it’s not you taking advantage of this information, perhaps it will be one of your competitors.

 

(My special thanks to Jeff Pink, Director of Operations at ViaSat, whose presentation at the IE Group’s Manufacturing Analytics Summit this past May provided the inspiration for this topic)


Activity-Based Business Process Reengineering

I want to use SAS’ recent announcement of our Cost and Profitability Management solution as an opportunity to highlight an often overlooked but valuable application of activity-based costing: business process reengineering.   But first, just a brief description of Cost and Profitability Management’s new breakthrough capability:  In-memory model calculation.

SAS’ decision to rename SAS Activity-Based Management to SAS Cost and Profitability Management wasn’t just a cosmetic alteration.  In-memory model calculation is a game changer when it comes to activity-based costing.  An order of magnitude performance improvement – not just 2X or even 4X, but ten times faster.  Complex models that used to take an hour, or overnight, to update can now be run in a fraction of the time.  To best take advantage of this performance increase, SAS not only provides automatic data integration with SAS Visual Analytics for immediate reporting and deeper analysis, but has also added a What-If simulation capability (in-memory, of course) that lets managers play with costs and attributes to their heart’s content, saving their variants without impacting the integrity of the underlying model.

Alright, enough with the previews, let’s get on with our feature presentation.  Activity-based costing has always had the potential to add value to a wide variety of functions and processes.  I previously covered many of these in this post (“Activity-Based Management: The gift that keeps on giving”), but just to recap a few of the key points:

  • Benchmarking processes and functions for efficiency and re-engineering
  • Identifying a weak link in the Supply Chain
  • Developing a Shared Services model
  • Matching products and customers to the best channel
  • Market and Customer Segmentation for cross-sell and up-sell
  • Identifying idle resources and excess capacity
  • Providing the “return” component of a risk/return KPI
  • Supporting a regulatory approval process
  • Activity-based planning and budgeting of resources and capacity
  • Simulating changes to resource costs and processes
  • IT charge-backs and carbon emissions management
  • Moving from Cost Centers to Profit Centers

Not only is there value here for finance, product management, operations, marketing and IT, but even the executive suite can get in on the act via that last bullet point – the ability to utilize profit as a metric throughout the organization, not just at the highest, consolidated levels.  But it’s the aspects of process reengineering touched on in the first item that I want to dive deeper into this time around.

Consider the importance and magnitude of most business process reengineering projects.  The time, the money, the systems, the consultants – usually driven by some compelling external or internal concern – an acquisition, a new product line, competition and a topsy-turvy market, the need to innovate or improve quality, to change the corporate culture, to simplify, to cut costs drastically.  To transform the business.

Much hard work and consternation will go into the restructuring decisions – what to combine, what to reduce, where to invest, what to leave unchanged.  Work flows are simulated and reimagined and remade; the work itself is analyzed, time-studied, and reconfigured or perhaps outsourced as core competencies are reevaluated and reprioritized.

Even if the CEO has a single, clear vision of what they want to accomplish, translating that into a specific organizational redesign entails the evaluation of perhaps dozens of alternatives, often on a trial-and-error basis.  Finally, after all the hard work has transpired, the new processes are documented, policy is updated, the switch is thrown and the Newly Reengineered Company is announced to the world.

Wouldn’t it be nice if you could support this reengineering process with something more quantitative than just flow charts?  Wouldn’t it be nice if there was some way to quantitatively compare the alternatives other than with high-level spreadsheets?  Wouldn’t it be nice if a detailed financial impact could be assigned to both the revamped individual processes and to the restructured system as a whole?

Well, there is such a methodology, and it’s been in the management repertoire for several decades now:  activity-based costing.  While you are redesigning the processes, why not run $$$ – costs – through the restructured work flows?  Ideally you would benchmark your current process first, just to assure that all your proposed changes are making headway in the right direction.  Then you would have your process reengineering team work hand-in-hand with your activity-based costing and process modelers, creating financial simulations to match the proposed process changes.

What you will get, in addition to a gut feeling, an educated guess or some highly suspect and highly rounded spreadsheet numbers about how effective the process redesign will turn out to be, are some fairly hard numbers on what each process will now cost and how it will impact the cost and profitability of every brand and product.  Instead of restructuring with merely the hope and goal of lowering costs by 20%, you can make that 20% a hard target and model towards it with every process redesign step you make until you get there.
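To make that tangible, here is a minimal, hypothetical sketch of what “running costs through the restructured work flows” can look like – the activities, driver volumes and rates are all invented for illustration, not drawn from any real model:

```python
# Minimal activity-based costing sketch: cost each activity as
# (driver volume x rate per driver unit), roll the activities up to a total
# process cost, and compare a baseline design against a proposed redesign.
# All activities, volumes and rates are hypothetical.

def process_cost(activities):
    """Total process cost = sum of driver volume * rate over all activities."""
    return sum(volume * rate for volume, rate in activities.values())

# activity: (annual driver volume, cost per driver unit in $)
baseline = {
    "receive order":       (12_000, 14.00),  # driver: orders
    "pick and pack":       (12_000, 22.50),  # driver: orders
    "quality inspection":  ( 6_000, 18.00),  # driver: inspections
    "invoice and collect": (12_000,  9.25),  # driver: invoices
}

# Redesign: automated order entry, sample-based inspection, e-invoicing
redesign = {
    "receive order":       (12_000,  6.00),
    "pick and pack":       (12_000, 22.50),
    "quality inspection":  ( 1_500, 18.00),
    "invoice and collect": (12_000,  7.50),
}

base, new = process_cost(baseline), process_cost(redesign)
print(f"baseline:  ${base:,.0f}")
print(f"redesign:  ${new:,.0f}")
print(f"reduction: {1 - new / base:.1%}  (hard target: 20.0%)")
```

A real activity-based model would carry the activity costs one step further, down to individual products, channels and customers, but even this toy version shows the point: every proposed process change gets a dollar figure, measured against the target, before the switch is thrown.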

I can understand, however, why this hasn’t become standard practice quite yet.  Until SAS took activity-based costing in-memory, it may have been too cumbersome for businesses needing to restructure quickly.  But now, with SAS Cost and Profitability Management, two terms you would not have thought you’d hear together in the same sentence – “agility” and “activity-based costing” – are there for you to make the most of as you transform your organization into an industry leader.


External data: Radar for your business

How much of your business performance (profit) is driven by external factors versus internal?  A figure of 85% versus 15% was mentioned at last month’s Manufacturing Analytics Summit, and although I could not find the study cited in order to confirm it, it feels about right to me.  Certainly more than half, right?  So, how much of your dashboard reporting and KPI metrics incorporates external data?

I have to say that I have never much liked the saying, “managing using only internal data is like driving using just the rear-view mirror”, but until now I’ve not contemplated the logical flaw in the argument nor attempted to devise a better analogy (although, if I haven’t in the past written a blog entitled “Forecasting – Your one forward-looking piece of data – Treasure it!”, then I should have).  The reality, however, is that’s not how we run our businesses nor how we utilize internal data.  Our use of internal data is more akin to IFR flying or sailing a ship – our chosen strategy gives us a goal, a direction, and we use our internal data as feedback to make course corrections.  As long as the strategy / direction is periodically evaluated and revised, and the skies / seas remain reasonably clear and calm, relying mostly on internal data is a (barely) passable approach to management.

Except of course that the skies are anything but clear, and we’re not so much flying or sailing as we are navigating a much more complex surface terrain. You can’t just set a course and go – there are obstacles, obstacles with names like ‘competition’, ‘suppliers’, ‘customers’, ‘regulations’, ‘weather/climate’, ‘financing’, ‘politics’, and ‘markets’ just to name the most common.  At a minimum you need radar, more generally you need vision (maybe even the business equivalent of ‘night vision’), and at best you’d like to be able to predict / forecast what’s over the hill and around the bend.

Google’s driverless cars provide a better analogy than does the rear-view mirror:  300,000+ miles with only two accidents, neither of which was the car’s fault (one happened while the car was being manually driven by a human, and in the other case it was rear-ended at a stoplight by another driver).  The car navigates busy city streets utilizing an interconnected system of forward, backward and sideways radar, 360-degree cameras, and GPS coupled with a stored map of the local topology.  If it even has a rear-view mirror, it’s only there to placate the redundant human.

Dun and Bradstreet was one of the sponsors at the Manufacturing Analytics conference, and spoke about its expansion into external data collection and analysis beyond just the customer/supplier credit scoring it is best known for.  It now makes available a veritable treasure trove of data that can be used for customer segmentation, prospecting and supply chain risk management.  Nielsen and others have long provided syndicated media and POS data.  From financial institutions and organizations like Thomson Reuters there is an abundance of financial data regarding interest and currency rates and commodity prices / indices.  Government and trade association data is available on subjects such as weather forecasting, market size and trends, risk management, and other industry and market news.  A local North Carolina firm of my acquaintance, Enlight Research, specializes in external data targeted specifically at the needs of the Board of Directors.  A number of consulting firms are even in the business of providing quantitative political risk assessments.

If 85% of your business results are driven by factors external to your organization, this kind of information needs to be an integral part of your executive dashboard and KPIs.  Immediately.  No ifs, ands or buts.  That’s the bare minimum; that’s the radar; that’s what keeps you from colliding with the garbage truck abruptly pulling out of the alley.

But by itself, it is not enough.  It doesn’t tell you where to go nor how to set or adjust strategy, goals and direction.  It lacks the ‘vision’ and ‘prediction’ components.

If the aforementioned BI / dashboard / radar elements were married to a data visualization and analytics capability, then you’d really have something.  Such a platform would allow you to combine your external and internal data (even if they are siloed) for integrated reporting, KPIs and metrics, forecasts, root cause analysis, and exploratory / insight-focused analysis.  Because it’s unlikely that the answer to your vision quest is in column H, row 53, page 16 of either your internal or external reports.  Because the right / best metric is likely one that uses external data in the denominator (or numerator).  Because 85% of your results are going to be impacted / driven by external factors that don’t go away just because they aren’t being surfaced on your dashboard.
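Here is a small, hypothetical example of what “external data in the denominator” buys you – the revenue and market-size figures below are invented for illustration:

```python
# Minimal sketch: an internal metric (revenue) looks healthy on its own,
# but dividing by an external series (estimated total market size) tells a
# different story.  All figures are invented.

years       = [2011, 2012, 2013, 2014]
revenue     = [100.0, 104.0, 108.0, 112.0]    # internal data, $M
market_size = [800.0, 860.0, 940.0, 1030.0]   # external data, $M (syndicated / industry source)

for yr, rev, mkt in zip(years, revenue, market_size):
    growth = rev / revenue[0] - 1
    share  = rev / mkt
    print(f"{yr}: revenue ${rev:5.0f}M ({growth:+5.1%} vs 2011), market share {share:5.1%}")

# Revenue is up 12% over three years, but share of the market is steadily
# falling -- it's the external denominator that surfaces the real trend.
```

Same internal numbers, completely different conclusion once the external context is built into the metric.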

A couple of weeks ago I wrote about developing, managing and changing corporate cultures (“Changing corporate culture is like losing weight”).  I talked about the various types of “cultures” an organization might aim for, such as: learning, analytical, innovative, customer-centric, quality, risk-taking, agile, or continuous improvement.  But I left one out, an important one.  If I had to pick just one culture to focus on, it might very well be one built around the awareness and usage of external data as a primary component of the organization’s decision-making process.  It’s not just about being data-driven – it’s about being driven by the right data.


Analytics – Easy as One, Two, Tree

Insights from decision trees and other basic analytic techniques show that you don’t always need complex analytics to solve business problems and add value.  This was the message from Dr. James (Jim) Foster, Director of Research and Process Development at Archer Daniels Midland (ADM), at last month’s inaugural IE Group Manufacturing Analytics Summit in Chicago, which I had the great privilege of chairing for both days.

Process manufacturers like ADM face a different production and quality challenge than their discrete manufacturing counterparts.  It’s not so much keeping your parts and components within tolerance as it is keeping your entire process within its operating limits. Neither do you move discretely from state or condition “A” to condition “B”, from an unfinished blank to a finished part, but instead the condition of the work product changes continuously as it progresses in time through the process.  Most importantly, there is typically not just one acceptable, final product specification, but a whole set of process specs that vary as the reactants move through the system, or through time even if the physical product remains stationary, perhaps in a vat or mixing chamber.

Jim started with a fairly basic example, describing how he and his team use decision trees to analyze and search for the root causes of process production problems, which I will greatly simplify for the purposes of this post.

Imagine you are processing corn into one of its dozens of possible end products, and that you need to regulate and control three production factors:  temperature, pH level, and enzyme concentration.  Research has determined that each of these three parameters must remain within certain basic limits in order to obtain a satisfactory final product: the temperature must not vary by, say, more than two degrees either way, the pH by 0.2, and the enzyme concentration by 2%.

You are of course monitoring this process, collecting data either continuously or at regular intervals, and either manually or automatically making corrections to keep everything copacetic.

But experience tells you that this is not all there is to the story.  Years of manufacturing knowledge will have shown that in combination, these three factors must further conform to even tighter tolerances for the process to succeed.  In other words, compared to when each factor is evaluated on a stand-alone basis, when combined and managed simultaneously, each tolerance might need to be cut in half.  Otherwise, experience shows that the viscosity may sometimes increase just enough to slow down the flow and gum up the works, even though by themselves the three parameters never departed sufficiently from the norm to degrade or destroy the end product, and therefore no alarms went off.

Example:  Let’s say that the mixing unit has lately been gumming up and stalling, even though no individual temperature, pH or enzyme alarms have been recorded.  A four-level decision tree analysis of the data (top level being the combined total, with each subsequent level separating out one of the three control factors) might show that 85% of these high-viscosity incidents occur when both temperature and pH are operating at the very high end of their range, say 95%, and when the enzyme concentration is at the very low end.  You have just learned something valuable and can adjust the operating parameters accordingly, perhaps resetting all of them to 85% of their previous values, ensuring that now, even if they are all at their range extremes, the viscosity will remain acceptable.
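Here is a minimal sketch of that kind of analysis using an off-the-shelf decision tree on made-up sensor data – the ranges, the incident rule and the thresholds below are invented to mimic the example, not ADM’s actual data or method:

```python
# Minimal sketch: fit a decision tree to (invented) process data and read off
# the combination of temperature, pH and enzyme concentration associated with
# high-viscosity incidents.  Purely illustrative -- not ADM's data or model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2_000
temperature = rng.uniform(58.0, 62.0, n)   # each factor individually "in spec"
ph          = rng.uniform(6.8, 7.2, n)
enzyme      = rng.uniform(0.98, 1.02, n)   # relative concentration

# Hypothetical ground truth: incidents occur only when temperature AND pH run
# near the high end while the enzyme concentration runs low -- no single
# factor ever leaves its stand-alone limits, so no individual alarm fires.
high_viscosity = (temperature > 61.0) & (ph > 7.1) & (enzyme < 0.995)

X = np.column_stack([temperature, ph, enzyme])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, high_viscosity)

print(export_text(tree, feature_names=["temperature", "pH", "enzyme"]))
# The printed rules recover (approximately) the three-way interaction that no
# single-factor alarm would ever catch.
```

The point isn’t the particular library – any decision tree tool will do – it’s that the tree surfaces the combined condition directly from the data.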

For experienced production engineers this would be a relatively easy problem that typically wouldn’t require even something as simple as a decision tree to diagnose.  But real world process production problems are seldom this simple.  First - oh, if there were only three variables!  The complexity increases exponentially with each additional parameter, and once you get above half a dozen or so, nothing remains intuitive.  Having the data and the analytic tools becomes imperative.

Secondly, real world problems don’t always have just one, single root cause – something even the most experienced of production engineers can forget.  Multi-causal problems, Jim stated, are where having the discipline of an analytic approach really pays off.

Jim shared with us a second, more complicated example of the same sort - high viscosity causing machine shutdowns.  Production engineers used to looking for only a single cause were baffled that none of their corrective measures seemed to work.  Jim’s decision tree analysis of the production data showed that it was not just one path down through the decision tree, but THREE separate and independent paths, that were leading to the production problems. Changing parameters to solve just one of the problem paths was exacerbating the issues caused by the other two routes.  It was only when all three of the root causes were addressed simultaneously that the production process returned to normal.

Decision trees are only one of many basic analytic tools available to be put to good use like this.  SAS Visual Analytics provides you with an entire toolbox, an entire workshop even, of easy-to-use analytic capabilities, such as autocharting, which automatically chooses the graph best-suited to display the selected data, or the "What does it mean" capability, which automatically identifies and explains the relationships between variables.

Whether you are a process or discrete manufacturer – or in any other industry, for that matter – you’ve got the data and the industry expertise, and SAS has got the tools.  All you need is the analytic discipline.


Management productivity

Productivity.  How important is it?  As Nobel Prize winning economist Paul Krugman puts it, “Productivity isn’t everything, but in the long run it is almost everything.”

Typically the focus is on labor productivity, with the post-war results nothing short of phenomenal.  Total U.S. labor productivity doubled from the end of WWII to the mid-70s, and has doubled again since then (wages are a different matter, keeping pace with productivity through the oil shock of 1973-74, but then stagnating in the four decades since).  Narrowing the focus just to manufacturing productivity in the United States, real manufacturing output per worker roughly doubled during the 30-year post-war period ending in the mid-70s, then doubled again in just the next 20 years to the late 90s, and has doubled once more in the last 15 years.

But what about the productivity of the other factors of production – raw materials, capital and management?  As for raw materials, you could say that the oil shock of 1973-74 changed everything.  Per capita steel consumption abruptly leveled off and has been on a slow decline in the developed world since the mid-70s.  The same holds true for oil and most of the other basic raw materials as conservation, reduction and recycling took hold.  The developed world has been trending “green” ever since it was forced to sit in those long gas lines that awful winter.

I will have to leave any discussion of the productivity of capital for another time, as its behavior since the dot-com bust and the repeal of Glass-Steagall in 1999-2000 would best be described as schizophrenic.  Still, its steady post-war increase up through 1999 would seem to be in accord with the overall labor and raw material trends.

That leaves us with the fourth of the four factors of production – management.  A cursory review of the literature shows no significant studies of management productivity.  It would appear that it’s never been measured by anyone, perhaps not even operationally defined.  But that doesn’t mean we can’t speculate.  What would you say – has management productivity increased over the past 35 years?  In line with labor productivity?

One thing I do know is that like everything else, management has gotten more complex.  First there’s the product technology, and then there’s the infrastructure technology.  What we make has gotten more complex (computer controlled, fuel-injected engines, anyone, or microwave ovens?) and the information technology we use to keep track of our operations has gotten more complex as well.  And now with more remote and telecommuting employees, basic supervision hasn’t gotten any easier either.

But if we were somehow measuring output per manager, would that metric be getting any better?  Span of control probably hasn’t changed much in 50 years.  Productivity gains in the service industries have significantly lagged the manufacturing sector (another good story for another time).  And who hasn’t heard stories and case studies of ‘bloated management’ structures being trimmed, flattened or decimated by the new sheriff in town?

As I made the subject of my very first “official” Value Alley post, “Stuck in the Middle”, it is middle management that has the most difficult management job. It’s here that the resources of the company meet the needs of the customer, and yet they are often ill equipped and poorly supported and trained for this role, divorced from the centers of power and strategy, and burdened with high expectations and stretch targets.

What are we doing to help them, and by extension, help our respective organizations?  What are we doing to improve management productivity?  While there are surely a dozen or more areas we could dissect and focus in on, here are four themes that I would make my initial priority.

  • A robust BI platform.  Attack your BI initiatives with the same determination you devote to operational automation.  Do they have everything they need on the dashboard?  And by everything, I mean everything automatically loaded and available.  Not 8 out of 9, with the ninth requiring manual input from an offline spreadsheet.  The manual steps are productivity killers.  Spreadsheets are fine in the domain of the finance or operations analyst, but suck the life out of management productivity.
  • Data integration.  Upstream and downstream.  Cross functional.  Internal and external.  Vendors and channels and customers.  NO SILOES!  If management needs to compare costs with shipments and inventory by territory and customer, and needs to get into five different systems or siloes to do that, you’re doing it wrong.
  • Process management.  As I discussed in my early “Stuck in the Middle” post, process is the realm of middle management – process design, process management, process monitoring.  Process is what enables repeatable value creation.  Analytic support for process modeling, simulation and decision management is key to improved process management productivity in a complex environment.
  • Workforce Analytics.  It’s 9:00 a.m. Monday morning; do you know where your employees are?  As I pointed out in this post, “Strategic Workforce Planning”, you probably have more system and IT resources invested in tracking office supplies and spare parts than you do in managing your critical human resources.  For most businesses, employee-related expenses (salaries, benefits, taxes, training) represent the single largest cost category. Workforce management is the one area that has adapted the least to a business environment of increased productivity and complexity.

Truthfully, though, it’s that last one, people management, that makes improving overall management productivity such a hard nut to crack.  Just last week I was advising a young grad student in the biological sciences, a friend of my son, on her desire to follow that up with an MBA (it's a pleasant surprise when your college-age children not only accept your advice, but recommend you to their friends).  What I told her was that “of the three primary elements to be “managed” – people, technology and money – people are by far the most difficult component.  As you get older and progress in your career and move upward and take on more and more responsibility, the biggest, toughest challenges you will face will have nothing at all to do with your science, with your technology, or with your cash flow.  The biggest challenge will be the people.  And the most important classes you will have taken will turn out not to have been calculus or biochemistry or corporate finance, but psychology, sociology, anthropology, philosophy and literature.  It will become WAY more important to understand the motivations of Iago or Yossarian than to be able to calculate the genetic variation of micro-ecosystems across time.”  Shakespeare and Freud on your dashboard (or at the very least, in your core skill set).


Changing corporate culture is like losing weight

Why is it so hard to achieve lasting, significant change in your corporate culture?  Because your organization is like a living organism, an organism that wants to maintain homeostasis against a changing environment.

My good friend Claire Breeze, co-founder of Relume and co-author of “The Challenger Spirit”, recently invited me to participate in a  creative day focused on finding out what gets in the way of adopting an approach to learning that is social, connected, informal and immediate – what I’ll call a ‘learning culture’ as shorthand.  In preparation, my starting point became:  What gets in the way of ANY type of cultural change?

There are any number of ‘cultures’ that an organization might adopt: learning, analytical, innovative, customer-centric, quality, risk-taking, agile, continuous improvement.

The first step is to draw a picture of what the new, desired culture would look like. What will success look/feel like?  You need to draw this picture in terms of ‘behaviors’, not results.  What half-dozen or so key organizational and/or individual behaviors will be different in your target culture from “how things get done around here” today?

What I mean by focusing on behaviors would follow from the example of coaching a baseball team.  You want to win more games, make the playoffs, win the championship.  Those are goals and, after the fact, results.  But that’s not what you work on in practice, you don’t specifically practice “winning”.  The key behaviors of a winning team might be high batting average, low ERA, and a good fielding percentage.  So as a team in practice you work on turning the double play, the pitcher covering first base on an infield hit, executing a hit-and-run or a double steal, and individually you work on hitting mechanics or mastering a new pitch.  And then there’s the scouting, learning where to best pitch an opposing batter and where to play him in the outfield.

Bringing this back home, for a customer-focused culture, your key behaviors might be: 1) everyone who comes in contact with a customer has available to them a consistent, up-to-date set of customer data, 2) flexibility in the processes that directly affect a customer and the decentralized authority to implement such customer-friendly variations, and 3) having the right assortment, size, model, color, at the right time and the right place.

Next come the levers at your disposal to effect such change.  A list of the most common management tools would likely include:

  • Organizational structure and design
  • Rewards, incentives, recognition and performance management / metrics
  • Tools, resources, systems, data and processes
  • Hiring / selection / training / orientation
  • Leadership / stories / heroes / values / communication

If you’ve made it this far, that was the easy part.  Countless organizations have implemented such change management plans only to see little in long lasting change and results.  Why?  Because you haven’t addressed the feedback loop, the thermostat, that exists in every organization to maintain normalcy and stasis against a changing environment.  You’re trying to enact change against organizational processes that have evolved to specifically minimize change.

When you think about it, it’s a normal and expected response.  Things get done around here because disruptions are minimized.  I touched on this in a previous post, “Metrics for the Subconscious Organization”, where I pointed out that your organization pretty much runs itself on auto-pilot; most of the time it would hardly notice if you took a couple of months off.

It’s much like attempting to lose weight.  You try to make the behavioral changes - counting calories, eliminating fat or carbs, smaller portions, going vegan or gluten-free - but your body thinks your brain is trying to starve it, and so it reacts accordingly.  It’s what makes progress and permanent weight loss so difficult.  You have to reset the thermostat in order to achieve long-term results.

Your organization is the same way.  Yes, there might be some deliberate, sociopathic “blockers” out there, but by and large your employees are just trying to keep the wheels turning the best way they know how.  In some circumstances, using the starvation metaphor, they may even sense a threat to themselves, their positions, advancement, power or career, and they are simply doing what comes naturally and has likely worked well in the past – protecting themselves by resisting change.  It should come as no surprise that they would try to 'expel the invading foreign body' (cultural change) just as a human body would react to a bacterial infection - if stasis and health is to be maintained, this change in/from the environment must be dealt with.

Prescriptions for resetting the thermostat are hard to come by; if it were otherwise, change management, and weight loss, would be easy.  But, like biological organisms, your organization is also quite capable of evolution and adaptation in the face of a changing environment. I might suggest these approaches (likely reinforced in combination):

  • Change the environment.  Align your levers – organization, tools, incentives, training, stories -  so as to encourage evolution in the desired direction.
  • Disrupt the old environment.  Make it so that there is nothing to go back to.  This regularly happens with acquisitions, and can likely be done internally as well.  Create the pain often necessary for change – the “burning platform”.
  • Change the context.  Reinterpret the organization’s history, mission and story. The old behaviors only make sense in a particular context.  This means creating/providing a new context, and phasing out the old context such that the old behaviors are no longer effective or rewarded.  Again - apply your levers.
  • Address the safety issue:  Keep Maslow’s Hierarchy of Needs in mind and don’t just focus on the valiant goals of the upper stages; address your employees’ basic need for security, make it clear they still have an important role and a home in the new environment.

Claire, of course, already knows all of this, but I had to work through it myself to arrive at a point where I could address the specifics of the learning culture task she set me, such as:

  • Risk-taking, experimentation and trust
  • Systems-thinking
  • A focus on personal mastery
  • Shared learning, a shared vision and goals
  • Insistence on spending the training budget, development and training as more than an afterthought or a tick-in-the-box, a learning “contract” as part of performance management
  • A focus on people as unique, creative individuals instead of as assets or costs to be minimized
  • Leaders setting the example and showing commitment (and thus showing that learning is “safe” – when was the last time you heard of a VP taking a company-sponsored course?)
  • Post-mortems that focus on what was learned and not just what went wrong and who’s to blame
  • “Hero stories” that link success to past learnings
  • Clear objectives that align learning and training to business strategy
  • The right environment, perhaps personally tailored to individual learning/team member styles, valuing differences in what is learned and how it is learned
  • Build learning into the work environment / process - make knowledge sharing an organizational habit

Whatever type of culture you want to create – customer-focused, data-driven, quality, learning – go through this process: identify what the new target culture would look like, what behaviors would become commonplace, what levers you have available to effect this change in behavior, what new tools, data, processes or systems might be required, and, most importantly, your plan to overcome the stasis – your tactics for resetting the organizational and cultural thermostat.


Agility and the Analytic Sandbox

Analytics gives us not just the ability but the imperative to separate our planning activities into two distinct segments – detailed planning that leads to budgets in support of execution, and high-level, analytic-enabled business/scenario planning.

My critique of Control Towers in this blog last time led me not only to consider the role and relationship a control tower  might play in the planning process, but also to evaluate the overall planning process itself.  This appraisal has in turn caused me to reassess the approach I introduced some time ago in this post, “Rolling forecasts, or Who ordered that?” and to restructure the diagram representing my view of the ideal business planning process.

In that previous structure I envisioned a three-level process structure, with the Strategic Plan and the Forecast at the highest level, informing an 18-month rolling PLAN (not forecast) in the middle tier, driving the Budget(s) at the lowest level.

In my last post I somewhat castigated the emerging universal control tower approach, which purports to solve practically all your business problems including hunger and world peace, an approach where the overstuffed control tower included capabilities spanning from analytics to simulation to alerts to dashboards.  I tried to make the case that the control tower is fundamentally tactical and best suited to supporting operational execution – it’s not a strategic platform.

But still, there does seem to be the need for a control-tower-like capability in support of strategy and high level planning, an agile capability that mirrors at the strategic level the executional agility a control tower provides at the operational level – an entity which I am going to label the Analytic Sandbox.  Not a new concept to be sure, but refining the definition of its proper role does help to clarify its relationship to the overall business planning process.

The key insight is to keep this analytic package together, but to deploy it where it does the most good – not in support of execution, but in support of scenario planning.  This in turn requires dividing our current monolithic planning process in two – the detailed single-scenario plan that eventually spawns an equally detailed budget, and the high-level business planning where agility has recently become paramount if not mandatory.  Resident inside of this business planning process is the Analytic Sandbox – a combination of agility with the power to know.

Elements of high-level Business Planning with the Analytic Sandbox:

  1. Scenario Planning (for options, pessimistic/optimistic, best case/worst case, etc …)
  2. Capital Planning
  3. What-If planning, Pricing
  4. Activity-Based Budgeting
  5. Data Exploration / insights (i.e. Tell me something I don’t know)
  6. Simulation
  7. Risk Management
  8. Strategy and Planning Dashboard  (linking strategy with objectives, goals and metrics)
  9. Forecasting / Predictive analytics
  10. Marketing Management / Social Media Analytics
  11. Supplier, Facility, IT, Human Resource and Capacity Planning
  12. Product Planning

Elements of detailed business planning and budgeting:

  1. S&OP / Supply and Demand Planning
  2. Optimization (inventory, production, logistics, marketing, etc …)
  3. Disaggregated forecasts
  4. Operational plans (PLM, production control, procurement, logistics, after-market service, maintenance, etc …)
  5. Departmental, Project and Program Budgets / Resource Allocation

Elements of Execution Management:

  1. Operational Dashboards
  2. Control Tower
  3. Quality Control
  4. Measurement, Metrics / Closed-loop and OODA Feedback (to strategy and business planning)
  5. Event Stream Processing / Decision Management
  6. Digital Marketing

While both concepts enable organizational agility, the difference between a Control Tower and an Analytic Sandbox, I think, is the scale of the response.  The Control Tower is about the agility to adjust near-term operations in order to meet customer expectations and obligations; the Analytic Sandbox is about the agility to adjust organizational strategy and associated business plans in the face of market forces.

We are accustomed to being agile with our operational execution – no organization gets through the day without making dozens if not thousands of little adjustments along the way.  Whether or not we have a formal Control Tower, we have been doing control-tower-like activities forever.  What has not yet become commonplace are the tools and approaches that allow us to extend that agility to the larger scale and scope of the entire organization and its strategic concerns.  Not commonplace yet, no, but available, YES – Analytics and the Analytic Sandbox, most definitely, YES!


Control Towers - Not another business process?

The volume is being turned up on the Control Tower approach to running a business; I have recently been introduced to logistics control towers, supply chain control towers and operations control towers just for starters.  I’m sure there must be at least a half dozen more out there – pick a noun, place it out front and voila, your very own control tower du jour.

By the time I got to the third one, I realized that these people were serious, and my initial reaction upon considering the implications was – Oh no, not another business process!  Does it replace something?  Does it consolidate multiple somethings into one something?  Does it provide me with a new capability?  If it doesn't do at least one of these, why would I bother?

Perhaps in the end the control towers will prevail, I will be shown to have overreacted, and the answer to complexity might indeed be more complexity.  But I’m not going down without putting up at least some token resistance, so what follows are a few critical factors I think you should evaluate before taking the control tower plunge – with the understanding that by borrowing a pre-existing label / framework / metaphor, the airport control tower, we are also borrowing and building on pre-existing concepts that will influence our expectations.

  • OPERATIONAL:    Control towers are for operations, they are tactical, not strategic.  Control towers are about execution, they are not for planning or for simulations.  I have seen presentations where everything but the kitchen sink is thrown in for good measure – a dashboard, some analytics, some alerts, some simulation, some reporting, some optimization.  Good grief, the end result would more resemble NASA Mission Control than an airport control tower. 
  • INTEGRATION:  I have likewise seen presentations where, in the name of “visibility” – a worthy goal indeed – the objective becomes creating an END-to-END control tower, from the tier-n supplier to the end consumer and everything in between.  This is not how real world airport control towers work.  Most airport operations divide their control tower functionality into three, more manageable, segments: ground control (gate-side to taxiway, including ground vehicles), local or “tower” control (active runways and close-range airspace) and regional airspace control, with carefully orchestrated handoffs between each.

Likewise, I think we would be better off focusing on improving our data integration and better coordinating our own internal handoffs than in building something unmanageable in practice.  The end-to-end visibility is still there, but just in manageable chunks.  While it’s always good to have a noble, motivating high-end vision in mind, often it is more productive to simply set our sights on meaningful, achievable, incremental improvement. If you are at stage 1 or 2 on a five level maturity scale, getting to stage 3 and staying there can be so much more important than aspiring to a level 5 goal that never gets any closer.

  • AUTHORITY:  Are you ready to give your control tower complete authority over the cross-functional processes it governs?  Because if not, you are just wasting your time.  An airport control tower works precisely because it has complete authority, the air traffic controllers are the gods of their domains, superseding even the airplane captains.  Can you invest that kind of authority in your control tower personnel, over and above that of the functional domains, the department managers and division directors they are meant to be coordinating?  Personally, I do think this is where business needs to evolve to (see my argument in favor of senior VPs, reporting to the CEO, in charge of each cross-functional “Value Discipline” in this post – “The Sound and the Fury of enterprise-wide process management”), but if that is the goal, then I think control towers are half-way measures doomed to failure.
  • BUSINESS RULES:  If control towers are about operations and execution, then their value derives from their ability to quickly identify and respond to issues as they arise in real time.  The ideal front-end for a control tower would be an event stream processing or decision management application – triage for the incoming data – with a visual BI/analytics tool as the ideal platform (a minimal sketch of this kind of rule-based triage follows this list).  Some problems could be dispatched automatically with no human intervention required.  Others may simply need all inputs displayed for human evaluation, with a decision made based on comparisons, trade-offs, triggers and priorities.  The most difficult situations might require real-time inputs to be married with supporting static data (i.e. customer, product, inventory, operational capacity, etc …) so that a more informed decision can be arrived at.
  • METRICS: Lastly, if a control tower is going to fulfill its role and promise, then it needs its own operationally-oriented set of metrics, and not the metrics emerging from the overall planning process.  If an event has come to the attention of the control tower via an alert, then something has already gone “wrong”, there has already been a deviation from plan.  The control tower’s job at that stage is to make the best of a bad situation.  The control tower is not optimizing at this point – the plan has already been previously optimized - the control tower is simply trying to keep deviations from plan as small and as inconsequential as possible.  You don’t hold the control tower to achievement-of-plan metrics - that's for the rest of the organization - you hold them instead to metrics regarding how well they managed the disruption. 
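Picking up the business-rules point from the list above, here is a minimal, hypothetical sketch of that kind of front-end triage – the event types, fields, thresholds and actions are all invented, and nothing here is specific to any real product:

```python
# Minimal sketch of rule-based triage for a control tower front end.
# Event types, fields, thresholds and actions are all hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str           # e.g. "shipment_delay", "inventory_low", "quality_hold"
    severity: float     # 0.0 - 1.0, as scored by upstream monitoring
    customer_tier: str  # "A", "B", "C"

def triage(event: Event) -> str:
    """Route each incoming event: handle automatically, queue for a human, or escalate."""
    # 1. Problems that can be dispatched automatically, no human required
    if event.kind == "inventory_low" and event.severity < 0.3:
        return "AUTO: trigger standard replenishment order"
    # 2. High-impact cases: join with static customer / capacity data and escalate
    if event.severity >= 0.7 and (event.customer_tier == "A" or event.kind == "quality_hold"):
        return "ESCALATE: pull customer, inventory and capacity data; notify control tower lead"
    # 3. Everything else: display for human evaluation of trade-offs and priorities
    return "QUEUE: display on operator dashboard with supporting data"

for e in [Event("inventory_low", 0.2, "C"),
          Event("shipment_delay", 0.5, "B"),
          Event("quality_hold", 0.9, "A")]:
    print(e.kind, "->", triage(e))
```

A production event stream processing engine would express these rules declaratively and apply them to a continuous feed, but the division of labor – automatic dispatch, human review, enriched escalation – is the same.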

Despite my initial misgivings, I am trying to keep an open mind on this topic, and will be closely watching its evolution and maturity.  Control towers may very well have their operational place in an organization, but I am skeptical that they will have a strategic role to play – more on that next time.


The future of shopping

“Within ten to fifteen years, the typical US mall, unless it is completely reinvented, will be a historical anachronism—a sixty-year aberration that no longer meets the public’s needs, the retailers’ needs, or the community’s needs.”  So proclaimed Rick Caruso, founder and CEO of Caruso Affiliated, a retail/commercial real estate development firm, at NRF earlier this year.

Is his concern properly placed?  How bad is it?

Bad.  So bad that there is even a web site called “Dead Malls”.  I counted over 400 listed in the U.S. alone.  In some cases the buildings have been converted into educational or other commercial use, in others the parking lot was saved but the mall demolished to be replaced by a big box store.  Many, many others are simply boarded up, closed off to the public, awaiting disposal.

Ecommerce of course accounts for a portion of the impact to the traditional mall, but even now, in the middle of the second decade of the 21st century,  online shopping represents only 6% of total retail commerce, growing at a rate slightly in excess of 15% a year. 

But the internet’s impact has been far greater than just that 6%; it has changed the entire shopping experience, birthing multichannel marketing and the omnichannel consumer.  I don’t think I can singlehandedly save the mall with the remainder of my 900 words today, but I do want to address in broad terms some potential scenarios.

  • Manufacturers as retailers.  If you make a consumer good, this transition is likely inevitable and has been underway for some time already.  While you might like to hold out with your traditional distribution model for as long as you can, the demise of the mall may force your hand.  It won’t be pretty – neither the channel conflict nor your noticeable lack of retailing expertise as you attempt to sell direct to the consumer.  There will be no trading on the retailer’s brand anymore – for better or for worse, it’s 100% your brand that matters.
  • Retailing reinvented.  As I did with my defense of Boeing regarding its Dreamliner outsourcing strategy, where it outsourced substantial portions of the R&D effort (I believe that despite the initial complications, this is a concept that was, at base, sound, just poorly implemented, and will resurface again), I likewise defend Ron Johnson’s attempts to turn JC Penney around.  In retrospect, JC Penney was probably a poor target for this sort of a transformation, but once again, I believe a similar approach will resurface more successfully with some stronger retailers, perhaps those not stuck with anchor stores in already failing malls.  Johnson saw the warning signs and attempted to reinvent the department store as a city street / square more in tune with the changing expectations of the consumer’s shopping experience.
  • Reinvented malls.  It is telling that Victor Gruen, the father of the enclosed mall, was appalled at what the mall became – stranded amidst acres and acres of blacktop parking lot, a fortress with an asphalt moat.  His vision then, and that of Rick Caruso today, was/is of a more integrated outdoor shopping experience, perhaps the mall as the high street, the social center of a community that includes housing, schools and libraries, even a medical center.  Such a reinvention likely entails the concomitant reinvention of your distribution strategy, with fewer big box and department-type stores and more specialty or category killers.
  • The triumph of the Big Box.  Then again, maybe the reinvented mall never catches on, and in its place: the one-size-fits-all / carries-all distribution center.  A showroom to test and compare, followed by anytime-online ordering, then back to the big box for pick-up.  Bleak, yes, but for all except the higher-end goods, who needs an “experience” when it usually comes down to price and convenience for the bulk of our consumer purchases?
  • Online, all the time.  Five years ago I would not have made this stark a prediction, but since then, the smartphone has changed everything.  Five years ago, if you had claimed that online shopping would come to dominate retail commerce, I would have raised as my first objection the fact that so many people still lacked the broadband internet capability at home necessary to make the transition.  But now the lack of a PC or laptop at home is no longer an obstacle – everyone has, or will have, a smart phone, and we’ll soon be wondering, ‘smarter than what’?  Everything will be smart – your sneakers, your sweatshirt, your refrigerator, your car, your community.  What will get reinvented is not the mall or even the shopping experience but the social experience as a whole.  Shopping?  Was that something people once did after they got the horses fed?

Which of these, or which combination, if any, will come to pass?  After my bracket-busting disaster in this year’s March Madness (I picked the ONLY twelfth seed not to win (I had no choice - all three of my kids go there), and NONE of the other twelfth seed upsets that did happen) I am loath to prognosticate further. But one thing that is certain is that no matter which scenario comes to dominate the retail space, change is on the way, and you are going to have to get closer to your customer.  You are going to have to know more about them, their changing buying and channel habits, and the type of shopping experience they prefer.  Customer analytics will come to drive your business strategy in recognition of the fact that it has always been the consumer that ultimately decides whether that business strategy is a success or a failure.
