Why analytic forecasting?

Because you are already halfway there, and because you should want the entire process to be data-driven, not just the historical reporting and analysis.  You are making decisions and using data to support those decisions, but you are leaving value on the table if the analytics don't carry through to forecasting.  In the parlance of the domain, don't stop with descriptive analytics while neglecting the power of predictive and prescriptive analytics.

Descriptive analytics relies on the reporting and analysis of historical data to answer questions up until a particular moment in time.  Using basic statistics such as mean, frequency and standard deviation, it can tell you what happened, how many, how often and where.  With the application of additional statistical techniques such as classification, correlation and clustering, you end up with an explanatory power that can sometimes even tell you ‘why’.

In the terminology I proposed in this earlier post, “The Skeptical CFO”, descriptive analytics covers the first two of my four points:  “Where am I right now” and “What is my ability to execute”, the latter typically surfaced through a BI capability that computes and displays the historical data in the form of metrics for ease of standardization, comparison and visualization.

(Graphic: FS model vs. FAW model)
But why stop there?  Why stop your data-driven approach to decision making at the halfway point, at the vertical bar in the above graphic?

Your decisions are always about the future – what direction to take, where to invest, what course corrections to make, what markets to expand into, what and how much to produce, who to hire and where to put them.  In other words, a forecast, the third of my four points, with the fourth being perhaps the most important of the lot - a confidence level or uncertainty measurement about that forecast, these last two coming from the realm of predictive analytics.

Even if you’re not comfortable using the statistical forecast straight out of the box, don’t you at least want to know what it indicates?  What data-driven trends and seasonality it has on offer?  And wouldn’t you appreciate having a ballpark estimate of the risk and the variability that is likely inherent in any forecasting decision?
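To make that "ballpark estimate" concrete, here is a minimal sketch (in Python, on synthetic demand data) of pulling a trend and a seasonal pattern out of a history and wrapping the resulting forecast in an approximate 95% prediction interval. It is purely illustrative and not how any particular forecasting workbench does it.

```python
import numpy as np

# Synthetic monthly demand: trend + seasonality + noise
rng = np.random.default_rng(42)
months = np.arange(48)
demand = 100 + 0.8 * months + 15 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 8, 48)

# Trend: ordinary least squares on time
slope, intercept = np.polyfit(months, demand, 1)
detrended = demand - (intercept + slope * months)

# Seasonality: average detrended value by calendar month
seasonal = np.array([detrended[months % 12 == m].mean() for m in range(12)])
residual_std = (detrended - seasonal[months % 12]).std(ddof=1)

# Forecast the next 12 months with +/- 1.96 sigma bounds as the "risk" estimate
future = np.arange(48, 60)
point = intercept + slope * future + seasonal[future % 12]
lower, upper = point - 1.96 * residual_std, point + 1.96 * residual_std
for t, p, lo, hi in zip(future, point, lower, upper):
    print(f"month {t}: forecast {p:6.1f}  (95% interval {lo:6.1f} to {hi:6.1f})")
```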

Getting back to that “straight out of the box” issue, the truth is, for roughly 80% of your detailed forecasting needs (depending of course on the quality of the data and the inherent forecastability of the item in question – see: “The beatings will continue until forecast accuracy improves”), the machine is going to be at least as accurate as you are, and much, MUCH faster at it.  The forecast analyst workbench listed below can generate incredibly high-volume forecasts at the detailed level (i.e. SKU, size, color, style, packaging, store, expense line item, cost center …) in short order, leaving the forecast analyst free to spend the bulk of their time improving on those hard-to-forecast exceptions.

Lest you doubt the veracity of my 80% (+/-) claim above, the collaborative planning workbench (below), in addition to facilitating the consensus forecast you would expect from its name, also includes a Forecast Value Add capability to identify and eliminate those touch points that are not adding value.  You would be surprised at how many reviewers, approvers, adjustments, tweaks and overrides actually make the forecast worse instead of better (then again, maybe you wouldn’t).
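For readers who want to see what a Forecast Value Add comparison boils down to, here is a hedged sketch: compute the same accuracy measure (MAPE here) for a naive forecast, the statistical forecast, and the forecast after manual overrides, and see which steps actually add value. All of the numbers below are invented for illustration.

```python
import numpy as np

# Illustrative history: actuals and three competing forecasts for six periods
actuals     = np.array([120, 135, 128, 150, 142, 160])
naive       = np.array([118, 120, 135, 128, 150, 142])   # last period's actual
statistical = np.array([122, 131, 130, 146, 145, 156])   # model-generated
overridden  = np.array([130, 140, 125, 155, 150, 165])   # after manual tweaks

def mape(forecast, actual):
    # Mean absolute percentage error
    return np.mean(np.abs((actual - forecast) / actual)) * 100

for name, fcst in [("naive", naive), ("statistical", statistical), ("overridden", overridden)]:
    print(f"{name:12s} MAPE = {mape(fcst, actuals):5.1f}%")

# FVA of the override step = MAPE(statistical) - MAPE(overridden);
# a negative value means the manual touch points made the forecast worse.
```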

Can you be data-driven when it comes to new product forecasting?  If you’ve got the structured judgment / analogy capability of the new product forecasting workbench then the answer is yes.  It uses statistically determined candidate analogies or existing surrogate products with similar attributes to provide an objective basis for predicting new product demand.
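As a rough illustration of the structured-analogy idea (not the workbench's actual algorithm), you can score existing products by attribute similarity to the new item and borrow the launch histories of the closest surrogates. The products and attributes below are made up.

```python
import numpy as np

# Hypothetical attribute vectors (price tier, seasonality index, channel mix), scaled 0-1
catalog = {
    "widget_A": np.array([0.80, 0.30, 0.60]),
    "widget_B": np.array([0.40, 0.70, 0.50]),
    "widget_C": np.array([0.70, 0.40, 0.70]),
}
new_product = np.array([0.75, 0.35, 0.65])

def similarity(a, b):
    # Cosine similarity on the attribute vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(catalog.items(), key=lambda kv: similarity(new_product, kv[1]), reverse=True)
for name, attrs in ranked:
    print(f"{name}: similarity {similarity(new_product, attrs):.3f}")

# The top-ranked surrogates' launch curves would seed the new product forecast.
```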

Beyond predictive analytics, which provides answers to 'What if these trends continue', and 'What will happen next', lies prescriptive analytics – what SHOULD I do; what’s the best, or optimal, outcome?  The inventory optimization workbench optimizes inventory levels across a multiechelon distribution chain based on constraining factors such as lead times, costs, and/or service levels.  And just as with the forecasting component, 80% of the optimization can be automated, again leaving the inventory analyst free to focus on hard-to-plan or incomplete orders.
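One small, self-contained building block of that optimization is the classic safety stock calculation from a target service level. A true multi-echelon optimization solves this jointly across the whole network; the sketch below (illustrative numbers only) just shows the kind of trade-off the constraints encode.

```python
from math import sqrt
from scipy.stats import norm

# Single-echelon safety stock from a target cycle service level
service_level   = 0.97      # probability of not stocking out during a replenishment cycle
demand_mean     = 500       # units per week
demand_std      = 120       # units per week
lead_time_weeks = 3

z = norm.ppf(service_level)                       # safety factor for the target service level
safety_stock = z * demand_std * sqrt(lead_time_weeks)
reorder_point = demand_mean * lead_time_weeks + safety_stock
print(f"safety stock ~ {safety_stock:.0f} units, reorder point ~ {reorder_point:.0f} units")
```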

When you hear the word “optimization” in this context, think of two elements: a forecast, and a corresponding set of constraints.  Knowing that context, you can see why SAS has taken this integrated workbench approach to demand-driven planning.  A common foundation and data repository enables the consensus forecast, as well as collaboration between the forecast and inventory analysts.  Even the purely descriptive component, the demand signal analytics workbench, is completely integrated with the same demand signal repository that will eventually build the forecast and the inventory plan.


When it comes to decision support, don’t settle for halfway.  Because half of the value-add lies to the right of that vertical line. It all starts with the forecast, which drives the integrated business planning (IBP) process and is its largest source of variation and uncertainty. Improving the forecast will affect everything downstream. And, it can have a multiplier effect as it travels along the IBP process. Even slight forecasting improvements can have a larger proportional effect on revenue, costs, profit, customer satisfaction and working capital than any other factor – financial, supply-oriented, or otherwise.

Get the forecast right, and good things will follow.


Impending Crisis: Analytics for the top-line

I’m sitting here staring at a book on my shelf entitled, “Impending Crisis”.  Even knowing the copyright date, 2003, it could still be about any one of several possible crises: healthcare, financial, energy, education, environment.  But no, in this case the impending crisis in question is provided by the subtitle: “Too many jobs, Too few people.”  A perfect storm of demographics, education and technology that was supposed to hit the Western economies by the end of the decade, a crisis ultimately stillborn, upstaged and derailed by its antithesis – The Great Recession, with its concomitant double-digit unemployment.

But still, it was there on my bookshelf for a reason.  If the derivative-driven economic implosion of 2008-’09 had never happened, the book’s thesis represented a most likely case.  At the time, the US Bureau of Labor Statistics was predicting an overall workforce shortage in the US of about 10 million workers by 2010.  A decade has now passed since its initial publication, so besides the Great Recession, what else has changed?

The demographics are what they are, but with everyone now placing an additional ten candles on their birthday cake.  The state of education may be even worse, with No Child Left Behind turning into No Child Left Untested.  The cost of higher education is the fastest increasing segment in the national economy, outpacing even healthcare, as the ratio of full-time faculty to management and staff declined from about 2:1 twenty years ago to roughly parity today.

Technology seems to be the big unknown.  For a thorough perspective, I highly recommend this study from the Pew Research Center: “AI, Robotics and the Future of Jobs”.  To illustrate what a challenge this subject is, the nearly 2,000 respondents were roughly evenly divided on the question of the future of jobs, with 52% taking the non-Luddite view that there is nothing as constant as change and that in the end more jobs will be created than lost.  The other 48% would likely find themselves more in agreement with Bruce Springsteen, who wrote in “My Hometown”: “These jobs are going boys and they ain't coming back”, their main supporting point being, ‘You want evidence?  Just look around, it’s happening now, it’s happening everywhere, it’s been happening since at least 1990 if not before.’

One more datum to add to the mix:  $6 trillion.  Or make that $35 trillion if you are thinking globally.  That’s the annual labor cost in the US / World respectively – representing 40% of US GDP, 50% at the global level.  So when you impact labor productivity by more than a few percentage points, you’re likewise talking trillions (for comparison, energy costs run about 10% of GDP).

Stepping into this ill-defined, undiscovered country from which perhaps no job returns, is strategic human capital expert Jac Fitz-enz and his co-author, John Mattox, with their new book, “Predictive Analytics for Human Resources”.  What makes this such a worthwhile read for anyone interested in applied analytics is the authors’ broad general business experience.  If you want, you can take their analytic approach completely out of its HR context and drop it wherever you are facing an analytic need.  Whether it’s the chapter on ‘Getting Started’, or ‘Data Issues’, or ‘Analytics in Action’, or my nomination for best-in-show, ‘Developing an Analytic Culture’, this is the analytics primer you’ve been searching for, no matter whether your business problem is quality, customers, process or people.

While the obvious application of analytics to human capital (see my prior post, “Strategic Workforce Planning”) is the cost impact, hence the $6 trillion reference above and its prominence throughout the book, I want to direct your attention to the issue from the other direction, the top-line versus the bottom line, and the mixed realities of the post-recession employment picture.  Moreover, I want to tie all of this into another important business paradigm – Treacy and Wiersema’s ‘Value Disciplines’.

Official unemployment in the US currently stands at a tad over 6%, with the unofficial rate, which counts those who have stopped looking for work, at 14%.  The comparable figure for the EU is slightly over 10%, with the extremes running from 5% in Germany to over 25% for Spain.  With that many people out of work, who needs workforce analytics?  Just run the ads and take the lowest bidder, right?

Not so fast.  If your chosen Value Discipline is Operational Efficiency, then you most likely aren’t hiring in the Western economies anyhow, you moved those jobs offshore long ago.  On the other hand, if your Value Discipline is Innovation or Customer Intimacy, cost is not your primary concern (a truism whether your specific business problem is workforce, or something else like quality, innovation, service or retention, and a truism your approach to analytics should reflect).

What should concern you is the shortage of STEM and skilled workers – the lingering high unemployment rate being a rather asymmetrical affair, primarily affecting the lower-skilled job classifications. Besides, you’re not looking for the cheapest engineer, scientist, cyber-security specialist, nurse or marketer.  There are multiple stories making the rounds of manufacturers in rural, low-wage regions of the country with 100 applicants for each shop floor position, but unable to find and attract the design and manufacturing engineers and the management to run the place.  Silicon Valley’s recently uncovered anti-poaching cartel is proof enough of the reality and seriousness of the issue.

The benefits of using an analytical approach to addressing STEM and skilled workforce management issues will show up in the revenues, not just on the bottom line, of those companies that depend on innovation, quality and customer service as the foundation of their business model, and who need the right people, not just the least expensive, to make that business model work.  As the saying goes, you can’t just save your way to prosperity, eventually you need to put the emphasis on growth.

Lest you think this STEM shortage is fairly straightforward and one-dimensional, let me scare you to death with reference to this series of posts on LinkedIn by Heather McGowan – “Jobs are Over: The Future is Income Generation” (the link is to Part 2 of this four-part series, Part 2 being where I became truly frightened for my children’s future) (and I won’t even get into the unnerving picture that Fitz-enz paints at the end of Chapter 7 of his book – I’ll leave that for you to discover – just don’t be taking any three-day weekends).

Here’s McGowan in her own words: “The era of using education to get a job, to build a pension, and to then retire is over. Not only is the world flat, but this is the end of employment as we once knew it. The future is one of life-long learning, serial short-term employment engagements, and the creation of a portfolio of passive and active income generation through monetization of excess capacity and marketable talents.”

Let that sink in for a bit.  A future that some might call entrepreneurship, but others might label ‘gigs’, everyone a temporary contract worker, no benefits, competing to create monetized portfolios (how would you have fared in your twenties or thirties, trying to start a family, under such conditions?).  Is your business ready to address a workforce strictly defined by contractual short-term gigs and monetized marketable talents, whatever that might mean?   While you might initially think you’ve got the upper hand when it comes to employment negotiations with such relatively insecure ‘income generation seekers’, focusing on cost, as I mentioned before, would be missing the point for most organizations.

I’m not saying McGowan is right (and I’m hoping she’s wrong), but I do have to admit that the trends she identifies are all already here. The ‘market economy’ is becoming the ‘market society’, with little indication that this socio-economic movement is slowing down, let alone running into obstacles that might halt it. In such an environment, and without a far-sighted, disciplined and analytic approach to workforce planning and management, you’ll end up with a top-line going nowhere fast, and a bottom-line spelled I-R-R-E-L-E-V-A-N-T.


Cloud Encounters of the Fifth kind

CSETI, the Center for the Study of Extraterrestrial Intelligence, defines a “Close Encounter of the Fifth kind” as an event that involves direct communication between aliens and humans (a “Close Encounter of the Third kind” would be one in which an animated creature is present).  So I think Spielberg misnamed his movie by two whole steps.  We most definitely had some direct communication going on there, beginning with the iconic five tone sequence of B flat, C, A flat, (octave lower) A flat, E flat, progressing to the point where the technician announces, “We have a translation interlock on their audio signal – We’re taking over this conversation, … NOW!”, and the computers and the keyboard and the mothership go about their business without any further human involvement.

While it has been a couple of years since we passed the point where more than half of all Web traffic became non-human, mostly search engines, bots and spam, when it comes to the internet as a whole, video and media / gaming still holds sway at 50%+ of the transmitted bits.  The peer-to-peer segment (P2P) currently comprises about 20% of the total, largely dominated by file sharing and financial trading, but masked within it is the fastest growing component:  machine-to-machine (M2M), expected to grow by more than an order of magnitude within just the next five years.  When it comes to the Internet of Things (IOT), the future clearly belongs to the Things.

There is a well-known saying that a stopped clock is right twice a day.  I once heard the philosopher of communications Marshall McLuhan described as a clock that was only right once in a hundred years, but when he was right he was dead on (i.e. “The medium is the message” and the “global village” from “Understanding Media” and “The Gutenberg Galaxy” respectively). Ken Olsen, the founder and former CEO of Digital Equipment Corporation would probably fall somewhere in between.

Olsen is today most infamously remembered for his “snake oil” comments regarding UNIX, and, when taken somewhat out of context, his dissing of the personal computer.  I worked for Olsen for eight years, and what gets left out of his PC comments was his vision for the future of information – “information as a utility” he called it.  Combined with the standardization of Ethernet in the 1980’s, he foresaw people plugging into the wall for information just as they might plug into an electric socket or connect a home to water and gas utilities.  Thus the ubiquitous VT220 terminal of the times, the smartest dumb terminal ever made.

Olsen’s vision was spot on, but his timing left a little to be desired.  That time is drawing closer, step-by-step, piece-by-piece.  We’re seeing components, such as the Cloud, the Web, the IOT and Analytics at the Edge, along with mobile and social technologies, grow, mature, connect and overlap.  However, as the author William Gibson noted, “The future is already here — it's just not very evenly distributed”.

The Cloud is one of those currently unevenly distributed elements.  I think the word itself will go out of non-meteorological use within a dozen years or so as information becomes more of a utility and cloud computing becomes a commonplace.  Mentioning the Cloud will one day date you just as talk of rotary phones and punch cards (and VT220’s) dates you now.

Four years ago when I started chairing financial conferences for the IE Group, the primary concern when it came to the Cloud was data security.  Four years later, CFO’s seem to have become more comfortable with the security issue, and now the reluctance comes from a different quarter.  Half of the effort and cost for any large system or process re-engineering initiative occurs on the front-end: the data, documentation, admin and policy and procedure clean-up.  You just can’t throw your mess over the wall and into the Cloud and expect it to work, and neither will your Cloud partner accept it.  What I hear from these same CFO's today is that once they’ve cleaned up their act, they rather like what they see, and then proceed by keeping the back-end of the project in-house.

But in this uneven world in which we find ourselves, I think it’s going to be the security issue that actually drives businesses to deploy more rapidly to the Cloud.  The Cloud providers, of course, will all insist that since it’s your data, the data security liability is yours and yours alone, but wouldn’t you rather be behind a firewall defended by TEAMS of cyber-security professionals, experienced teams separately dedicated to the primary threats emanating from China, Russia, India or even the U.S. (“Top Ten Hacking Countries”)?  With cyber threats becoming both more numerous and more complex all the time, it’s already tough enough for most firms to fill key data security roles, let alone compete with Google, Amazon, Microsoft and the NSA for the top talent.

Before information truly becomes a utility there are still some business and content creation models to be worked through, conflicting standards to be ironed out, and turf wars over gate-keeping and rent-seeking to be fought, but it does not appear that technology will be a barrier.  Brian Arthur’s “digital economy” is already here (“The Second Economy”) with the advent of large, non-financial but also non-product enterprises such as Facebook and Google.  Arthur’s digital economy continues to build out its nervous system, with the Things of the Internet not just talking to each other, but learning as they go (“Machine Learning”) – they are taking over this conversation, and they don’t care whether it’s cloudy or not.


A Holistic view of Product Quality

While managing quality within the four walls of your own operation is all well and good and totally necessary, both the market and your bottom line are demanding a more holistic, quality lifecycle approach, and in support of that aim there is a treasure trove of downstream data waiting to be tapped and exploited to improve product quality and customer satisfaction.

The impact of this downstream customer quality data can be badly described by this less-than-perfect analogy about having a baby.  It goes like this:  With little or no incoming inspection of supplier materiel, the baby is conceived and spends the next nine months in production.  Typically born in a hospital, the baby will undergo a number of invasive outgoing inspection procedures before being released for shipment.  Depending on the manufacturer’s health care plan, the baby will come with an 18 to 24 month warranty, during which time period the proud parents will make regular pediatric dealer visits for routine inoculation maintenance.  The warranty tends to expire before the (now) toddler begins to operate in less forgiving environments, with many of the post-warranty malfunctions being handled by the nearest urgent care or emergency room.  (I have three children, and have been to the emergency room four times during their childhood – all four times with the same child.  It would now appear, however, that the involuntary software upgrades (i.e. learning, experience) that accompanied these hardware failures have at last had their intended effect in ameliorating the culpable risk-taking behavior.)

Then they grow up, go off to college, move out of the house, and live for ANOTHER seventy years; seventy more years of additional hardware (and occasionally, software) breakdowns.  From this article, “The Eleven Most Implanted Medical Devices in America”, the top five are:  lens implants to replace cataracts, ear tubes, stents, artificial knees, and metal screws, pins, plates, and rods.  My son, the other one, the one who never went to the emergency room once despite playing lacrosse throughout both high school and college, is in grad school studying biomechanical engineering – it looks like there’s a bright future for him in either joints (knees and hips) or cardio (stents, pacemakers and defibrillators) should he so choose.

The poorly illustrated point here is, of course, that with humans, as with manufactured products, there are numerous downstream quality issues that never get reported back to the original hospital / manufacturing plant.  Just as you will likely never return to the hospital of your birth for any kind of treatment, but will be treated in a variety of specialized care centers across the country or the globe, your malfunctioning manufactured products are going to be repaired at a host of dealers, retailers and repair shops both in and out of your distribution network.  Through formal channels, back at the ranch, you are typically only going to see a fraction of all customer returns, warranty claims and product problems. This gives a false indication of customer return rates and reasons. Failure rates can differ greatly between manufacturing and customer returns. It would be like a medical / public health system ignoring the last seventy years of a person’s life after they left the care of their pediatrician.

You can see this effect especially with consumer electronics as they get more mobile and are used in environments and for purposes that were never foreseen back in the design lab.  While some types of failures, attributed to things like poor shipping or packaging, might surface quickly and consistently enough for a reliable root cause analysis and fix, other problems with the user interface might only show up after years and years of cycles operating in previously untested environments.

As you begin to tackle this, one data management issue that will become readily apparent is the need for common failure symptom descriptions across all stages of data collection. You won’t be able to diagnose the problem if everyone is describing the same thing in five different ways.
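Here is a hedged sketch of what that standardization might look like in practice: a small synonym table (entirely invented here) that maps free-text symptom reports from stores, dealers and repair shops onto one canonical code set before any failure analysis is attempted.

```python
# Canonical failure taxonomy and the field phrases that map to it (illustrative only)
CANONICAL = {
    "no_power":        ["dead on arrival", "won't turn on", "doesn't power up"],
    "screen_defect":   ["cracked screen", "dead pixels", "display flicker"],
    "battery_failure": ["won't hold charge", "battery drains", "swollen battery"],
}
SYNONYMS = {phrase: code for code, phrases in CANONICAL.items() for phrase in phrases}

def normalize(symptom_text: str) -> str:
    # Map a free-text field report onto the canonical code set
    text = symptom_text.strip().lower()
    for phrase, code in SYNONYMS.items():
        if phrase in text:
            return code
    return "unclassified"   # route to a human for taxonomy maintenance

print(normalize("Customer says unit WON'T TURN ON out of the box"))  # no_power
print(normalize("intermittent display flicker after 6 months"))      # screen_defect
```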

Collecting, processing and acting on this downstream data will become easier as the Internet of Things evolves into the Connected Consumer with every product communicating continuously with the mothership throughout its life, but awareness now, along with making the best of the data you currently have or can get to, can have a substantial impact on your total quality program.  Making the best of what’s available would most certainly include social media, sentiment and text analytics, where you can assess what’s being said about the quality of your product behind your back.

While it might be difficult today for a single business to justify providing financial inducements to downstream players to incentivize them to report their findings back to the manufacturer, depending on how the Internet gatekeepers of the future structure themselves, we might see the evolution of syndicated warranty / repair information service providers similar to those that operate on the POS side.  And if it’s not you taking advantage of this information, perhaps it will be one of your competitors.

 

(My special thanks to Jeff Pink, Director of Operations at ViaSat, whose presentation at the IE Group’s Manufacturing Analytics Summit this past May provided the inspiration for this topic)


Activity-Based Business Process Reengineering

I want to use SAS’ recent announcement of our Cost and Profitability Management solution as an opportunity to highlight an often overlooked but valuable application of activity-based costing: business process reengineering.   But first, just a brief description of Cost and Profitability Management’s new breakthrough capability:  In-memory model calculation.

SAS’ decision to rename SAS Activity-Based Management to SAS Cost and Profitability Management wasn’t just a cosmetic alteration. In-memory model calculation is a game changer when it comes to activity-based costing.  An order of magnitude performance improvement – not just 2X or even 4X, but ten times faster. Complex models that used to take an hour or overnight to update can now be run in a fraction of the time.  In order to best take advantage of this performance increase, SAS now not only provides automatic data integration with SAS Visual Analytics for immediate reporting and deeper analysis, we also added a What-If simulation capability (in-memory, of course) that lets managers play with costs and attributes to their heart’s content, saving their variants without impacting the integrity of the underlying model.

Alright, enough with the previews, let’s get on with our feature presentation.  Activity-based costing has always had the potential to add value to a wide variety of functions and processes.  I previously covered many of these in this post (“Activity-Based Management: The gift that keeps on giving”), but just to recap a few of the key points:

  • Benchmarking processes and functions for efficiency and re-engineering
  • Identifying a weak link in the Supply Chain
  • Developing a Shared Services model
  • Matching products and customers to the best channel
  • Market and Customer Segmentation for cross-sell and up-sell
  • Identifying idle resources and excess capacity
  • Providing the “return” component of a risk/return KPI
  • Supporting a regulatory approval process
  • Activity-based planning and budgeting of resources and capacity
  • Simulating changes to resource costs and processes
  • IT charge-backs and carbon emissions management
  • Moving from Cost Centers to Profit Centers

Not only is there value here for finance, product management, operations, marketing and IT, even the executive suite can get in on the act via that last bullet point – the ability to utilize profit as a metric throughout the organization, not just at the highest, consolidated levels.  But it’s the aspects of process reengineering touched on in the first item that I want to dive deeper into this time around.

Consider the importance and magnitude of most business process reengineering projects.  The time, the money, the systems, the consultants – usually driven by some compelling external or internal concern – an acquisition, a new product line, competition and a topsy-turvy market, the need to innovate or improve quality, to change the corporate culture, to simplify, to cut costs drastically.  To transform the business.

Much hard work and consternation will go into the restructuring decisions – what to combine, what to reduce, where to invest, what to leave unchanged.  Work flows are simulated and reimagined and remade, the work itself is analyzed and time studied and reconfigured or perhaps outsourced as core competencies are revaluated and reprioritized.

Even if the CEO has a single, clear vision of what they want to accomplish, translating that into a specific organizational redesign entails the evaluation of perhaps dozens of alternatives, often on a trial-and-error basis.  Finally, after all the hard work has transpired, the new processes are documented, policy is updated, the switch is thrown and the Newly Reengineered Company is announced to the world.

Wouldn’t it be nice if you could support this reengineering process with something more quantitative than just flow charts?  Wouldn’t it be nice if there was some way to quantitatively compare the alternatives other than with high-level spreadsheets?  Wouldn’t it be nice if a detailed financial impact could be assigned to both the revamped individual processes and to the restructured system as a whole?

Well, there is such a methodology, and it’s been in the management repertoire for several decades now:  activity-based costing.  While you are redesigning the processes, why not run $$$ – costs – through the restructured work flows?  Ideally you would benchmark your current process first, just to assure that all your proposed changes are making headway in the right direction.  Then you would have your process reengineering team work hand-in-hand with your activity-based costing and process modelers, creating financial simulations to match the proposed process changes.

In addition to a gut feeling, an educated guess or some highly suspect and highly rounded spreadsheet numbers about how effective the process redesign will turn out to be, you will get some fairly hard numbers on what each process will now cost and how it will impact the cost and profitability of every brand and product.  Instead of restructuring with merely the hope and goal of lowering costs by 20%, you can make that 20% a hard target and model towards it with every process redesign step you make until you get there.
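As a rough illustration of what "running costs through the restructured work flows" means, the sketch below compares the cost per order of a current and a redesigned process using simple activity rates and driver volumes. The activities and figures are made up and are not output from SAS Cost and Profitability Management.

```python
def process_cost(activities, orders):
    # activities: {name: (cost per driver unit, driver units consumed per order)}
    per_order = sum(rate * units for rate, units in activities.values())
    return per_order, per_order * orders

current = {
    "order entry":       (4.00, 1.0),
    "credit check":      (6.50, 1.0),
    "manual expediting": (12.00, 0.4),
    "invoicing":         (3.00, 1.0),
}
redesigned = {
    "order entry (self-service)": (1.50, 1.0),
    "credit check (automated)":   (2.00, 1.0),
    "exception handling":         (12.00, 0.1),
    "invoicing":                  (3.00, 1.0),
}

for label, model in [("current", current), ("redesigned", redesigned)]:
    per_order, total = process_cost(model, orders=50_000)
    print(f"{label:10s}: ${per_order:5.2f} per order, ${total:,.0f} per year")
```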

I can understand, however, why this hasn’t become standard practice quite yet.  Until SAS took activity-based costing in-memory, it may have been too cumbersome for businesses needing to restructure quickly.  But now with SAS Cost and Profitability Management, two terms you would not have thought you’d hear together in the same sentence, “agility” and “activity-based costing”, are there for you to make the most of as you transform your organization into an industry leader.


External data: Radar for your business

How much of your business performance (profit) is driven by external factors versus internal?  A figure of 85% compared to 15% was mentioned at last month’s Manufacturing Analytics Summit, and although I could not find the study mentioned to confirm, it feels about right to me.  Certainly more than half, right?  So, how much of your dashboard reporting and KPI metrics incorporate external data?

I have to say that I have never much liked the saying, ‘managing using only internal data is like driving using just the rear-view mirror’, but until now I’ve not contemplated the logical flaw in the argument nor attempted to devise a better analogy (although, if I haven’t in the past written a blog entitled, “Forecasting – Your one forward-looking piece of data – Treasure it!”, then I should have). The reality, however, is that’s not how we run our businesses nor how we utilize internal data.  Our use of internal data is more akin to IFR flying or sailing a ship – our chosen strategy gives us a goal, a direction, and we use our internal data as feedback to make course corrections.  As long as the strategy / direction is periodically evaluated and revised, and the skies / seas remain reasonably clear and calm, relying mostly on internal data is a (barely) passable approach to management.

Except of course that the skies are anything but clear, and we’re not so much flying or sailing as we are navigating a much more complex surface terrain. You can’t just set a course and go – there are obstacles, obstacles with names like ‘competition’, ‘suppliers’, ‘customers’, ‘regulations’, ‘weather/climate’, ‘financing’, ‘politics’, and ‘markets’ just to name the most common.  At a minimum you need radar, more generally you need vision (maybe even the business equivalent of ‘night vision’), and at best you’d like to be able to predict / forecast what’s over the hill and around the bend.

Google’s driverless cars provide a better analogy than does the rear-view mirror:  300,000+ miles with only two accidents, neither of which was the car’s fault (one happened while it was being manually driven by a human, and in the other case it was rear-ended at a stoplight by another driver).  It navigates busy city streets utilizing an interconnected system of frontward, backward and sideward radar, 360 degree cameras, and GPS coupled with a stored map of the local topology. If it even has a rear-view mirror, it’s only there to placate the redundant human.

Dun and Bradstreet was one of the sponsors at the Manufacturing Analytics conference, and spoke about their expansion into external data collection and analysis beyond just the customer/supplier credit scoring they are most known for.  They now make available a veritable treasure trove of data that can be used for customer segmentation, prospecting and supply chain risk management.  Nielsen and others have long provided syndicated media and POS data.  From financial institutions and organizations like Thomson Reuters there is an abundance of financial data regarding interest and currency rates and commodity prices / indices. Government and trade association data is available on subjects such as weather forecasting, market size and trends, risk management, and other industry and market news.  A local North Carolina firm of my acquaintance, Enlight Research, specializes in external data targeted specifically at the needs of the Board of Directors.  A number of consulting firms are even in the business of providing quantitative political risk assessments.

If 85% of your business results are driven by factors external to your organization, this kind of information needs to be an integral part of your executive dashboard and KPI’s.  Immediately.  No if’s, and’s or but’s.  That’s the bare minimum; that’s the radar; that’s what keeps you from colliding with the garbage truck abruptly pulling out of the alley.

But by itself, it is not enough.  It doesn’t tell you where to go nor how to set or adjust strategy, goals and direction.  It lacks the ‘vision’ and ‘prediction’ components.

If the aforementioned BI / dashboard / radar elements were married to a data visualization and analytics capability, then you’d really have something.  Such a platform would allow you to combine your external and internal data (even if they are siloed) for integrated reporting, KPI’s and metrics, forecasts, root cause analysis, and exploratory/insight-focused analysis.  Because it’s unlikely that the answer to your vision quest is in column H, row 53, page 16 of either your internal or external reports.  Because the right/best metric is likely one that uses external data in the denominator (or numerator).  Because 85% of your results are going to be impacted / driven by external factors that don’t go away just because they aren’t being surfaced on your dashboard.
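To make the "external data in the denominator" point concrete, here is a minimal sketch of a KPI that cannot be computed from internal data alone: market share by region, with the market size coming from a hypothetical syndicated feed. The figures are invented.

```python
import pandas as pd

# Internal sales by region (illustrative)
internal_sales = pd.DataFrame({
    "region": ["NA", "EMEA", "APAC"],
    "our_sales_musd": [420, 310, 150],
})

# External market size by region, e.g. from a syndicated data provider (illustrative)
external_market = pd.DataFrame({
    "region": ["NA", "EMEA", "APAC"],
    "market_size_musd": [2100, 1900, 1600],
})

# The KPI only exists once the two sources are joined: internal numerator, external denominator
kpi = internal_sales.merge(external_market, on="region")
kpi["market_share_pct"] = 100 * kpi["our_sales_musd"] / kpi["market_size_musd"]
print(kpi[["region", "market_share_pct"]])
```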

A couple of weeks ago I wrote about developing, managing and changing corporate cultures (“Changing corporate culture is like losing weight”).  I talked about the various types of “cultures” an organization might aim for, such as: learning, analytical, innovative, customer-centric, quality, risk-taking, agile, or continuous improvement.  But I left one out, an important one.  If I had to pick just one culture to focus on, it might very well be one built around the awareness and usage of external data as a primary component of the organization’s decision-making process.  It's not just about being data-driven – it's about being driven by the right data.


Analytics – Easy as One, Two, Tree

Insights from decision trees and other basic analytic techniques show that you don’t always need complex analytics to solve business problems and add value.  This was the message from Dr. James (Jim) Foster, Director of Research and Process Development, Archer Daniels Midland (ADM), at last month’s inaugural IE Group ‘Manufacturing Analytics Summit’ in Chicago, which I had the great privilege to chair for both days.

Process manufacturers like ADM face a different production and quality challenge than their discrete manufacturing counterparts.  It’s not so much keeping your parts and components within tolerance as it is keeping your entire process within its operating limits. Neither do you move discretely from state or condition “A” to condition “B”, from an unfinished blank to a finished part, but instead the condition of the work product changes continuously as it progresses in time through the process.  Most importantly, there is typically not just one acceptable, final product specification, but a whole set of process specs that vary as the reactants move through the system, or through time even if the physical product remains stationary, perhaps in a vat or mixing chamber.

Jim started with a fairly basic process, describing how he and his team go about using decision trees to analyze and search for the root causes of process production problems, which I will greatly simplify for the purposes of this post.

Imagine you are processing corn into one of its many possible end products, and that you need to regulate and control three production factors:  temperature, pH level, and enzyme concentration.  Research has determined that each of these three parameters must remain within certain basic limits in order to obtain a satisfactory final product: the temperature must not vary by, say, more than two degrees either way, the pH by 0.2, and the enzyme concentration by 2%.

You are of course monitoring this process, collecting data either continuously or at regular intervals, and either manually or automatically making corrections to keep everything copacetic.

But experience tells you that this is not all there is to the story.  Years of manufacturing knowledge will have shown that in combination, these three factors must further conform to even tighter tolerances for the process to succeed.  In other words, compared to when each factor is evaluated on a stand-alone basis, when combined and managed simultaneously, each tolerance might need to be cut in half.  Otherwise, experience shows that the viscosity may sometimes increase just enough to slow down the flow and gum up the works, even though by themselves the three parameters never departed sufficiently from the norm to degrade or destroy the end product, and therefore no alarms went off.

Example:  Let’s say that the mixing unit has lately been gumming up and stalling, even though no individual temperature, pH or enzyme alarms have been recorded.  A four-level decision tree analysis of the data (top level being the combined total, with each subsequent level separating out one of the three control factors) might show that 85% of these high viscosity incidents occur when both temperature and pH are operating at the very high end of their range, say 95%, and when the enzyme concentration is at the very low end.  You have just learned something valuable and can adjust the operating parameters accordingly, perhaps resetting all of them to 85% of their previous values, ensuring that now, even if they are all at their range extremes, the viscosity will remain acceptable.
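For the curious, here is a hedged sketch of that kind of analysis on synthetic data: viscosity incidents are planted only where temperature and pH run high while enzyme concentration runs low, and a shallow decision tree recovers exactly that combination. It is not Jim's actual model or data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
temp   = rng.uniform(68, 72, n)      # degrees, +/- 2 around a 70 degree setpoint
ph     = rng.uniform(4.8, 5.2, n)    # +/- 0.2 around 5.0
enzyme = rng.uniform(0.98, 1.02, n)  # +/- 2% around the nominal concentration

# An incident only when all three factors drift toward the bad corner together,
# even though no individual alarm limit is ever breached
incident = ((temp > 71.0) & (ph > 5.1) & (enzyme < 0.995)).astype(int)

# A shallow tree is enough to surface the combined condition
X = np.column_stack([temp, ph, enzyme])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, incident)
print(export_text(tree, feature_names=["temp", "pH", "enzyme"]))
```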

For experienced production engineers this would be a relatively easy problem that typically wouldn’t require even something as simple as a decision tree to diagnose.  But real world process production problems are seldom this simple.  First - oh, if there were only three variables!  The complexity increases exponentially with each additional parameter, and once you get above half a dozen or so, nothing remains intuitive.  Having the data and the analytic tools becomes imperative.

Secondly, real world problems don’t always have just one, single root cause, something even the most experienced of production engineers can forget.  Multi-causal problems, Jim stated, are where having the discipline of an analytic approach really pays off.

Jim shared with us a second, more complicated example of the same sort - high viscosity causing machine shutdowns.  Production engineers used to looking for only a single cause were baffled that none of their corrective measures seemed to work.  Jim’s decision tree analysis of the production data showed that it was not just one path down through the decision tree, but THREE separate and independent paths, that were leading to the production problems. Changing parameters to solve just one of the problem paths was exacerbating the issues caused by the other two routes.  It was only when all three of the root causes were addressed simultaneously that the production process returned to normal.

Decision trees are only one of many basic analytic tools available to be put to good use like this.  SAS Visual Analytics provides you with an entire toolbox, an entire workshop even, of easy-to-use analytic capabilities, such as autocharting, which automatically chooses the graph best-suited to display the selected data, or the "What does it mean" capability, which automatically identifies and explains the relationships between variables.

Whether you are a process or discrete manufacturer – or in any industry, for that matter – you’ve got the data and the industry expertise, and SAS has the tools; all you need is the analytic discipline.


Management productivity

Productivity.  How important is it?  As Nobel Prize winning economist Paul Krugman puts it, “Productivity isn’t everything, but in the long run it is almost everything.”

Typically the focus is on labor productivity, with the post-war results nothing short of phenomenal.  Total U.S. labor productivity doubled from the end of WW II to the mid 70’s, and has doubled again since then (wages are a different matter, keeping pace with productivity through the oil shock of 1973-74, but then stagnating in the four decades since).  Narrowing the focus just to manufacturing productivity in the United States, real manufacturing output per worker roughly doubled during the 30-year post-war period ending in the mid 70’s, then doubled again in just the next 20 years to the late 90’s, and has doubled once more in the last 15 years.

But what about the productivity of the other factors of production – raw materials, capital and management?  As for raw materials, you could say that the oil shock of 1973-74 changed everything.  Per capita steel consumption abruptly leveled off and has been on a slow decline in the developed world since the mid 70’s.  The same holds true for oil and most of the other basic raw materials as conservation, reduction and recycling took hold.  The developed world has been trending “green” ever since it was forced to sit in those long gas lines that awful winter.

I will have to leave any discussion of the productivity of capital for another time, as its behavior since the dot.com bust and repeal of Glass-Steagall in 1999-00 would best be described as schizophrenic.  Still, its steady post-war increase up through 1999 would seem to be in accord with the overall labor and raw material trends.

That leaves us with the fourth of the four factors of production – management.  A cursory review of the literature shows no significant studies of management productivity.  It would appear that it’s never been measured by anyone, perhaps not even operationally defined.  But that doesn’t mean we can’t speculate. What would you say – has management productivity increased over the past 35 years?  In line with labor productivity?

One thing I do know is that like everything else, management has gotten more complex.  First there’s the product technology, and then there’s the infrastructure technology.  What we make has gotten more complex (computer controlled, fuel-injected engines, anyone, or microwave ovens?) and the information technology we use to keep track of our operations has gotten more complex as well.  And now with more remote and telecommuting employees, basic supervision hasn’t gotten any easier either.

But if we were somehow measuring output per manager, would that metric be getting any better?  Span of control probably hasn’t changed much in 50 years.  Productivity gains in the service industries have significantly lagged those of the manufacturing sector (another good story for another time).  And who hasn’t heard stories and case studies of ‘bloated management’ structures being trimmed, flattened or decimated by the new sheriff in town.

As I made the subject of my very first “official” Value Alley post, “Stuck in the Middle”, it is middle management that has the most difficult management job. It’s here that the resources of the company meet the needs of the customer, and yet they are often ill equipped and poorly supported and trained for this role, divorced from the centers of power and strategy, and burdened with high expectations and stretch targets.

What are we doing to help them, and by extension, help our respective organizations?  What are we doing to improve management productivity?  While there are surely a dozen or more areas we could dissect and focus in on, here are four themes that I would make my initial priority.

  • A robust BI platform.  Attack your BI initiatives with the same determination you devote to operational automation.  Do they have everything they need on the dashboard?  And by everything, I mean everything automatically loaded and available.  Not 8 out of 9, with the ninth requiring manual input from an offline spreadsheet.  The manual steps are productivity killers.  Spreadsheets are fine in the domain of the finance or operations analyst, but suck the life out of management productivity.
  • Data integration.  Upstream and downstream.  Cross functional.  Internal and external.  Vendors and channels and customers.  NO SILOES!  If management needs to compare costs with shipments and inventory by territory and customer, and needs to get into five different systems or siloes to do that, you’re doing it wrong.
  • Process management.  As I discussed in my earlier “Stuck in the Middle” post, process is the realm of middle management – process design, process management, process monitoring.  Process is what enables repeatable value creation.  Analytic support for process modeling, simulation and decision management is key to improved process management productivity in a complex environment.
  • Workforce Analytics.  It’s 9:00 a.m. Monday morning; do you know where your employees are?  As I pointed out in this post, “Strategic Workforce Planning”, you probably have more system and IT resources invested in tracking office supplies and spare parts than you do in managing your critical human resources.  For most businesses, employee-related expenses (salaries, benefits, taxes, training) represent the single largest cost category. Workforce management is the one area that has adapted the least to a business environment of increased productivity and complexity.

Truthfully, though, it’s that last one, people management, that makes improving overall management productivity such a hard nut to crack.  Just last week I was advising a young grad student in the biological sciences, a friend of my son, on her desire to follow that up with an MBA (it's a pleasant surprise when your college-age children not only accept your advice, but recommend you to their friends).  What I told her was that “of the three primary elements to be “managed” – people, technology and money – people are by far the most difficult component.  As you get older and progress in your career and move upward and take on more and more responsibility, the biggest, toughest challenges you will face will have nothing at all to do with your science, with your technology, or with your cash flow.  The biggest challenge will be the people.  And the most important classes you will have taken will turn out not to have been calculus or biochemistry or corporate finance, but psychology, sociology, anthropology, philosophy and literature.  It will become WAY more important to understand the motivations of Iago or Yossarian than to be able to calculate the genetic variation of micro-ecosystems across time.”  Shakespeare and Freud on your dashboard (or at the very least, in your core skill set).


Changing corporate culture is like losing weight

Why is it so hard to achieve lasting, significant change in your corporate culture?  Because your organization is like a living organism, an organism that wants to maintain homeostasis against a changing environment.

My good friend Claire Breeze, co-founder of Relume and co-author of “The Challenger Spirit”, recently invited me to participate in a  creative day focused on finding out what gets in the way of adopting an approach to learning that is social, connected, informal and immediate – what I’ll call a ‘learning culture’ as shorthand.  In preparation, my starting point became:  What gets in the way of ANY type of cultural change?

There are any number of ‘cultures’ that an organization might adopt: learning, analytical, innovative, customer-centric, quality, risk-taking, agile, or continuous improvement.

The first step is to draw a picture of what the new, desired culture would look like. What will success look/feel like?  You need to draw this picture in terms of ‘behaviors’, not results.  What half-dozen or so key organizational and/or individual behaviors will be different in your target culture from “how things get done around here” today?

What I mean by focusing on behaviors would follow from the example of coaching a baseball team.  You want to win more games, make the playoffs, win the championship.  Those are goals and, after the fact, results.  But that’s not what you work on in practice, you don’t specifically practice “winning”.  The key behaviors of a winning team might be high batting average, low ERA, and a good fielding percentage.  So as a team in practice you work on turning the double play, the pitcher covering first base on an infield hit, executing a hit-and-run or a double steal, and individually you work on hitting mechanics or mastering a new pitch.  And then there’s the scouting, learning where to best pitch an opposing batter and where to play him in the outfield.

Bringing this back home, for a customer-focused culture, your key behaviors might be: 1) everyone who comes in contact with a customer has available to them a consistent, up-to-date set of customer data, 2) flexibility in the processes that directly affect a customer and the decentralized authority to implement such customer-friendly variations, and 3) having the right assortment, size, model, color, at the right time and the right place.

Next come the levers at your disposal to effect such change.  A list of the most common management tools would likely include:

  • Organizational structure and design
  • Rewards, incentives, recognition and performance management / metrics
  • Tools, resources, systems, data and processes
  • Hiring / selection / training / orientation
  • Leadership / stories / heroes / values / communication

If you’ve made it this far, that was the easy part.  Countless organizations have implemented such change management plans only to see little in long lasting change and results.  Why?  Because you haven’t addressed the feedback loop, the thermostat, that exists in every organization to maintain normalcy and stasis against a changing environment.  You’re trying to enact change against organizational processes that have evolved to specifically minimize change.

When you think about it, it’s a normal and expected response.  Things get done around here because disruptions are minimized.  I touched on this in a previous post, “Metrics for the Subconscious Organization”, where I pointed out that your organization pretty much runs itself on auto-pilot; most of the time it would hardly notice if you took a couple of months off.

It’s much like attempting to lose weight.  You try to make the behavioral changes - counting calories, eliminating fat or carbs, smaller portions, going vegan or gluten-free - but your body thinks your brain is trying to starve it, and so it reacts accordingly.  It’s what makes progress and permanent weight loss so difficult.  You have to reset the thermostat in order to achieve long-term results.

Your organization is the same way.  Yes, there might be some deliberate, sociopathic “blockers” out there, but by and large your employees are just trying to keep the wheels turning the best way they know how.  In some circumstances, using the starvation metaphor, they may even sense a threat to themselves, their positions, advancement, power or career, and they are simply doing what comes naturally and has likely worked well in the past – protecting themselves by resisting change.  It should come as no surprise that they would try to 'expel the invading foreign body' (cultural change) just as a human body would react to a bacterial infection - if stasis and health is to be maintained, this change in/from the environment must be dealt with.

Prescriptions for resetting the thermostat are hard to come by; if it were otherwise, change management, and weight loss, would be easy.  But, like biological organisms, your organization is also quite capable of evolution and adaptation in the face of a changing environment. I might suggest these approaches (likely reinforced in combination):

  • Change the environment.  Align your levers – organization, tools, incentives, training, stories -  so as to encourage evolution in the desired direction.
  • Disrupt the old environment.  Make it so that there is nothing to go back to.  This regularly happens with acquisitions, and can likely be done internally as well.  Create the pain often necessary for change – the “burning platform”.
  • Change the context.  Reinterpret the organization’s history, mission and story. The old behaviors only make sense in a particular context.  This means creating/providing a new context, and phasing out the old context such that the old behaviors are no longer effective or rewarded.  Again - apply your levers.
  • Address the safety issue:  Keep Maslow’s Hierarchy of Needs in mind and don’t just focus on the valiant goals of the upper stages; address your employees’ basic need for security, make it clear they still have an important role and a home in the new environment.

Claire, of course, already knows all of this, but I had to work through it myself to arrive at a point where I could address the specifics of the learning culture task she set me, such as:

  • Risk-taking, experimentation and trust
  • Systems-thinking
  • A focus on personal mastery
  • Shared learning, a shared vision and goals
  • Insistence on spending the training budget, development and training as more than an afterthought or a tick-in-the-box, a learning “contract” as part of performance management
  • A focus on people as unique, creative individuals instead of as assets or costs to be minimized
  • Leaders setting the example and showing commitment (and thus showing that learning is “safe” – when was the last time you heard of a VP taking a company-sponsored course?)
  • Post-mortems that focus on what was learned and not just what went wrong and who’s to blame
  • “Hero stories” that link success to past learnings
  • Clear objectives that align learning and training to business strategy
  • The right environment, perhaps personally tailored to individual learning/team member styles, valuing differences in what is learned and how it is learned
  • Build learning into the work environment / process - make knowledge sharing an organizational habit

Whatever type of culture you want to create – customer-focused, data-driven, quality, learning -  go through this process, identify what the new target culture would look like, what behaviors would become commonplace, what levers you have available to effect this change in behavior, what new tools, data, processes or systems might be required, and most importantly, what is your plan to overcome the stasis; what are your tactics to reset the organizational and cultural thermostat.


Agility and the Analytic Sandbox

Analytics gives us not just the ability but the imperative to separate our planning activities into two distinct segments – detailed planning that leads to budgets in support of execution, and high-level, analytic-enabled business/scenario planning.

My critique of Control Towers in this blog last time led me not only to consider the role and relationship a control tower  might play in the planning process, but also to evaluate the overall planning process itself.  This appraisal has in turn caused me to reassess the approach I introduced some time ago in this post, “Rolling forecasts, or Who ordered that?” and to restructure the diagram representing my view of the ideal business planning process.

In that previous structure I envisioned a three-level process structure, with the Strategic Plan and the Forecast at the highest level, informing an 18-month rolling PLAN (not forecast) in the middle tier, driving the Budget(s) at the lowest level.

In my last post I somewhat castigated the emerging universal control tower approach, which purports to solve practically all your business problems including hunger and world peace, an approach where the overstuffed control tower included capabilities spanning from analytics to simulation to alerts to dashboards.  I tried to make the case that the control tower is fundamentally tactical and best suited to supporting operational execution – it’s not a strategic platform.

But still, there does seem to be the need for a control-tower-like capability in support of strategy and high level planning, an agile capability that mirrors at the strategic level the executional agility a control tower provides at the operational level – an entity which I am going to label the Analytic Sandbox.  Not a new concept to be sure, but refining the definition of its proper role does help to clarify its relationship to the overall business planning process.

The key insight is to keep this analytic package together, but to deploy it where it does the most good, not in support of execution, but in support of scenario planning.  This in turn requires dividing our current monolithic planning process in two – the detailed single-scenario plan that eventually spawns an equally detailed budget, and the high-level business planning where agility has recently become paramount if not mandatory.  Resident inside of this business planning process is the Analytics Sandbox – a combination of agility with the power to know.

Elements of high-level Business Planning with the Analytic Sandbox:

  1. Scenario Planning (for options, pessimistic/optimistic, best case/worst case, etc …)
  2. Capital Planning
  3. What-If planning, Pricing
  4. Activity-Based Budgeting
  5. Data Exploration / insights (i.e. Tell me something I don’t know)
  6. Simulation
  7. Risk Management
  8. Strategy and Planning Dashboard  (linking strategy with objectives, goals and metrics)
  9. Forecasting / Predictive analytics
  10. Marketing Management / Social Media Analytics
  11. Supplier, Facility, IT, Human Resource and Capacity Planning
  12. Product Planning

Elements of detailed business planning and budgeting:

  1. S&OP / Supply and Demand Planning
  2. Optimization (inventory, production, logistics, marketing, etc …)
  3. Disaggregated forecasts
  4. Operational plans (PLM, production control, procurement, logistics, after-market service, maintenance, etc …)
  5. Departmental, Project and Program Budgets / Resource Allocation

Elements of Execution Management:

  1. Operational Dashboards
  2. Control Tower
  3. Quality Control
  4. Measurement, Metrics / Closed-loop and OODA Feedback (to strategy and business planning)
  5. Event Stream Processing / Decision Management
  6. Digital Marketing

While both concepts enable organizational agility, the difference between a Control Tower and an Analytics Sandbox, I think, is the scale of the response.  The Control Tower is about the agility to adjust near-term operations in order to meet customer expectations and obligations; the Analytic Sandbox is about the agility to adjust organizational strategy and associated business plans in the face of market forces.

We are accustomed to being agile with our operational execution – no organization gets through the day without making dozens if not thousands of little adjustments along the way.  Whether or not we have a formal Control Tower, we have been doing control-tower-like activities forever.  What has not yet become commonplace are the tools and approaches that allow us to extend that agility to the larger scale and scope of the entire organization and its strategic concerns.  Not commonplace yet, no, but available, YES – Analytics and the Analytic Sandbox, most definitely, YES!
