Stuck on the Fort Duquesne Bridge in Pittsburgh one afternoon during rush hour, I was pondering a recent conversation with an old client. The client, a mid-sized financial institution, is considering master data management (MDM) to help them better understand their customer relationships.
The problem: they had implemented CRM a few years back, and it was not considered a resounding success. While they did gain needed workflow capabilities from the application, it never delivered the 360-degree customer view they were hoping for.
How, they asked, can we ensure a better outcome this time?
To answer that, I suggested reviewing how the company got there in the first place. A diverse set of issues and roadblocks combined to derail their original CRM objectives. For starters, some of their key legacy systems were third-party applications hosted outside the bank. Increasing the amount of data or the frequency of feeds from those systems meant big price tags and four- to six-month time frames.
Even worse, the vendor contracts were owned by people outside the CRM implementation team, and navigating the bank’s own internal bureaucracy was frustrating and time consuming. Modifying the aging customer information factory (CIF) and other internal systems to allow real-time connections with CRM also proved to be a constant problem. In the case of the CIF, they ultimately had to require users to enter new customers twice: once into the CRM and again into the CIF. As you can imagine, that did not go over well.
Compounding these technical issues (which ultimately could have been solved with deep pockets and even deeper patience) was the culture. Past business stakeholders had taken a conservative approach to customer matching rules, and the team could not forge a consensus on loosening them to take advantage of newer matching technology. And when users did identify issues with customer information in the system, finding business owners to shepherd the quality assurance process was difficult.
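To make that matching-rule debate concrete, here is a minimal sketch (the fields, records, and threshold are hypothetical illustrations, not my client’s actual rules). A conservative rule merges customer records only on exact key fields and misses duplicates when those fields are missing or inconsistent; a looser, similarity-based rule consolidates more duplicates but carries a false-merge risk that only business stakeholders can decide to accept.

```python
# Illustrative only: conservative vs. similarity-based customer matching.
from difflib import SequenceMatcher

def conservative_match(a, b):
    # Merge only when tax ID and date of birth match exactly.
    return a["tax_id"] == b["tax_id"] and a["dob"] == b["dob"]

def fuzzy_match(a, b, threshold=0.9):
    # Merge when names are highly similar and date of birth matches.
    # The threshold is precisely where business consensus is needed:
    # lower it and you consolidate more records, but risk merging
    # two genuinely different customers.
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_sim >= threshold and a["dob"] == b["dob"]

rec1 = {"name": "Jon Q. Smith", "dob": "1970-05-01", "tax_id": "123-45-6789"}
rec2 = {"name": "John Q Smith", "dob": "1970-05-01", "tax_id": ""}  # tax ID missing

print(conservative_match(rec1, rec2))  # False: the duplicate slips through
print(fuzzy_match(rec1, rec2))         # True: consolidated, with some risk
```

Neither rule is “correct” in the abstract; the trade-off between missed duplicates and false merges is a business decision, which is why the lack of stakeholder consensus stalled my client.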
As I waited, unmoving in the traffic, I had an epiphany. My client’s CRM system was strikingly similar to the very bridge I was sitting on. This iconic structure is both well loved (or hated, depending on the time of day you cross it) and essential to the Pittsburgh transportation infrastructure. It connects downtown and parts south and east to our North Shore – home to the Steelers, the Pirates, a vibrant entertainment district, and many expanding neighborhoods.
However, it was not always this way. For seven long years, it was the laughingstock of the nation, known universally as “the bridge to nowhere.” Completed (almost) in 1963, it sat unused until 1970, spanning the Allegheny River but stopping in mid-air just short of the north bank because the builders had failed to secure access rights for the ramp connecting the bridge to land.
Ditto for my old client’s CRM system, although to be fair, it is not quite as useless as my bridge was. While users loved the functionality and screen design of the new system, the data inside it left something to be desired, primarily because the builders (in this case, the implementers) were unable to secure the access and manipulation rights to the data they needed to populate the system. From a data standpoint, it was a bridge to nowhere.
So back to the original question – how to protect MDM from a similar fate?
The good news is that I think there is a viable solution to their problem – a better one than “take two aspirin and call me in seven years.” My recommendation is that they launch a highly targeted data governance initiative engineered around a small, controlled project – in this case, MDM (they have no formal governance today).
In my next blog post, I’ll discuss how to build that data governance strategy and put it to work supporting an MDM effort.