Considerations for master data application services


In a tiered approach to facilitating master data integration, the most critical step is paving a path for applications to seamlessly take advantage of the capabilities master data management (MDM) is intended to provide. Those capabilities include unique identification, access to the unified views of entities, the creation of new entity records, matching and duplicate analysis, and reviewing and managing cross-entity relationships.
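The capability set above can be pictured as a single application-facing interface. The sketch below is purely illustrative — the class and method names are hypothetical, not any particular MDM product's API — but it shows one way each listed capability maps to a service operation.

```python
from abc import ABC, abstractmethod

class MasterDataServices(ABC):
    """Hypothetical interface mirroring the MDM capabilities listed above."""

    @abstractmethod
    def resolve_identifier(self, attributes):
        """Unique identification: map descriptive attributes to an entity ID."""

    @abstractmethod
    def get_unified_view(self, entity_id):
        """Access the unified (consolidated) view of an entity."""

    @abstractmethod
    def create_entity(self, attributes):
        """Create a new entity record in the master repository."""

    @abstractmethod
    def find_matches(self, attributes):
        """Matching and duplicate analysis against existing records."""

    @abstractmethod
    def get_relationships(self, entity_id):
        """Review and manage cross-entity relationships."""
```

Concrete implementations would sit behind this interface, so applications depend on the capability, not on the repository's internal structure.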

Since I shared the details of these usage scenarios and core functions in last month’s series, I can focus this note on the thought processes driving the determination of which specific services are needed and where those services are invoked.

As a reminder, the conventional approach to MDM has been to focus first on data consolidation, followed by the development of a broad array of (generally nondescript) access and update services (succinctly, "get" and "put" routines). These types of services are needed, but from the application's perspective, accessing a master record and potentially updating one or more attributes is not only too granular but also places a significant burden on any application owner who wants to use the master data repositories.
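The contrast can be made concrete with a short sketch. The class and method names below are hypothetical, chosen only to show the difference in granularity: with raw "get"/"put" routines the application owner must know record IDs and attribute layouts, while a business-oriented service lets the application ask a question in its own terms.

```python
class MasterRepository:
    """Low-level "get"/"put" access: the caller must know record IDs
    and attribute names, and owns all the surrounding logic."""

    def __init__(self):
        self._records = {}

    def get(self, record_id):
        # Return a copy so callers cannot mutate the master record directly.
        return dict(self._records.get(record_id, {}))

    def put(self, record_id, **attributes):
        self._records.setdefault(record_id, {}).update(attributes)


class CustomerRecognitionService:
    """Application-facing service: the caller asks a business question,
    and the service encapsulates the lookup details."""

    def __init__(self, repo):
        self._repo = repo

    def has_purchased(self, record_id, product):
        record = self._repo.get(record_id)
        return product in record.get("purchases", [])


repo = MasterRepository()
repo.put("C001", name="Pat Lee", purchases=["widget"])
service = CustomerRecognitionService(repo)
print(service.has_purchased("C001", "widget"))  # True
```

The point is not the trivial logic but the division of responsibility: the service layer absorbs the repository's granularity so each application does not have to.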

An alternative approach looks at the integration points within the application where master data can improve the process. The simplest example might involve recognizing a customer. A naïve sales application may attempt to look up customer data to determine whether an individual has already purchased one of your products. In many instances exact matching will locate a known customer's record. However, introducing MDM's probabilistic matching increases the hit rate for customer searches within that sales process, thereby opening more opportunities for cross-selling.
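A small sketch illustrates why the hit rate improves. Using Python's standard-library `difflib.SequenceMatcher` as a stand-in for a real MDM matching engine (the customer data here is invented), a mistyped name defeats exact matching but still scores high enough under similarity matching to be recognized.

```python
from difflib import SequenceMatcher

# Hypothetical master customer records.
customers = {
    "C001": {"name": "Katherine Smith", "email": "ksmith@example.com"},
    "C002": {"name": "Robert Jones", "email": "rjones@example.com"},
}

def exact_match(name):
    """Return IDs of records whose name matches exactly."""
    return [cid for cid, rec in customers.items() if rec["name"] == name]

def probabilistic_match(name, threshold=0.85):
    """Return (ID, score) pairs whose similarity clears the threshold."""
    scored = []
    for cid, rec in customers.items():
        score = SequenceMatcher(None, name.lower(), rec["name"].lower()).ratio()
        if score >= threshold:
            scored.append((cid, round(score, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# A mistyped name defeats exact matching...
print(exact_match("Kathrine Smith"))   # → []
# ...but similarity scoring still finds the intended record.
print(probabilistic_match("Kathrine Smith"))
```

Production MDM matching engines use far richer techniques (phonetic encoding, attribute weighting across name, address and other fields), but the principle is the same: scored similarity recovers matches that strict equality misses.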

There are other typical application integration patterns, such as adding new information to a customer record, analyzing social connectivity, determining who the authority is within a household, or generating unique identifiers (a process that can be extremely complex in its own right). The basic idea, then, is to map the business process and walk through each step, reviewing the data domains touched as part of that step. If the step incorporates information about entities managed as master domains, note whether the access or touch point matches one of our common master data service patterns. If it does, ask whether accessing a unified view of the entity's data would improve the process's overall outcome; if it would, record that touch point as an integration point along with the master data usage scenario.
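The walkthrough above can be sketched as a simple filter over a process map. The process steps, domain names, and pattern names below are hypothetical examples; the point is the shape of the exercise: for each step, check whether it touches a master domain via a recognized service pattern, and flag it as an integration point if so.

```python
# Hypothetical master domains and common service patterns.
MASTER_DOMAINS = {"customer", "product"}
SERVICE_PATTERNS = {"lookup", "create", "match", "relationship"}

# A mapped business process: each step notes the domain it touches
# and the kind of access it performs.
process_steps = [
    {"step": "take order",         "domain": "customer",       "touch": "lookup"},
    {"step": "log call notes",     "domain": "support_ticket", "touch": "create"},
    {"step": "register new buyer", "domain": "customer",       "touch": "create"},
    {"step": "check stock",        "domain": "product",        "touch": "lookup"},
]

def integration_points(steps):
    """Flag steps that touch a master domain via a known service pattern."""
    points = []
    for s in steps:
        if s["domain"] in MASTER_DOMAINS and s["touch"] in SERVICE_PATTERNS:
            points.append((s["step"], s["domain"], s["touch"]))
    return points

for step, domain, touch in integration_points(process_steps):
    print(f"{step}: needs '{touch}' service for master domain '{domain}'")
```

Here "log call notes" drops out because support tickets are not a master domain; the remaining steps become candidates for master data services, pending the "would a unified view improve the outcome?" question.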

At the end of this process review, you will have identified the integration points along with the types of services required. This will provide the scope for designing the master data services supporting the application layer.


About Author

David Loshin

President, Knowledge Integrity, Inc.

David Loshin, president of Knowledge Integrity, Inc., is a recognized thought leader and expert consultant in the areas of data quality, master data management and business intelligence. David is a prolific author on data management best practices, via the expert channel at and numerous books, white papers and web seminars. His book, Business Intelligence: The Savvy Manager’s Guide (June 2003), has been hailed as a resource allowing readers to “gain an understanding of business intelligence, business management disciplines, data warehousing and how all of the pieces work together.” His book, Master Data Management, has been endorsed by data management industry leaders, and his valuable MDM insights can be reviewed at . David is also the author of The Practitioner’s Guide to Data Quality Improvement. He can be reached at
