Services approach to master data integration


In my last series of posts, we looked at one of the most common issues with master data management (MDM) implementation: integrating existing applications with a newly populated master data repository. We examined some common use cases for master data and considered the key performance dimensions relevant to those use cases, such as the volume of master data transactions, the need to maintain a respectable response time, and the synchronization challenges of ensuring data currency.

Our conclusion was that we could develop a services model exposing the common types of master data functionality, such as entity search, retrieval of an entity’s master record, or management of entity relationships. There is still a need to develop the services layers that make up the operating model, and that means anticipating the architectural decisions that can ensure the levels of MDM performance satisfy the expectations of the community of master data consumers.
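To make the shape of that services model a little more concrete, here is a minimal sketch of those common operations expressed as an interface. All of the names here (MasterDataService, search_entities, and so on) are hypothetical illustrations, not the API of any particular MDM product.

```python
# A minimal sketch of the common master data services described above.
# All names are hypothetical illustrations, not a real product's API.
from abc import ABC, abstractmethod
from typing import Any


class MasterDataService(ABC):
    """Core master data operations exposed to consuming applications."""

    @abstractmethod
    def search_entities(self, entity_type: str, criteria: dict[str, Any]) -> list[str]:
        """Return identifiers of master entities matching the search criteria."""

    @abstractmethod
    def get_master_record(self, entity_id: str) -> dict[str, Any]:
        """Return the consolidated master record for one entity."""

    @abstractmethod
    def get_relationships(self, entity_id: str) -> list[tuple[str, str]]:
        """Return (relationship_type, related_entity_id) pairs for an entity."""
```

However the operations are ultimately packaged, keeping them behind a single interface like this is what lets the layers beneath change without disturbing the applications above.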

The reason this can be somewhat challenging, especially in a large enterprise, is that the de facto architecture of the environment may impose a variety of constraints that prevent compliance with the expected levels of performance. For example, a customer master repository may have been instantiated on a standalone server with limited network accessibility. At some point, an online application may want to search the customer master directly to identify known customers and access their behavior profiles. This may mean thousands (if not orders of magnitude more) of simultaneous users banging up against the master index and repository in a way that the standalone server simply can’t handle.
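One common mitigation for exactly this scenario is a read-through cache in front of the repository, so that repeated lookups never reach the standalone server at all. The sketch below is a simplified illustration under assumed names; fetch_from_repository is a hypothetical stand-in for whatever call actually reaches the customer master.

```python
# A simplified read-through cache for customer profile lookups; repeated
# requests are served from memory instead of hitting the standalone server.
import functools


def fetch_from_repository(customer_id: str) -> dict:
    # Placeholder for the actual query against the customer master repository.
    return {"customer_id": customer_id, "profile": "..."}


@functools.lru_cache(maxsize=100_000)
def get_customer_profile(customer_id: str) -> dict:
    """Return a customer profile, reaching the repository only on a cache miss."""
    return fetch_from_repository(customer_id)
```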

This is just one contrived example, but it demonstrates how the framework for serving master data must accommodate systemic needs that extend well beyond batch consolidation and index creation. The architecture must be flexible enough to maintain a synchronized master repository and index, and over the next few posts we will look at a tiered approach to enabling master data services, consisting of (from the bottom up, with a skeletal sketch after the list):

  • A data layer for managing the physical storage medium for shared master data;
  • An access layer that can enable access to the data layer in a way that satisfies the load, response time and bandwidth requirements of multiple applications running simultaneously; and
  • An application services layer comprising the core services, common to multiple applications and business processes, that provide business-oriented capabilities to production applications.
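As a rough illustration of how these tiers stack, the sketch below wires each layer to the one beneath it. All class and method names are hypothetical; the point is only the separation of responsibilities.

```python
# A skeletal illustration of the three tiers, bottom up. Each layer depends
# only on the layer beneath it; all names are hypothetical.
class DataLayer:
    """Manages the physical storage medium for shared master data."""

    def read(self, key: str) -> dict:
        # Placeholder: in practice, a query against the master repository.
        return {"key": key}


class AccessLayer:
    """Mediates load, response time, and bandwidth for many simultaneous consumers."""

    def __init__(self, data: DataLayer):
        self._data = data
        self._cache: dict[str, dict] = {}

    def fetch(self, key: str) -> dict:
        if key not in self._cache:  # simple read-through cache
            self._cache[key] = self._data.read(key)
        return self._cache[key]


class ApplicationServicesLayer:
    """Business-oriented services built on top of the access layer."""

    def __init__(self, access: AccessLayer):
        self._access = access

    def get_customer_master_record(self, customer_id: str) -> dict:
        return self._access.fetch(f"customer:{customer_id}")


# Wiring the layers together, bottom up:
services = ApplicationServicesLayer(AccessLayer(DataLayer()))
record = services.get_customer_master_record("12345")
```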

In essence, this tiered model is intended to provide a “plug-and-play” approach to design and development that reduces duplicated effort while providing room to extend for performance without jeopardizing the integration process. In our next set of posts, we will look at these layers from the top down.


About Author

David Loshin

President, Knowledge Integrity, Inc.

David Loshin, president of Knowledge Integrity, Inc., is a recognized thought leader and expert consultant in the areas of data quality, master data management and business intelligence. David is a prolific author on data management best practices, via the expert channel at b-eye-network.com and numerous books, white papers, and web seminars. His book, Business Intelligence: The Savvy Manager’s Guide (June 2003) has been hailed as a resource allowing readers to “gain an understanding of business intelligence, business management disciplines, data warehousing and how all of the pieces work together.” His book, Master Data Management, has been endorsed by data management industry leaders, and his valuable MDM insights can be reviewed at mdmbook.com. David is also the author of The Practitioner’s Guide to Data Quality Improvement. He can be reached at loshin@knowledge-integrity.com.
