If you have been following this thread for a while, you may have noticed a theme I keep returning to: data virtualization. I'm trying to close a potential gap in the integration plan: reconciling the performance requirements for data access (especially when the application and database services are expected to respond in real time) with the need for scalability as the number of consumer applications grows.
I believe that, under the right circumstances, data virtualization tools not only help address these performance and scalability issues but also help drive a standardized representation of shared master data via canonical models that front the database management layer.
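To make the idea of a canonical model a little more concrete, here is a minimal sketch in Python. The entity, field names, and source systems (a CRM and a billing system) are hypothetical illustrations, not part of any particular plan: the point is simply that each repository's native record is mapped into one shared shape that every consumer programs against.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CanonicalCustomer:
    """Canonical representation of the shared 'customer' master entity.

    Consumers work against this shape; the virtualization layer is
    responsible for mapping each underlying repository into it.
    """
    customer_id: str           # globally unique master identifier
    legal_name: str
    country_code: str          # e.g., ISO 3166-1 alpha-2
    email: Optional[str] = None

def from_crm_row(row: dict) -> CanonicalCustomer:
    """Hypothetical mapping from a CRM system's native record."""
    return CanonicalCustomer(
        customer_id=row["cust_no"],
        legal_name=row["name"],
        country_code=row["ctry"],
        email=row.get("email_addr"),
    )

def from_billing_row(row: dict) -> CanonicalCustomer:
    """Hypothetical mapping from a billing system's native record."""
    return CanonicalCustomer(
        customer_id=row["account_id"],
        legal_name=row["account_name"],
        country_code=row["country"],
        email=row.get("contact_email"),
    )
```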
That implies that data virtualization is part and parcel of the access services layer. At the same time, though, we have to be aware of the different types of applications that are potentially accessing the data, ranging from operational and transaction processing systems to analytical applications and business intelligence (BI) front ends for querying and reporting.
Engineering this layer involves developing the canonical models that represent the shared master data, coupled with configuring the virtualization and federation tools to point to the master data repositories. When properly configured, all master data transactions executed through the data manipulation layer are appropriately serialized to ensure cross-application consistency. Lastly, a complementary set of schemas must be provided for the BI and reporting consumers, especially if different end-user visualization tools are in use.
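As a rough illustration of what that engineering step might look like, here is a sketch that builds on the CanonicalCustomer example above. Everything here is assumed for illustration rather than drawn from any particular virtualization product: the VirtualMasterDataService class, the register_source call, the lock-based write serialization, and the flattened BI projection are stand-ins for what a real federation tool would provide through configuration.

```python
import threading
from typing import Callable, Dict, Iterable, List, Tuple

class VirtualMasterDataService:
    """Minimal federation facade over several master data repositories.

    Reads fan out to the configured repositories and are normalized into
    the canonical shape; writes go through a single serialized path so
    that updates from different applications apply in a consistent order.
    """

    def __init__(self) -> None:
        # repository name -> (fetch function, mapper to the canonical model)
        self._sources: Dict[str, Tuple[Callable[[], Iterable[dict]],
                                       Callable[[dict], "CanonicalCustomer"]]] = {}
        self._write_lock = threading.Lock()   # naive serialization of updates
        self._pending_writes: List[dict] = []

    def register_source(self, name: str,
                        fetch: Callable[[], Iterable[dict]],
                        to_canonical: Callable[[dict], "CanonicalCustomer"]) -> None:
        """Point the virtual layer at one underlying master data repository."""
        self._sources[name] = (fetch, to_canonical)

    def query_customers(self) -> List["CanonicalCustomer"]:
        """Federated read: pull from every source and return canonical records."""
        results: List["CanonicalCustomer"] = []
        for fetch, to_canonical in self._sources.values():
            results.extend(to_canonical(row) for row in fetch())
        return results

    def apply_update(self, update: dict) -> None:
        """Serialize writes so concurrent consumers see a consistent order."""
        with self._write_lock:
            self._pending_writes.append(update)
            # In a real deployment this is where the change would be pushed
            # down to the system of record and propagated to subscribers.

    def bi_view(self) -> List[dict]:
        """Flattened, read-only projection for BI and reporting consumers."""
        return [
            {"customer_id": c.customer_id,
             "legal_name": c.legal_name,
             "country": c.country_code}
            for c in self.query_customers()
        ]

# Example wiring, reusing the hypothetical mappers from the earlier sketch:
# service = VirtualMasterDataService()
# service.register_source("crm", fetch=lambda: crm_rows, to_canonical=from_crm_row)
# service.register_source("billing", fetch=lambda: billing_rows, to_canonical=from_billing_row)
```

The design choice worth noting is that the transactional consumers and the BI consumers share the same canonical read path, while the BI-facing projection is just a complementary, denormalized schema layered on top of it, which is the role the complementary schemas play in the paragraph above.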