Data architecture and IT infrastructure – A bank should design, build and maintain data architecture and IT infrastructure which fully supports its risk data aggregation capabilities and risk reporting practices not only in normal times but also during times of stress or crisis, while still meeting the other principles.
In the previous post in this series on BCBS 239, Brooke Upton discussed Principle 1 and outlined the need for accurate, trustworthy data, along with some steps you can take to meet the data governance requirements. In this post, I’ll discuss the need for data consistency and the role an integrated infrastructure plays.
The Basel Committee on Banking Supervision’s motivation for Principle 2 is to stress the importance of technology in meeting these requirements. The view of the committee is that risk data aggregation and reporting (RDAR) systems are critical features of a financial institution. So much so that they should play a role in business continuity planning and be subject to business impact analysis. With the hindsight of recent crises, the harmful effect on firms that lacked these critical capabilities is obvious.
In particular, the principle calls for data consistency. Institutional data dictionaries (metadata repositories) are important for ensuring consistent definitions, as well as for documenting resources and the relationships between those resources. For example, one should be able to search the term “liquidity” and see its definition as well as all related terms (e.g. LCR, NSFR), tables, business rules, dashboards, processes, models and associated individuals (such as the liquidity risk committee members and data stewards). Consistency in identifiers and naming conventions for legal entities, counterparties, customers and accounts is also expected.
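To make the data dictionary idea concrete, here is a minimal sketch of a metadata repository in Python. The terms, table names and steward names are invented for illustration; a real repository would be a governed, searchable system rather than an in-memory dictionary.

```python
# Illustrative metadata repository: each term carries a definition plus
# links to related terms, tables and stewards. All names are made up.
glossary = {
    "liquidity": {
        "definition": "The ability to meet cash and collateral obligations as they fall due.",
        "related_terms": ["LCR", "NSFR"],
        "tables": ["liq_positions", "funding_gaps"],          # hypothetical tables
        "stewards": ["liquidity risk committee"],
    },
    "LCR": {
        "definition": "Liquidity Coverage Ratio: HQLA divided by net cash outflows over 30 days.",
        "related_terms": ["liquidity"],
        "tables": ["hqla_holdings"],
        "stewards": ["treasury data steward"],
    },
}

def lookup(term):
    """Return a term's definition together with every linked resource."""
    entry = glossary[term]
    return {"term": term, **entry}

result = lookup("liquidity")
print(result["related_terms"])  # linked terms such as LCR and NSFR
```

The point of the structure is that one search surfaces not just a definition but every related rule, table and responsible party, which is exactly the consistency the principle asks for.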
Highly integrated systems are required to achieve this level of consistency and clarity. In the data dictionary example above, data must be fed from a company directory, data models, business rules and risk model repositories, as well as ETL jobs. This is not a trivial endeavor, and the master data management (MDM) and ETL systems have to be up to the task. The enormity of the task becomes clearer when you consider that legacy systems, point solutions, document management systems and internally developed web systems may all be running competing database management systems or technologies with different semantics for resource access, even when those resources house identical data, as is normally the case when data is fed to downstream niche systems.
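One core MDM task hiding in that paragraph is identifier resolution: the same counterparty recorded under different local IDs in different source systems must resolve to a single golden record. A toy sketch, with entirely made-up system names and identifiers:

```python
# Hypothetical crosswalk mapping (source system, local ID) pairs to a
# single master identifier. Systems and IDs below are illustrative only.
crosswalk = {
    ("loans_legacy", "CPTY-0042"): "MASTER-ACME-001",
    ("trading_web", "acme_corp"):  "MASTER-ACME-001",
    ("crm", "10087"):              "MASTER-ACME-001",
}

def master_id(system, local_id):
    """Resolve a system-local identifier to the golden-record identifier."""
    return crosswalk.get((system, local_id))

# Three different local IDs all resolve to the same legal entity.
print(master_id("loans_legacy", "CPTY-0042") == master_id("crm", "10087"))
```

Production MDM handles fuzzy matching, survivorship rules and stewardship workflows on top of this, but the lookup table captures the basic contract: consistent identifiers across competing platforms.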
Communication and controls become ever more important in light of these increased consistency and integration demands. Owners of these systems (both business and IT) must be able to verify and certify all data entering and leaving their systems. Failed audits are of concern for those institutions that cannot show the who, what, when and why of any data making its way into regulatory reports.
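The “who, what, when and why” requirement can be pictured as an audit record appended to every data movement, which a system owner can later certify. A hedged sketch; the field names are an assumption, not any regulatory standard:

```python
# Illustrative audit trail: every feed into a report appends a record
# capturing who moved what data, when, and why. Field names are assumed.
from datetime import datetime, timezone

audit_log = []

def record_feed(who, what, why, certified_by=None):
    """Append one who/what/when/why entry; certification comes later."""
    audit_log.append({
        "who": who,
        "what": what,
        "when": datetime.now(timezone.utc).isoformat(),
        "why": why,
        "certified_by": certified_by,
    })

record_feed("etl_service", "liq_positions -> LCR report", "daily regulatory run")

# Entries still awaiting owner certification would fail an audit check.
uncertified = [e for e in audit_log if e["certified_by"] is None]
print(len(uncertified))
```

An institution that can enumerate and certify such records for every figure in a regulatory report is in a far stronger position when audited.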
There is also the issue of aggregating data. It is very common to have point solutions from niche vendors covering specific areas of risk. The challenge then becomes integrating risk measures across an enterprise when those solutions may have used differing assumptions, horizons or market data sources. Having an aggregation engine that can combine this very complex data to show exposures at the aggregate level and drill down to the most granular level is crucial.
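The roll-up and drill-down behavior can be sketched in a few lines. The desks, business units and exposure figures below are invented for illustration, and a real engine would also reconcile the differing assumptions mentioned above before aggregating:

```python
# Toy aggregation engine: exposures tagged with a hierarchy can be
# rolled up to any level and drilled back down to granular records.
from collections import defaultdict

exposures = [
    {"unit": "rates",  "desk": "swaps",   "exposure": 120.0},  # illustrative figures
    {"unit": "rates",  "desk": "futures", "exposure": 80.0},
    {"unit": "credit", "desk": "bonds",   "exposure": 50.0},
]

def roll_up(records, level):
    """Sum exposure at the given hierarchy level (e.g. 'unit' or 'desk')."""
    totals = defaultdict(float)
    for r in records:
        totals[r[level]] += r["exposure"]
    return dict(totals)

def drill_down(records, unit):
    """Return the granular records behind one aggregate figure."""
    return [r for r in records if r["unit"] == unit]

print(roll_up(exposures, "unit"))      # {'rates': 200.0, 'credit': 50.0}
print(drill_down(exposures, "rates"))  # the two desk-level records behind 200.0
```

The hard part in practice is not the arithmetic but ensuring the tagged records from each point solution are comparable before they are summed.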
SAS offers an intuitive platform that addresses each of these issues. SAS’ risk engine can aggregate data from disparate platforms. Risk data dictionaries document all terms, hierarchies and related resources with ease of exploration and search. And MDM provides the integration of it all. SAS’ platform integrates with the most prevalent competing platforms.
In the next post, I will discuss the requirements in Principle 3 - Accuracy and integrity. In the meantime, learn more about how SAS can help you address BCBS 239 compliance.