Is a calculation a model? Is a spreadsheet a model? Is the computer-based implementation of a mathematical solution to a problem a model?
These are questions that rest heavily on the minds of bankers these days. "Why?" you ask.
The answer is found in federal regulatory guidance on model risk management. If you are a banker, depending on your definition of what constitutes a model, you may or may not need to do some extra work.
Let me explain, but first some definitions are required. My source document is SR 11-7 guidance from the Federal Reserve Board, issued April 4, 2011. (The OCC issued similar guidance for national banks, federal savings associations, and thrifts).
Regulatory Defn: Model -- "a quantitative method, system, or approach that applies statistical, economic, financial or mathematical theories, techniques and assumptions to process input data into quantitative estimates." This definition encompasses "quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature."
That is a pretty open-ended definition, which is not surprising since oftentimes regulatory guidance leaves room for interpretation that varies due to differing facts and circumstances.
Models are useful things to have around, and bankers have come to rely on them heavily for certain applications, some of which expose the bank to significant risks. Predictive models fall into this category. Examples include credit-scoring models for loan approval, hedging models that use swaps and options to manage the balance sheet while protecting liquidity, models that determine capital adequacy, and so on. Regulators have come up with a definition for this risk exposure.
Regulatory Defn: Model Risk -- "the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports."
Banks are tasked by the regulators to manage model risk "both from individual models and in the aggregate." That means they need to decide what falls into the model category, and which models have the greatest potential adverse impact. Relative to model risk, bankers need to be able to:
- Recognize it
- Quantify it
- Know when a model goes wrong
- Know what to do when a model goes wrong
- Know when additional capital should be allocated to cover it
On the last point, banks have regulatory capital models, and they perform stress testing to demonstrate the degree to which their capital levels are sufficient under a variety of economic and market conditions. In March of last year, the Fed published its examination methodology and results relating to the Comprehensive Capital Analysis and Review (CCAR) program for the nineteen largest and most complex bank holding companies in the US. Stress testing for CCAR is an area where technology can help bankers better manage their process through workflow automation, deployment of a capital planning framework, aggregation of iterative scenario results, and visualization, exploration, and reporting.
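To make the aggregation step concrete, here is a minimal sketch in Python of rolling up stressed capital ratios across scenarios. The scenario names, the ratios, and the 5% tier 1 common floor are illustrative assumptions, not output from any actual CCAR submission.

```python
# A minimal sketch of aggregating stressed capital ratios across scenarios.
# All data, scenario names, and thresholds here are hypothetical illustrations.
import pandas as pd

# One row per (scenario, quarter), as if produced by an upstream stress engine
results = pd.DataFrame({
    "scenario": ["baseline", "baseline", "adverse", "adverse",
                 "severely_adverse", "severely_adverse"],
    "quarter": ["Q1", "Q2"] * 3,
    "tier1_common_ratio": [0.095, 0.097, 0.071, 0.066, 0.055, 0.049],
})

# For each scenario, the binding constraint is the minimum ratio over the horizon
summary = (results.groupby("scenario")["tier1_common_ratio"]
                  .agg(minimum="min", ending="last"))
summary["passes_5pct_floor"] = summary["minimum"] >= 0.05  # illustrative floor

print(summary)
```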
Models need to be scrutinized and challenged by staff who are not involved in their development or use -- i.e. experts who do not bear any negative consequences for finding fault with a model or the manner in which it is being used. There is a significant cost to performing that exercise, which is referred to as model validation.
Regulatory Defn: Model Validation -- "the set of processes or activities intended to verify that models are performing as expected, in line with their design objectives and business uses."
Fundamentally, model validators must determine whether a given model is fit for its intended purpose. Model validation is not a purely statistical exercise. That is because almost all input data used in business modeling is biased due to policy rules, inconsistencies in business practices, differences across markets and geographies, data collection and sampling rules governing what gets included versus excluded, how values are translated and standardized, inconsistencies in data definitions, variation in interpretation of the data, and so on. Model validators must understand the business environment in which a model will operate and the business objectives it was designed to support. They must also gauge the uncertainty due to unobservable or unreliable inputs.
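As one concrete example of such a check, the sketch below computes a population stability index (PSI) to flag drift between the population a model was developed on and the population it currently scores. The bin count, the synthetic score distributions, and the 0.25 rule-of-thumb alert threshold are illustrative assumptions.

```python
# A minimal sketch of one validation check: the population stability index (PSI),
# a common way to flag drift between development and current input populations.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two samples of a model input (or score) bucket by bucket."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty buckets
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
dev  = rng.normal(650, 50, 10_000)   # scores at model development (synthetic)
prod = rng.normal(635, 55, 10_000)   # scores in current production (synthetic)
print(f"PSI = {psi(dev, prod):.3f}")  # rule of thumb: > 0.25 warrants investigation
```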
In addition to its inputs, the validity of a model hinges on its processing and its outputs. The processing depends upon the choice of algorithm (i.e. solution method), calibration or tuning parameters, a set of assumptions, a set of limitations, agreed-upon objectives, and so on. The output consists of estimates with error bounds, business reports, and sufficient information to allow for outcome measurement, monitoring, and assessment of the model's robustness, stability, and accuracy.
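To illustrate the error-bounds point, here is a minimal sketch that attaches a bootstrap confidence interval to a monitored output metric. The data and the choice of metric (simple hit-rate accuracy) are stand-ins for whatever outcome measure a given model warrants.

```python
# A minimal sketch of putting error bounds on a monitored metric via the bootstrap.
import numpy as np

rng = np.random.default_rng(1)
n = 500
actual    = rng.integers(0, 2, n)   # observed outcomes (0/1), stand-in data
predicted = rng.integers(0, 2, n)   # model's binary decisions, stand-in data

point = np.mean(actual == predicted)

boot = []
for _ in range(2_000):
    idx = rng.integers(0, n, n)     # resample cases with replacement
    boot.append(np.mean(actual[idx] == predicted[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {point:.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")
```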
Finally, let's not forget about usage. Models that are perfectly developed and implemented can still be applied to the wrong population of customers, the wrong type of financial instrument, or the wrong set of transactions. Model validators must pay careful attention to all of these areas, again keeping in mind the business context as they perform their assessments.
Banks must develop and maintain effective model governance. Doing so entails creating a model risk management framework made up of a supportive corporate culture and values, a clear vision articulated by executive management, a defined risk appetite, policies and procedures, a testing regimen, a validation process, well-orchestrated lines of defense against undetected problems, clearly defined roles, responsibilities, and resource needs, and documentation. An inventory of models should also be maintained, and sufficient resources allocated, to ensure that each model is understood, that the risk exposure it represents is quantified for present and future operation, and that every model, its input data, and its key underlying assumptions are continuously verified and properly managed and maintained.
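As a concrete starting point for that inventory, the sketch below defines a minimal model inventory record in Python. The fields shown are a plausible baseline, not a regulatory prescription.

```python
# A minimal sketch of a model inventory record; fields are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    model_id: str
    name: str
    owner: str                          # accountable business owner
    purpose: str                        # intended business use
    risk_tier: int                      # e.g., 1 = highest materiality
    key_assumptions: list[str] = field(default_factory=list)
    last_validated: date | None = None
    validation_status: str = "pending"  # pending / approved / restricted / retired

entry = ModelInventoryEntry(
    model_id="CR-001",
    name="Retail credit scorecard",
    owner="Consumer Lending",
    purpose="Loan approval decisions",
    risk_tier=1,
    key_assumptions=["Through-the-cycle default rates", "Stable application mix"],
    last_validated=date(2013, 1, 15),
)
print(entry)
```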
This is a tall order, and it rightfully involves the board of directors and executive management, who must establish and direct an enterprise-wide program that addresses model risk management (MRM). How can directors decide on the proper allocation of resources to MRM? Much of the answer is coming from the regulators these days, but practicalities and experience will ultimately dictate what is truly necessary and what constitutes a waste of time. I suspect that regulatory expectations will continue to rise until banks can demonstrate that the bar needs to be lowered.
Bankers need a robust model risk framework in place, one that promotes consistent model risk management standards across the firm and that:
- Is efficient
- Reports on the entire program (aggregate as well as detailed performance measures from the bottom-up)
- Establishes appropriate limits on model risk
- Can perform stress testing that encompasses extreme use cases
- Facilitates risk mitigation and measurement of model risk before and after mitigation
- Measures residual model risk directly, based on model performance, and traces it to sources of risk
- Avoids cherry-picking and overly optimistic projections
Overall model complexity is on the increase, and banks are taking greater model risk due to increased reliance on, and expanded use of, models to:
- Value instruments and positions
- Quantify exposures
- Measure and manage all forms of risk
- Refine and further automate credit underwriting models
- Determine capital levels and reserve adequacy
- Perform profitability and performance analysis
The trend of increasing complexity will likely continue, fueled by greater computer processing power, more sophisticated and powerful business solution software, the pace of change in business, and the ever-present pressure for better and faster decisions.
Model Management & High Performance Computing
In response to these demands, technology can play an important supporting role by hastening the collection of proof points for the value of MRM, value that extends beyond regulatory-mandated testing to tangible business benefits stemming from model and process improvement.
An example that comes immediately to mind is where, due to time pressure to produce models quickly, developers may not go the extra mile in performing variable screening, sub-setting, and clustering to arrive at the choice that covers all plausible candidates while making the best business sense. Often, the extra time spent revisiting this area pays significant dividends. To be sure, creating a predictive model entails far more than throwing a few hundred variables into a stepwise selection algorithm. This is because issues such as quasi-complete separation, non-linearity, and redundancy (collinearity) crop up, which can complicate model interpretation, affect convergence of the estimation algorithm, and ultimately lead to incorrect decisions regarding variable selection. These issues can be addressed using a variety of techniques: collapsing the problem based on chi-square reduction in association testing (Greenacre's method), the use of logit plots and Spearman and Hoeffding correlation coefficients to screen model inputs, and ranking of alternative models based on the Bayesian Information Criterion (BIC) and a variety of other statistics (AIC, adjusted R-squared, area under the ROC curve, and the Brier score), just to name a few.
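To make two of those screening steps concrete, the sketch below ranks candidate inputs by Spearman rank correlation with the outcome and compares alternative logistic models by BIC, using scipy and statsmodels. The data are synthetic and the variable names ("util", "income", "age", "noise") are hypothetical.

```python
# A minimal sketch of variable screening and model ranking; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 2_000
df = pd.DataFrame({
    "util":   rng.random(n),                 # credit line utilization (synthetic)
    "income": rng.lognormal(10, 0.5, n),
    "age":    rng.integers(21, 75, n),
    "noise":  rng.random(n),                 # an irrelevant candidate input
})
# Generate a synthetic default flag driven by utilization and age
logit = -2 + 3 * df["util"] - 0.02 * (df["age"] - 40)
df["default"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Step 1: screen candidate inputs by rank correlation with the outcome
for col in ["util", "income", "age", "noise"]:
    rho, _ = spearmanr(df[col], df["default"])
    print(f"{col:>6}: Spearman rho = {rho:+.3f}")

# Step 2: rank nested candidate models by BIC (lower is better)
for inputs in (["util"], ["util", "age"], ["util", "age", "noise"]):
    X = sm.add_constant(df[inputs])
    fit = sm.Logit(df["default"], X).fit(disp=0)
    print(f"{inputs}: BIC = {fit.bic:.1f}")
```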
There are many other remedies that can be explored and may prove worthwhile, given some extra time. Additional areas where improvement can be realized include strategies for splitting data for model training and validation, and model tuning and fitting (a small illustrative sketch follows). If you are interested in learning more, SAS Education offers a course entitled Predictive Modeling Using Logistic Regression that covers the bases. I invite you to enroll!
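Here is a minimal sketch of that splitting-and-tuning workflow using scikit-learn; the synthetic data and the regularization grid are illustrative assumptions.

```python
# A minimal sketch of a holdout split plus simple hyperparameter tuning.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# Hold out a validation set that the tuning step never sees
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Tune the regularization strength by cross-validation on the training set only
search = GridSearchCV(LogisticRegression(max_iter=1_000),
                      param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
                      scoring="roc_auc", cv=5)
search.fit(X_train, y_train)

print("best C:", search.best_params_["C"])
print("holdout AUC:", round(search.score(X_valid, y_valid), 3))
```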
I suspect adoption of technological advances to spur MRM efforts will occur on an application-by-application and bank-by-bank basis. Those who choose to invest in technology will find there is significant help available.
Transparency is a case in point: all SAS solutions and tools are self-documenting white boxes, i.e. they provide transparency into the modeling process, the options elected, the assumptions made, and the results obtained, all in an intuitive and thoroughly documented computing environment.
Time is running short and regulatory expectations are high. You may have noticed that in the US, Basel III and Dodd-Frank rollouts have picked up steam, and Tuesday's Senate approval of Richard Cordray to direct the Consumer Financial Protection Bureau (CFPB) will certainly spur that agency's bank oversight program.
What is your institution's strategy on the MRM front? More specifically:
- Have you wrestled down the definition of a model? (Does Internal Audit agree?!)
- Do you have a complete inventory of your models to show your regulator?
- Can you quantify the exposure that each model represents?
- How confident are you that you have sufficient controls in place to manage the risks?
- Have you established, and has your board approved, an MRM framework?
- In the aggregate, how much of your institution's capital could be wiped out due to bad/misused models?
Responsible development and use of models requires knowing the risks they pose in addition to the rewards they offer. Modeling success rests on the quality of the data used to build and run models, the assumptions they rely on, the reliability of the process used to deploy them, the appropriateness of the way in which they are used, and the controls used to monitor their performance. MRM encompasses a lot of moving parts!
[My thanks to Naeem Siddiqi for his thought leadership emphasizing the critical need for scorecard developers, users, and validators to constantly keep in mind the business considerations that come into play all along the model life-cycle and model value chain. Failure to consider the full business context in model development and usage is a huge contributor to model risk. Business models are solutions to business problems that often try to predict human or market behavior as a critical component. Business models are not math or stat solutions to laboratory experiments that can be nearly perfectly controlled and measured! If you deal with business models and have not done so already, I encourage you to pick up a copy of his book, Credit Risk Scorecards -- Developing and Implementing Intelligent Credit Scoring. It provides a step-by-step guide that can, and should, be generalized for any modeling exercise. Naeem also teaches a course on the same subject through SAS Education that is definitely worth the investment of two days to learn how to better manage model development and usage in order to achieve the business objectives models were designed to deliver.]