Could the financial sector be doing more with their models if they borrowed innovation from elsewhere?


As the Basel Accords continue to drum up attention in the global financial markets, many institutions are looking at how they can strike a balance between capital requirements and competitive advantage. One area of focus is consumer credit risk modelling and scoring, as the more accurate and robust the models are, the lower the risk institutions face. While credit modelling has traditionally been based on linear models, it is becoming more apparent that non-linear techniques (e.g. Gradient Boosting, Neural Networks) can make significant improvements to the accuracy of default models and ultimately support an institution's bottom line.
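To make that comparison concrete, below is a minimal sketch (in Python with scikit-learn rather than SAS, purely for illustration) that fits a logistic regression and a gradient boosting model to synthetic default data and compares their discriminatory power. The data, feature count and model settings are invented for the example and are not taken from any real portfolio.

```python
# Illustrative comparison of a linear (logistic regression) and a non-linear
# (gradient boosting) default model on synthetic data. This is a sketch only;
# it is not the author's SAS implementation and the data are simulated.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic "customer" data: 1 = default, 0 = no default (5% default rate)
X, y = make_classification(n_samples=10_000, n_features=20, n_informative=8,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
gbm = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

for name, model in [("Logistic regression", logit), ("Gradient boosting", gbm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On a real portfolio the size of any uplift varies, and a comparison like this would need to be validated out of sample and out of time before drawing conclusions.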

Is it time for financial institutions to break free from using traditional linear models simply because “that is the way we have always done it” and accept the capabilities and advantages that more advanced predictive modelling techniques can bring?

Over the last few years (both during my PhD research and working in the financial sector), I have assessed and developed predictive modelling techniques which are applicable to estimating:

  • The probability of a customer going into default (PD).
  • The resultant loss experienced by the company given a customer defaults (LGD).
  • The exposure faced by an organisation at the point in time a customer defaults (EAD).

These three components underpin Pillar 1 of the Basel Accord, which prescribes how financial institutions must calculate their minimum capital requirements (the minimum amount of capital they are regulated to hold). They are therefore fundamental in determining how much capital institutions must set aside and, in turn, how much they can lend out to you and me in the form of personal loans, mortgages and other forms of credit.
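As a rough illustration of how the three components fit together, the snippet below computes the expected loss on a single exposure as PD × LGD × EAD. This is the simple expected-loss identity, not the full Basel risk-weight formula, and the figures are invented.

```python
# Illustrative only: expected loss for a single exposure as PD x LGD x EAD.
# The numbers are made up; the regulatory capital calculation uses the full
# Basel risk-weight formulas, not this simple product.
pd_estimate = 0.02      # probability of default over one year
lgd_estimate = 0.45     # loss given default (fraction of the exposure lost)
ead_estimate = 150_000  # exposure at default, in currency units

expected_loss = pd_estimate * lgd_estimate * ead_estimate
print(f"Expected loss: {expected_loss:,.0f}")  # 1,350
```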

Under the advanced internal ratings-based (AIRB) approach, banks are able to provide the regulators with their own internal estimates for each of these three components: PD, LGD and EAD. As a rule of thumb, linear regression models are used to estimate LGD and aspects of EAD, whereas logistic regression is used to derive PD. A typical LGD portfolio, for example, has a bi-modal distribution with two large point densities around 0 and 1 and a shallow distribution between the peaks (see figure). In practice it is common to apply a beta transformation to the target variable and then estimate the transformed value with a linear regression model. The research I have conducted suggests, however, that a substantial improvement in the estimation of LGD can be made with a two-stage approach, in which a neural network is trained on the residuals of a linear regression model, thereby combining the comprehensibility of linear regression with the added predictive power of a non-linear technique. (For a more detailed discussion of the issues involved in implementing a two-stage approach for estimating LGD, see Loterman et al., 2011.)
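A minimal sketch of that two-stage idea is shown below, again in Python rather than SAS and on entirely synthetic data. The particular beta transformation used here (a fitted beta CDF followed by the inverse normal CDF) and the network settings are illustrative assumptions, not the exact recipe used in the research or in Loterman et al. (2011).

```python
# Two-stage LGD sketch on synthetic data: stage 1 fits a linear regression to
# a beta-transformed LGD target, stage 2 fits a neural network to the stage-1
# residuals. Transformation and settings are illustrative assumptions.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 5))
# Synthetic bi-modal LGD target in (0, 1), loosely driven by the features
signal = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
lgd = np.clip(0.7 * signal + 0.3 * rng.beta(0.3, 0.3, size=n), 1e-4, 1 - 1e-4)

# Beta transformation: fitted beta CDF followed by the inverse normal CDF
a, b, loc, scale = stats.beta.fit(lgd, floc=0, fscale=1)
z = stats.norm.ppf(stats.beta.cdf(lgd, a, b))

# Stage 1: linear regression on the transformed target
stage1 = LinearRegression().fit(X, z)
residuals = z - stage1.predict(X)

# Stage 2: neural network learns the non-linear structure left in the residuals
stage2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(X, residuals)

# Combined prediction, mapped back to the original LGD scale
z_hat = stage1.predict(X) + stage2.predict(X)
lgd_hat = stats.beta.ppf(stats.norm.cdf(z_hat), a, b)
```

In practice the combined model would, of course, be evaluated out of sample and benchmarked against the one-stage linear model before any conclusions were drawn.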

One of the major consequences of the 2008 global financial crisis was that regulators clamped down on financial institutions to ensure that both the regulators and the institutions themselves fully understand the internal risks being taken and can demonstrate, beyond doubt, that they understand the underlying models. The problem with this requirement is that financial institutions have subsequently become more averse to adopting new ideas and ever more entrenched in the ways of the past. They have also invested a huge amount of time and resources in catching up with the regulators' demands and providing the required documentation. I am all for the stringent controls of regulatory bodies, but I believe there is still room for financial institutions to think outside the ‘white’ box and explore other approaches to model development.

There is merit in using linear regression techniques because of their clarity and ease of use, and, more importantly, advanced analytical techniques need to be fully understood before data is thrown into them. But with the right knowledge and an openness to trying new ideas, financial institutions could reap the benefits of applying novel analytical techniques, such as improved prediction rates and more accurate capital estimates.

The key would be for financial institutions to embrace approaches that are novel to the financial sector but have already been proven in a number of other fields, such as healthcare, fraud detection and marketing (neural networks for credit card fraud detection, for example, have been used successfully to spot abrupt changes in established spending behaviour and to recognise usage patterns typical of fraud). Using this kind of innovation to model their risk portfolios would also stop these institutions falling behind other sectors in the use of novel analytical techniques, and would challenge the regulators by demonstrating that advanced analytical techniques can in fact lead to better models and better estimates of risk.

For links to papers written by the author on applying SAS-based analytical modelling techniques in the financial sector, please see the following:


About Author

Iain Brown

Head of Data Science SAS UK&I / Adjunct Professor of Marketing Analytics

Dr. Iain Brown (Twitter: @IainLJBrown) is the Head of Data Science at SAS and Adjunct Professor of Marketing Analytics at the University of Southampton, working across the Financial Services sector and providing thought leadership in Risk, AI and Machine Learning. Prior to joining SAS, Iain worked in the Risk department of one of the largest UK retail banks.
