Fintechs are turning up the heat in retail and corporate banking. As smaller, more agile providers have entered the banking market, customers are getting used to a higher level of service – a personalised, digital experience that guides them to make quicker, smarter decisions about their finances. For traditional banks to compete, they need to transform the way they operate. On the retail banking side, that means digitising customer-facing services. No queuing in branches, no paperwork. And when customers apply for a credit card or loan, they get a decision in seconds.
Meanwhile, on the corporate side, the aim of transformation is often to enable an everything-as-a-service (XaaS) strategy: building smart packaged offerings such as treasury-as-a-service or risk-management-as-a-service, which the bank can both consume in-house and provide to enterprise clients.
Data-driven digital transformation
To foster this type of digital business transformation, banks need to redesign both internal and customer-facing processes to embed data-driven decision making. By integrating intelligent automation and decisioning capabilities into their operations, banks can eliminate paperwork and manual processing. This will greatly improve service levels to customers while keeping the cost-to-serve to a minimum.
The creation of these data-driven services depends on the ability to design, build, test and deploy processes that embed predictive models using both well-established statistical methods and newer artificial intelligence and machine learning (AI/ML) techniques. The development life cycle for these models is inherently experimental. It’s vital to try different approaches, test the results, and iterate on the candidates that offer the greatest potential. To remain relevant in the digital age, organisations must deliver such experiments with agility and speed.
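The try-test-iterate loop described above can be sketched in a few lines of Python. Everything here is illustrative – the candidate "models" are stand-in scoring functions with made-up scores, not real training code – but it captures the shape of the workflow: score each approach, fail fast on the weak ones, promote the best.

```python
# Minimal sketch of an experiment loop: score several candidate models
# on held-out data, discard weak ones early, and keep the best performer.
# Candidates here are stand-in functions returning fixed scores.

def run_experiments(candidates, validation_data, min_score=0.60):
    """Score every candidate; return the best performer and all results."""
    results = {}
    for name, train_and_score in candidates.items():
        score = train_and_score(validation_data)
        results[name] = score
        if score < min_score:
            print(f"{name}: {score:.2f} - below threshold, failing fast")
        else:
            print(f"{name}: {score:.2f} - promising, keep iterating")
    best = max(results, key=results.get)
    return best, results

# Illustrative candidates: a statistical model, a tree ensemble, a rules engine.
candidates = {
    "logistic_regression": lambda data: 0.74,
    "gradient_boosting":   lambda data: 0.81,
    "hand_tuned_rules":    lambda data: 0.55,
}

best, results = run_experiments(candidates, validation_data=None)
print("promote to production:", best)
```

In a real project the scoring functions would train on anonymised data and the threshold would come from a business case, but the decision logic – fail fast below the bar, promote the winner – stays the same.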
The obstacle of legacy infrastructure
The problem is that banks’ traditional IT architectures – built around legacy on-premises systems – are a uniquely bad environment for developing these models. Due to the experimental nature of the models, it’s very difficult to forecast what type of infrastructure banks will need for upcoming projects. For example, different machine learning algorithms run best on hardware that has been optimised for that category of model building. If you invest in a cluster of servers with a particular configuration of memory and processors, it may only be suitable for a small subset of the work you actually need to do. And every time you need to change your approach, you’ll face high fixed costs and a long lead time to get the right infrastructure in place.
Instead, you need an IT architecture that allows you to set up experiments quickly and manage them flexibly. When an idea doesn’t work out, you should have the ability to fail fast and cut your losses. And when an idea succeeds, you need to get it into production rapidly and roll it out for enterprise-scale deployment.
The promise of cloud-based analytics
The cloud is the perfect environment for these exploratory projects. It gives you the freedom to spin up almost any type of infrastructure within minutes, and either scale it or shut it down instantly depending on the results.
Cloud environments also free you from dependencies on departmental silos and the quirks of your internal network. They give you a green-field site where cross-functional teams can collaborate freely, enabling you to build models that combine domain knowledge from different areas of the bank and create opportunities for XaaS offerings that would never have been possible in the past.
While most of the major public cloud providers now offer a range of analytics-specific infrastructure services, these services come at a price. Once your data and models live in a provider’s proprietary repository, they can be difficult to move out again. You’re locked into that infrastructure for the foreseeable future.
Besides the commercial implications, this lock-in poses a major regulatory problem for banks. According to the latest consultation paper on outsourcing and third-party risk management from the Prudential Regulation Authority (PRA), regulators expect banks to be able to port any outsourced services over to another provider, or bring them back in-house, without any risk to business continuity.
The right tool for the job
I’ve had conversations about moving to the cloud with CIOs at banks of various sizes, and this issue of portability has been a recurring theme. They are looking for analytics solutions that work with any vendor and run on any cloud platform – or move between platforms – without significant disruption. In fact, since many banking use cases involve analysing data that is too sensitive to store outside the internal network, one of the most-requested offerings is a hybrid cloud/on-premises solution. Banks could then run experimental projects with anonymised data in the cloud, and bring the successful models back into their own data centre for deployment in production.
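One simple building block for such a hybrid workflow is pseudonymising customer identifiers before any record leaves the internal network. Here is a minimal sketch using only Python’s standard library; the key handling is purely illustrative (a real deployment would pull the key from a managed secret store and put the whole scheme through a formal anonymisation review):

```python
import hashlib
import hmac

# Replace direct identifiers with keyed hashes before data leaves the
# internal network. The key stays on-premises, so cloud-side records
# cannot be linked back to real customers without it.
SECRET_KEY = b"kept-in-the-banks-internal-secret-store"  # illustrative only

def pseudonymise(customer_id: str) -> str:
    """Deterministic keyed hash: same input -> same token, so joins still work."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "GB-000123", "balance": 1500.00}
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(safe_record)
```

Because the hash is deterministic, data scientists in the cloud can still join tables and track a customer across datasets, without ever seeing who that customer is.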
Finally, while there’s a lot of buzz around AI/ML techniques, it’s important to recognise that they are not always the best option. Traditional statistical methods can be equally powerful, cost less to maintain, and are often easier to explain and audit – an increasingly important capability, as a recent legal case in the Netherlands demonstrates. My advice is always that banks should look for a single platform that gives equal support to both statistical and AI/ML modelling techniques and provides easy-to-use visualisations that make models easier to interpret. This allows your data scientists to pick the best tool for the job, and makes it easier for you to ensure the safe and responsible use of your data.
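The interpretability point is easy to show concretely. Below is a tiny logistic regression – a classic statistical method – trained from scratch in plain Python on made-up "credit" data. The fitted weights are directly readable: a positive weight on income and a negative weight on existing debt, exactly the kind of explanation an auditor can follow. The data, features and learning rate are all illustrative.

```python
import math

# Toy credit-decision data: (income, debt, approved), all values made up.
data = [
    (0.9, 0.1, 1), (0.8, 0.3, 1), (0.7, 0.2, 1), (0.6, 0.5, 1),
    (0.3, 0.8, 0), (0.2, 0.6, 0), (0.4, 0.9, 0), (0.1, 0.4, 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w_income, w_debt, bias = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(2000):
    for income, debt, label in data:
        p = sigmoid(w_income * income + w_debt * debt + bias)
        err = p - label
        w_income -= lr * err * income
        w_debt   -= lr * err * debt
        bias     -= lr * err

# The model's reasoning is visible in its weights.
print(f"weight(income) = {w_income:+.2f}")  # positive: income helps approval
print(f"weight(debt)   = {w_debt:+.2f}")    # negative: debt hurts approval
```

Contrast this with a deep neural network or large tree ensemble, where no individual parameter carries a meaning you could put in front of a regulator – that is the trade-off the paragraph above is pointing at.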
We’re working with a number of leading banks to power their digital transformation initiatives and build towards the XaaS future in the cloud. Find out more about what’s possible with cloud computing.