As artificial intelligence (AI) becomes ever more pervasive, organisations are encountering a big issue: explainability. European law requires organisations to be able to explain decisions about individuals, such as whether to grant a loan, extend a line of credit, or even start a fraud investigation.

This is straightforward when the decisions are made by people following a set of rules: they can pinpoint the precise reason for the outcome. It is also relatively straightforward when you are using algorithms that follow rules: again, you can easily identify the sticking point. With AI, however, it becomes much harder. Machine learning algorithms learn from data and then apply what they have learned to make decisions, but it is not always clear exactly what they have learned, how they have learned it, or whether the result functions as intended.

The root of the problem

This opacity can lead to some serious problems, and even simpler analytical models are not immune. Andreas Vermeulen is Director of Technology and Head of Analytics, Automation and Digital Consultancy Services at consultancy Sopra Steria. He describes working with one financial services organisation.

“We did some work with one of the banks about their loan application scoring algorithm. It turned out that it was biased. They had been using it for 12 years and had no idea. It was hard for them to accept at first, until we explained to them in very simple terms what the model was actually doing. Then they realised immediately that this wasn’t what they had intended.”
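
In practice, a first-pass bias check like the one Vermeulen describes can be surprisingly simple: compare outcomes across applicant groups. Below is a minimal, hypothetical sketch in Python. The column names, the toy data, and the 0.8 threshold (the so-called "four-fifths" rule of thumb) are illustrative assumptions, not details of the bank's actual model.

```python
import pandas as pd

# Hypothetical loan decisions; the column names are assumptions for illustration.
decisions = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group.
rates = decisions.groupby("applicant_group")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest approval rate over highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: scoring outcomes look skewed across groups.")
```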

He cites another, similar case.

“We were looking at some fraud systems, and we explained what the system was checking to business experts who understood fraud. They said immediately, ‘No, that’s not what we want’, and they too had been running this model for some years. They’d never really looked at what the model was doing. However, as soon as we sat down with them and explained it in simple terms, they saw it immediately. They also realised why many of their fraud investigations had been unsuccessful: because they were looking for the wrong thing.”

It is therefore crucial to validate your models regularly and check that they are actually doing what you want. Paul Jones, Head of Technology for UK and Ireland at SAS, suggests that the last 18 months have underlined just how important that review cycle is, because the situation changed so rapidly.

“During COVID, a lot of the predictive models stopped functioning normally across all sectors. I saw a very good example in fraud management, working with one of the big banks. The patterns of fraud all changed, and there was a big growth in push payment fraud. It upset all the models, and suddenly they weren’t working.”
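
A basic safeguard against this kind of silent failure is to check routinely whether the data a model sees in production still looks like the data it was trained on. The sketch below is one illustrative approach, not how SAS or the bank did it: it runs a two-sample Kolmogorov–Smirnov test per feature, using synthetic data and assumed feature names.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical data: rows are transactions, columns are model features.
rng = np.random.default_rng(0)
training_data = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
recent_data = rng.normal(loc=0.6, scale=1.0, size=(1000, 3))  # shifted: drift

feature_names = ["amount", "frequency", "recipient_age"]  # assumed names

# Compare each feature's recent distribution to the training distribution.
# A small p-value suggests the distribution has shifted since training.
for i, name in enumerate(feature_names):
    stat, p_value = ks_2samp(training_data[:, i], recent_data[:, i])
    flag = "DRIFT?" if p_value < 0.01 else "ok"
    print(f"{name:15s} KS={stat:.3f} p={p_value:.4f} {flag}")
```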

Making things simple

Andreas’ examples also highlight the importance of being able to explain what your models are doing in simple terms. Paul Jones agrees that this is one of the key skills in analytics.

“If we want to make analytics truly democratised, we have to be able to explain it simply, and in such a way that anyone can understand it, not just data scientists. We’re talking about ‘the man on the street’ here. If we can’t do that, AI and more advanced analytics will not be accepted.”

Andreas Vermeulen agrees with this assessment.

“We have found that many of our clients are reluctant to accept deep learning algorithms. Typically, they say, ‘Well, I think this will probably do the job, but I don’t really understand it, so I want something simpler’. This has highlighted a real issue for us about explainability. We’ve started doing a lot of work with our data scientists to help them become storytellers. We want them to be able to take what they’ve done and explain it. We typically tell them to imagine they’re explaining it to children. If you can explain it at that level, there is a good chance the client will understand it. And if the client understands it, they are more likely to use it, and be happy with it—and also be able to see whether the model is still working in future.”
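
One practical way to begin that storytelling is to reduce a model to a ranked, plain-language list of what it actually pays attention to. The sketch below uses scikit-learn's permutation importance on a synthetic dataset; the model, feature names, and data are stand-ins for illustration, not Sopra Steria's method.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a loan or fraud dataset.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
feature_names = ["income", "loan_amount", "account_age",
                 "num_transactions", "region_code"]  # assumed labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt accuracy?
# A bigger drop means the model leans on that feature more.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# The "story": a plain-language ranking anyone can read.
for i in result.importances_mean.argsort()[::-1]:
    print(f"The model leans on '{feature_names[i]}' "
          f"(importance {result.importances_mean[i]:.3f})")
```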

Telling the story

Both Andreas and Paul agree that this ability to tell the story of your data—and your algorithm—is now an essential skill in data science. Andreas puts it simply.

“You have to be able to explain whatever you’re doing. If you do the maths, you need to explain it.”

This interview is part of a recent interview study by SAS on how the pandemic has accelerated digitalisation. Catch more conclusions from the study on post-pandemic transformation.

About Author

Colin Gray

Colin started his career training to be an actuary and holds a Certificate of Actuarial Techniques. Since moving to SAS, he has concentrated on the detection and prevention of fraud through the use of analytics.
