Algorithm auditing is a growing field as companies increasingly rely on machine learning algorithms to make decisions. But what do business managers need to know about these audits?
We have a responsibility to audit our algorithms
Artificial intelligence (AI) algorithms have an increasingly central role in decision making in our society today. These algorithms learn patterns and features from the data they are fed. The intention is to increase efficiency and to overcome the errors and biases that come with manual human decisions. However, since those algorithms depend on a training data set generated by a human team, they could potentially amplify bias and discrimination instead of correcting for it. AI is only as good as the data that powers it: if biased data is fed into an algorithm, discriminatory results will follow.
The imminent arrival of the EU’s General Data Protection Regulation (GDPR) will place a duty on companies to be able to explain the results of algorithmic decisions to customers. Regular algorithm audits are necessary to ensure that this is possible, and that algorithms continue to perform as intended.
Whether or not your organization holds data about EU citizens (and GDPR therefore applies), it is arguable that you have an ethical responsibility to audit your algorithms. AI systems are subject to the biases of their programmers, whether intentional or not, and only regular independent audit can overcome this type of systemic bias. We have a responsibility to understand the data and the techniques we apply so that we can make sure we are not amplifying human bias. We need to check for integrity by addressing biases in the data used to build the algorithms, and we need to look into how we define success so that the algorithms support non-biased goals. Algorithms are far more efficient than we are; they can make a million decisions per minute, and if they are biased they will do a great deal of damage very quickly.
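What might such a data integrity check look like? Here is a minimal sketch in Python, assuming a pandas DataFrame with hypothetical column names; it compares positive-outcome rates across groups, a common first test for bias in training data:

```python
# Minimal sketch of a pre-training data audit: compare outcome rates
# across a protected attribute. The column names ("gender", "approved")
# are hypothetical placeholders for your own data.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    A common rule of thumb (the "80% rule") treats a ratio below 0.8
    as a signal that the data may encode bias worth investigating.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Example usage with toy data
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0, 0, 1, 1, 1, 0, 1, 1],
})
print(disparate_impact_ratio(df, "gender", "approved"))  # ~0.42 -> investigate
```

A ratio well below 1 does not prove discrimination, but it tells you where to start asking questions before the model is ever trained.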
Audit is a relatively simple principle, but may be harder in practice
The idea behind audit, including algorithm audit, is relatively simple: to examine the inputs, outputs and outcomes in a reasonably scientific way. The purpose is to ensure that they are consistent with each other, and with the intention behind the system. In practice, of course, this may be harder to achieve. Some algorithms are relatively easy to audit, for instance those based on decision trees or logistic regression, because their weights and input variables can be observed directly. Auditing this type of algorithm involves examining the data flows, reviewing assumptions and model weights where possible, and checking outcomes.
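As an illustration, here is a minimal sketch (in Python, using scikit-learn on toy data) of how an auditor might read the weights of a logistic regression model directly; the feature names are hypothetical:

```python
# Minimal sketch of a white-box audit: inspect the learned weights of a
# logistic regression model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "age", "postcode_risk_score"]  # hypothetical inputs
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)  # toy target

model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly a feature pushes decisions up or down.
# A large weight on a proxy variable (e.g. postcode) is a red flag for bias.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>22}: {coef:+.3f}")
```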
Many machine learning algorithms, such as deep neural networks, are black boxes by nature and much harder to audit. It is extremely difficult to look inside those algorithms to establish why a certain result was returned. To address this issue, researchers are exploring ways to make these systems give some approximation of their workings to engineers and end users, and business managers may need to decide whether the usefulness of the algorithm outweighs the concerns about explaining its decisions when necessary.
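One such approximation technique is a global surrogate: train a simple, interpretable model to mimic the black box's predictions, then inspect the simple model instead. A minimal sketch, with a random forest standing in for the opaque model and toy data throughout:

```python
# Minimal sketch of a global surrogate audit: fit an interpretable
# decision tree to the *predictions* of a black-box model, then read
# the tree. The random forest here stands in for any opaque model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = ((X[:, 0] > 0) & (X[:, 2] < 0.5)).astype(int)  # toy target

black_box = RandomForestClassifier(n_estimators=100).fit(X, y)

# The surrogate learns to predict what the black box predicts.
surrogate = DecisionTreeClassifier(max_depth=3)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```

The fidelity score matters: a surrogate that mimics the black box poorly explains nothing, however readable its rules are.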
You may be able to ‘set a thief to catch a thief’
One possible option is to use algorithms to audit algorithms. This has been done, for example, to audit bail decisions in criminal justice cases. It works by getting another group of programmers to build a model to address the same problem and comparing the results. The issue, of course, is that the auditing algorithm may have similar biases built in, so you may come to the same (incorrect) conclusions as the original.
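A minimal sketch of the idea, assuming toy data and two hypothetical model choices: train a second model independently and flag the cases where the two disagree for closer inspection:

```python
# Minimal sketch of algorithm-vs-algorithm auditing: an independently
# built second model reviews the first one's decisions, and any
# disagreement is flagged. Data and model choices are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy target

original = GradientBoostingClassifier().fit(X[:800], y[:800])
audit_model = LogisticRegression().fit(X[:800], y[:800])

# Flag held-out cases where the two models disagree.
X_new = X[800:]
disagree = original.predict(X_new) != audit_model.predict(X_new)
print(f"{disagree.sum()} of {len(X_new)} decisions flagged for review")

# Caveat from the text: if both models inherit the same bias from the
# training data, agreement does not prove the decisions are fair.
```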
A new breed of professionals may emerge to provide algorithm audit
As AI systems become more sophisticated, new types of jobs will almost certainly be required. An Accenture survey recently proposed three likely new categories of human jobs: Trainers, Explainers and Sustainers. Trainers teach AI systems how to perform, process data and behave. Explainers improve the transparency of AI's inner workings. And Sustainers will do the audits, ensuring that AI is fair, safe and responsible.
Auditors would examine data sources, the choice of analytical tools, and the way in which results are interpreted, to ensure that, as far as possible, built-in biases are eliminated. They would probably need to be subject to professional regulation to guarantee their impartiality.
Even an algorithm audit may not be enough
Audits, like algorithms themselves, can have a ‘black box’ quality about them. If people don’t want to believe the outcomes, they won’t, however hard you try to persuade them that you have used all the correct procedures: witness the audit of the recent Kenyan elections.
Perhaps the best approach is a human–algorithm partnership
There are no easy answers to the audit of algorithms. But then, there are no easy answers to the audit of criminal justice cases either, and we have come up with an appeal system that, by and large, is agreed to work. It seems possible that, for the moment at least, using algorithms in partnership with a diverse team of human auditors may be the best option: speed coupled with sense. While AI can sort through enormous volumes of data to find problems quickly, it takes a human to understand them. The auditing team of the future will likely be a mix of machines and people working side by side.
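In code, the partnership can be as simple as a confidence-based triage rule. A minimal sketch, assuming a hypothetical threshold and toy data: the model handles the clear-cut cases, and anything it is unsure about goes to a human auditor:

```python
# Minimal sketch of a human-algorithm partnership: the model screens
# every case, and only low-confidence decisions are routed to human
# reviewers. The threshold and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X[:800], y[:800])

CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; tune to your risk appetite
proba = model.predict_proba(X[800:]).max(axis=1)
needs_human = proba < CONFIDENCE_THRESHOLD

print(f"machine handles {np.sum(~needs_human)} cases, "
      f"humans review {np.sum(needs_human)}")
```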
Download an IIA paper: Machine Humanity: How the Machine Learning of Today is Driving the Artificial Intelligence of Tomorrow