How do we practice trustworthy AI?


A key component of developing responsible AI is diversity, because an AI application reflects, and can even amplify, the biases of its developers. As I discussed in my previous post, a diverse team sees things from many different points of view and helps ensure that many different perspectives are reflected in the data.

What can we do besides making sure our teams are diverse? Without claiming to have the complete answer, I would like to share some thoughts.

Look into the data!

Define the problem and your goals. For example, when building a recruitment tool, are we trying to find the best possible job candidates, or are we trying to find people like the ones we already have? Make sure that the training data is representative of the population to which you are going to apply the model. 
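To make this concrete, here is a minimal sketch in Python of a representativeness check, comparing the group shares in the training data against the shares in the population you plan to score. The column name and reference shares are invented for the example; this is not a SAS feature.

```python
import pandas as pd

# Hypothetical sensitive attribute and reference population shares
# (e.g., from census figures) -- replace with your own.
SENSITIVE_COL = "age_group"
population_shares = pd.Series({"18-29": 0.25, "30-49": 0.40, "50+": 0.35})

# Toy training data, heavily skewed toward one group.
train = pd.DataFrame({SENSITIVE_COL: ["18-29"] * 70 + ["30-49"] * 25 + ["50+"] * 5})

train_shares = train[SENSITIVE_COL].value_counts(normalize=True)
comparison = pd.DataFrame({"train": train_shares, "population": population_shares})
comparison["gap"] = comparison["train"] - comparison["population"]

# Large gaps flag groups the model will rarely (or too often) see in training.
print(comparison.sort_values("gap"))
```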

Remove sensitive variables and proxies. At SAS we have tools throughout the AI life cycle that help you explore distributions and identify sensitive variables and their proxies.
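One simple, tool-agnostic screen for proxies is sketched below, with invented data and column names: after removing a sensitive variable, try to predict it from the remaining features. If that works well, proxies are still present, and the feature importances point to the likeliest culprits.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Invented data: 'gender' is the sensitive variable we removed from the model
# inputs; the question is whether the remaining features still leak it.
df = pd.DataFrame({
    "gender":      ["F", "M"] * 50,
    "income":      [30 + i % 40 for i in range(100)],
    "job_title_f": [1, 0] * 50,  # a feature suspiciously aligned with gender
})

X = df.drop(columns=["gender"])
y = df["gender"]

# If the features predict the sensitive attribute well, proxies are present.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("proxy leakage (accuracy):", cross_val_score(clf, X, y, cv=5).mean())

# Per-feature importances point to the likeliest proxy variables.
clf.fit(X, y)
print(pd.Series(clf.feature_importances_, index=X.columns).sort_values(ascending=False))
```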

Shine some light on the black box

Explain your models. To trust something, and to dare to take responsibility for it, we need to understand how it works. Many machine learning models are complex black boxes: a deep learning model can be defined by millions of parameters.

Model interpretability methods meet different needs for different users, such as regulators, executives or data scientists. Regulators need model interpretability to make sure the model makes predictions for the right reasons. For example, if an individual’s loan application is rejected, the loan agency needs to confirm that the decision does not violate any laws that protect certain groups of people. Executives need to understand black-box models so that they can logically justify the decisions they make. Data scientists need model interpretability to detect bias in the training data and, of course, to extract new knowledge from the data. They also need it to track bias and to debug models that produce wrong or unexpected predictions.

SAS Visual Data Mining and Machine Learning includes an interpretability toolkit to interpret models and individual model-based decisions in terms of the relative contribution of the variables that positively or negatively influenced the prediction. This is useful to expose bias both during model development and after deployment. 
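To illustrate the idea of per-decision contributions outside any particular product, here is a simplified sketch in Python: for a linear model, the contribution of each variable to an individual prediction can be read off as its coefficient times the variable’s deviation from the mean. The data and feature names are invented; model-agnostic methods such as SHAP and LIME generalize this idea to genuine black-box models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for an approval model: two invented features, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, the contribution of each variable to one decision is
# coefficient * (value - mean): positive pushes toward approval, negative
# pushes toward rejection.
x = X[0]
contributions = model.coef_[0] * (x - X.mean(axis=0))
for name, c in zip(["feature_a", "feature_b"], contributions):
    print(f"{name}: {c:+.3f}")
```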

Have a framework in place for monitoring and governance 

Monitor your models and their performance throughout the life cycle. Automation does not equal autonomy. Even though AI applications are self-learning systems, we cannot just leave them alone and expect them to behave. We need to continuously monitor the system over the whole life cycle, from data selection to the output or action on the other side, to make sure it performs as intended.
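In practice, monitoring can start with something as simple as comparing the distribution of model inputs or scores in production against the distribution at deployment time. Below is a minimal sketch in Python of the population stability index (PSI), a common drift measure; the threshold mentioned is a rule of thumb, not a SAS specification.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and live samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    actual = np.clip(actual, edges[0], edges[-1])  # keep live values in range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores captured at deployment
live = rng.normal(0.3, 1.1, 10_000)      # scores observed in production

print(f"PSI = {psi(baseline, live):.3f}")  # rule of thumb: > 0.25 calls for investigation
```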

The SAS Platform offers a governance framework that is vital for the responsible use of AI and for meeting responsible AI principles such as transparency and traceability, in terms of both the data and the models used in the decision process.

Keep humans in the loop

Many decision makers put too much faith in AI applications. A detailed human review of the system – we could call this person an AI auditor – is a good strategy for reducing risk. It is also important that users can provide meaningful feedback and override AI-based decisions – and, if necessary, even kill the process.
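One way to operationalize this is a confidence gate: predictions the model is unsure about are routed to a human reviewer, who can confirm or override them. A minimal sketch in Python follows; the threshold and the reviewer interface are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.75  # hypothetical: below this confidence, a human decides

@dataclass
class Decision:
    outcome: str       # e.g., "approve" or "reject"
    confidence: float
    decided_by: str    # "model" or "human"

def decide(model_outcome: str, confidence: float, human_review) -> Decision:
    if confidence < REVIEW_THRESHOLD:
        # Route to a human, who may confirm or override the model's suggestion.
        return Decision(human_review(model_outcome), confidence, "human")
    return Decision(model_outcome, confidence, "model")

# Example: the reviewer overrides a low-confidence rejection.
print(decide("reject", 0.62, human_review=lambda suggested: "approve"))
```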

Think about accountability

Take responsibility throughout the AI life cycle. If something still goes wrong despite heavy monitoring and auditing – what can we do then? Can we blame an algorithm? Of course not. When we invest in AI, we also have the responsibility to ensure that it is ethical and sustainable. We must ensure that someone is responsible for every step of the AI life cycle. From data to output. Before and after development, deployment and use.

Establish an AI policy

Many ethics guidelines have been released in recent years to guide us toward responsible AI. The most relevant in the Nordics are probably the EU ethics guidelines, on which forthcoming regulations will be based. Another set of principles I really like comes from the Future of Life Institute.

What does it take to deploy AI technology responsibly? How can we strike a balance between data-driven innovation and responsible AI requirements? Josefin Rosén and Olivier Penel will take a deep dive into these topics at the Gartner Data & Analytics Summit. Register here.
