How to address bias in AI?


Artificial intelligence is a powerful force for good and an extraordinary competitive advantage. But as Peter Parker's uncle Ben nicely put it, "With great power comes great responsibility." Or as my friend Pippi Longstocking, the Swedish storybook character, says, "If you are very strong you must also be very kind!" They are both onto something: We must use powerful techniques responsibly.

Bias in data leads to bias in output

All humans are biased by nature. By bias, I simply mean the prejudices and preconceived opinions we rely on, for instance, when we make decisions based on gut feeling.

In an AI setting, it is important to be aware of this since our biases are easily transferred to the AI applications we develop. Most AI applications today are based on machine learning algorithms that, by definition, learn from the data they are fed. This training data is often from an earlier process where humans annotated the output.

When it comes to AI, the student is indeed only as good as the teacher, meaning that AI tools reflect the biases of their developers. If discriminatory training data feeds the machine learning algorithm, then the output will be discriminatory as well. When training and developing AI applications, we need to see ourselves as teachers, the AI applications as our students and the data as their schoolbook. It is our responsibility as teachers to make sure they are taught the right things.

Biases transmitted to the algorithm are just the beginning of the potential risk. The bias will also most likely be greatly amplified. An AI application can make thousands of decisions every minute with the technology we have today, using enormous computational resources and distributed calculations. This means that if something goes wrong, it could quickly go very wrong.

The importance of diversity in data science

A key component of developing responsible AI is diversity. If we develop an application based on data related only to white, middle-aged men, for instance, then we are automatically tuning the application to work better for this group.

We often hear about gender imbalance, and indeed, we have a huge gender imbalance in AI today, just as in many other fields. But we need to consider more aspects of diversity. Gender is certainly not a binary variable, and neither women nor men form homogenous groups. We are all different. We also need to consider diversity in terms of age, ethnicity, skills, knowledge and background, for example.

It is important to create a diverse, inclusive environment in which AI is designed and deployed. A diverse team will see things from many different points of view and help reflect many different perspectives in the data. Team members will also be more likely to detect bias both in the data and in the output itself, simply because they naturally focus on different things.

Lack of diversity creates issues for the end user

A lack of diversity in data and development can result in issues for the end user. For instance, there are multiple examples where voice recognition performed worse for women than for men, and where facial recognition performed worse for women, especially Black women.
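
One practical way to surface such gaps is disaggregated evaluation: instead of reporting a single overall metric, measure performance separately for each group. Here is a minimal sketch in Python; the column names ("group", "label", "prediction") and the toy data are hypothetical.

import pandas as pd

def accuracy_by_group(df):
    # Accuracy per demographic group, so gaps between groups become visible.
    correct = df["prediction"] == df["label"]
    return correct.groupby(df["group"]).mean()

# Toy predictions where the (hypothetical) model does worse for group "B".
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 0, 0, 1, 1],
})
print(accuracy_by_group(results))  # A: 1.00, B: 0.50, a gap worth investigating

A single overall accuracy of 75% would hide the fact that one group is served twice as badly as the other.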

I read recently about an Irish woman who failed a spoken English test while trying to immigrate to Australia. The woman was a highly educated native speaker of English, but she failed the test because the speech recognition system was simply not trained on her accent.

To understand this, imagine that you work in an HR department and want to hire an engineer using an AI recruitment tool. It is in our human nature to judge a person based on how he or she looks or appears, and machine learning algorithms are always trained on historical data. Historically, men have held most of these positions. Therefore, if you do not balance your data and carefully consider your variables, the tool will likely prefer male candidates over female candidates as successful applicants.
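
To make that concrete, here is a minimal sketch in Python of one way to expose and counteract such an imbalance before training. The dataset and its columns ("gender", "hired") are hypothetical.

import pandas as pd

# Hypothetical historical hiring data: mostly male applicants,
# and a lower historical hire rate for women.
history = pd.DataFrame({
    "gender": ["M"] * 80 + ["F"] * 20,
    "hired":  [1] * 40 + [0] * 40 + [1] * 5 + [0] * 15,
})

# 1. Make the imbalance visible before training anything.
print(history.groupby("gender")["hired"].agg(["count", "mean"]))

# 2. One simple mitigation: give each gender group equal total weight, then pass
#    these weights to the training step (many libraries accept a sample_weight
#    argument for exactly this purpose).
group_sizes = history["gender"].value_counts()
history["weight"] = history["gender"].map(lambda g: len(history) / (2 * group_sizes[g]))

Reweighting is only one option, and it does not remove bias by itself, but making the imbalance explicit is the first step.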

Fixing the Amazon recruitment tool

A good example of how tricky this can be is the Amazon AI recruitment tool. The company retired the tool a while ago because of its bias against women. For Amazon, it turned out that the bias was not a trivial thing to fix. Amazon removed the gender variable, of course, but that did not solve the problem.

The AI still predicted gender from variables you would not consider gender-specific in the first place, and that also caused a problem. The algorithm could also pick up patterns in unstructured data, like the résumé text, where specific vocabulary is used more frequently by men. In the Amazon case, men used verbs like "execute" more frequently. The algorithm only did its job: It identified candidates that matched the historical data about whom the company had preferred. The problem was in the training data.
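
One way to check for this effect is to test whether the remaining features can still predict the attribute you removed. Below is a minimal sketch in Python using scikit-learn; the feature matrix X and the withheld gender labels are hypothetical inputs you would supply from your own data.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_score(X, gender):
    # Cross-validated accuracy of predicting the removed gender attribute
    # from the remaining features. A score well above the majority-class
    # baseline signals that proxy variables are still present.
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, gender, cv=5).mean()

If the score is high, dropping the gender column did not remove the information; it only hid it behind proxies.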

What does it take to deploy AI technology responsibly? How can we strike a balance between data-driven innovation and responsible AI requirements? Josefin Rosén and Olivier Penel will take a deep dive into these topics at the Gartner Data & Analytics Summit. Register here.


About Author

Josefin Rosén

Principal Advisor Analytics

Curious analytics expert with a passion for unlocking hidden insights from all kinds of data. On a daily basis, I help organizations from diverse industries and fields create value from their big data and drive strategic business through advanced analytics.

