Trustworthy AI: What it is and why we need it


In your hotel room, you said “Hey Genie, play pop music,” and a smart device played the music you wanted. When entering the conference center in Daegu, every visitor was automatically checked for fever: You just looked at a display on a table, a box framed your face and immediately showed your temperature. The many pictures you took at the conference were sorted by an algorithm, which clustered pictures of the same people into one dedicated folder.

All three examples show how deeply integrated artificial intelligence technologies already are in our lives. Should we worry about biases or “bad quality” in these applications? The smart speaker played music from a band called “Chess” when I asked to play “jazz.” This might be funny – but think of automated decision systems based on AI that decide whether to give a loan, provide health care services or offer a job, all of which are real-world applications already.


Our future

AI technology will shape our future even more, so we need to be able to trust it. Would you get into an autonomous car that operates entirely without your command and control? Up to what speed? Would you trust a group of graduates pasting code snippets into their application that they Googled just a minute before?

Would you trust an algorithm to decide whether a potential cancer lesion on your liver needs treatment? Or would you prefer to have a final review by a human expert? And can you generally trust models built on training data from the past? The simple answer: You can’t. There is no such thing as unbiased data.
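You can make this concrete even before a model is trained. Here is a minimal sketch (with made-up numbers and pandas as an assumed dependency, not data from any real system): compare outcome rates across groups in the historical decisions a model would learn from.

```python
# Minimal sketch (made-up data): historical decision labels often
# encode past discrimination, and a model trained on them learns it.
import pandas as pd

past_decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Outcome rate per group: a large gap is a first warning sign.
rates = past_decisions.groupby("group")["approved"].mean()
print(rates)                      # group A: 0.75, group B: 0.25
print(rates.max() - rates.min())  # demographic-parity gap: 0.5
```

A model fitted to these labels will happily reproduce that 50-point gap, because to the algorithm the gap simply looks like a pattern worth learning.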

An example

In September 2020, a data scientist ran an experiment with the Twitter algorithm that crops bigger pictures for preview. As you probably know, Twitter aims to focus the preview on the most relevant part of the picture. For instance: A photograph of a protester holding a poster with a slogan will probably be cropped to the slogan in the preview.

This experiment was different: If you forced Twitter to decide between a famous white politician (in this case Mitch McConnell, at that time the US Senate Majority Leader) and the most prominent Black politician (Barack Obama), it turned out that the definition of “most relevant” was not determined by the color of the tie or anything like that. It was determined by the color of the skin! Despite testing the algorithm beforehand, Twitter had to admit that it had not paid enough attention to racial bias. The training data set, labeled for relevancy, obviously contained many more “relevant” (i.e., important, well-known, often pictured) white men and far fewer Black men.

The bias in the training data was such that not even a former US president could compete with a notion of relevancy built up by centuries of racial discrimination. A similar effect exists with gender: When a language does not mark the gender of the person described, the Google Translate algorithm tends to fill the gap with stereotypes. Turkish, for example, makes no grammatical distinction between male and female. The sentence “o bir doktor” can therefore be translated as “He is a doctor” OR “She is a doctor.” Both versions are correct, but guess which one Google Translate suggested before they changed this?

Acceptance

We have to accept that there is bias in the data, and therefore AI-based decisions need as much transparency as possible, plus regulation where necessary. Modern machine learning frameworks therefore include technical hints about the driving factors behind a decision, e.g., pointing out that the relevancy tag was placed because of skin color. The model is then still biased, but it can be reworked and improved much more easily.
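To illustrate what such a technical hint can look like, here is a minimal sketch using the open-source shap package on a toy model (an assumption for illustration, not the framework behind any of the systems mentioned above):

```python
# Minimal sketch: per-decision feature attributions with SHAP.
# Assumes the open-source `shap` and `scikit-learn` packages;
# the data and model are toy stand-ins, not a real decision system.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 4)        # four anonymous input features
y = 3 * X[:, 0] + X[:, 1]   # by construction, feature 0 dominates

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the inputs,
# so a problematic driver (say, a proxy for skin color) becomes visible.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values[0])  # per-feature contributions for the first decision
```

In this toy setup the attribution for feature 0 dominates by construction; in a real relevancy model, the same readout would reveal whether skin color, or a proxy for it, is what the model actually reacts to.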

The regulation part is taking off at all levels: global, regional, national and domain-specific. It might be pointless to certify a voice-command algorithm for fairness. But when it comes to more serious decisions, it would be hugely beneficial if trusted authorities closely examined the algorithm. The big tech companies fear losing degrees of freedom through regulation. On the other hand, a fixed and well-defined framework governs a market and helps shape it by improving trust in AI.

Tech experts can't be the only ones discussing technologies that already affect our everyday lives so much. As a society, we should actively involve a diverse, non-technical audience in the discussion about the proper use of AI.

Read more in the Primer E-Book on AI & Ethics.


About Author

Thomas Keil

Director Marketing

Dr. Thomas Keil is a specialist in the impact of technology on business models and on society in general. He covers topics like digital transformation, big data, and artificial intelligence & ethics. Besides his work as Regional Marketing Director at SAS in Germany, Austria and Switzerland, he is regularly invited to conferences, workshops and seminars. He serves as an advisor to B2B marketing magazines and on program committees of AI-related conferences. Dr. Thomas Keil joined SAS in 2011. Previously, he worked for eight years for the software vendor zetVisions, most recently as Head of Marketing and Head of Channel Sales.

1 Comment

  1. Great content, thanks for sharing. Another thing I'd mention here is legal reasons: in cases where the algorithm makes a harmful decision, there may be legal issues. For example, it can be quite costly for businesses that are caught not fulfilling the obligations of the GDPR. The section related to XAI is Article 22 of the GDPR, often referred to as the Right to Explanation.
