Interpretability, traceability and clarity - the other AI mandate


Richard David Precht is probably writing a book on Artificial Intelligence as we speak - and it would not be a bad idea if people were able to find out about this subject without having to be specialists themselves.

Worries around AI

Andreas Gödde

It became clear as early as 2017 that AI will affect us all. Whether you believe, like Stephen Hawking, that it may prove to be humanity's last invention, the one that abolishes us, or, as Vladimir Putin has suggested, that whoever leads in AI will dominate the world, there is no question that things are changing, and we should all be trying to find out more. I spoke to Andreas Gödde, who is responsible for presales at SAS in the DACH region (Germany, Austria and Switzerland).

Andreas, why is everyone so worried about AI?

Whenever people are confronted with big changes, they are always going to be worried. Something known is being replaced by something unknown. The way it is being reported is also designed to scare. The term “artificial intelligence” already suggests that a property previously attributed to people — intelligence — can now also be provided by a machine. However, there are already a number of examples of where and how AI has invaded our everyday lives.

Could you give us some of these examples?

Well, there is one example that is fairly current: the new fares that Lufthansa wants to introduce in Germany for domestic flights. The company is claiming that these fare changes are not due to a changed pricing policy, but to the underlying algorithms that calculate the prices and distribute them automatically. Of course, these are very clever algorithms, but ultimately the company remains responsible. It is just more convenient for them to hide behind the algorithms.

Or we could talk about the challenges of getting credit for purchases. A few years ago, your local bank manager would have decided if you were creditworthy. Today, those same managers can only shrug their shoulders and explain that they do not have the power to decide on a loan, because it is decided by a computer at headquarters.

We could also cite the situation with payment options in e-commerce. If you live, say, in South Central, Los Angeles, you are much more likely to be required to prepay before delivery. In Beverly Hills, however, you will probably be able to get delivery on account. This is down to an algorithm working out what is safest for the company.

Why do consumers have trouble accepting these cases?

In my opinion, there are usually three main reasons: the results or decisions are not fully explained; the path to the conclusion is not transparent and comprehensible; and there is a sense of being cheated in some way, because of an asymmetry between supplier and customer. These are not entirely new insights, and all three points are already being addressed in many ways today, at least in banking. The question is whether they are being addressed enough.

Please will you explain that in more detail?

Of course. Banks are now one of the most regulated economic sectors. Consumers have been given extensive rights to information. There are standardized information sheets, advisory requirements, documentation of consultations – if you want to know more, just ask a bank adviser how the job has changed in the last few years. However, this still does not seem to be enough to ensure that people really understand what is going on.

The second issue is covered by banking regulators. They monitor the use of analytical results, for example in the area of risk, and the analytical methods used have to meet certain criteria. The keyword here is "Model Risk Management".

There is still some catching up to do on the third point: adequate communication on equal terms. Technical language does not make it easy for people to understand why exactly a decision turned out the way it did. It is, of course, hard to explain how a machine learning algorithm or a neural network works! But it is important to try, because that is what builds trust.

That is the situation with banks. Do we need a regulator for all industries that use analytics?

I think we should at least think about it — and whether it needs to be a government agency. It would be helpful for everyone if there were trustworthy explanations. If, for example, product and service certification could show that the algorithms used operate to ethical standards, or that characteristics such as ethnic origin, gender or religious affiliation are not used in a discriminatory way.

Initiatives such as "Fairness, Accountability, and Transparency in Machine Learning" or AlgorithmWatch go in this direction. Both are about creating standards and opening up the discussion, because we would have a serious problem if the debate around algorithms remained purely technological. In an analytics economy, where so much already depends on algorithms, we cannot afford that.

What can we as software manufacturers do?

I think there are three main areas where we can take action:

  1. Interpretability. Our software has to work in such a way that results can be easily presented and understood by departments and decision makers, not just by specialists.
  2. Traceability. We must ensure that the complex processes in data science — from data processing through modeling to deployment in production — can be documented in a way that is easily understood.
  3. Communication. By promoting basic analytical knowledge and making our software attractive to different application groups, it is much easier to talk about it. Data scientists have a responsibility to actively discuss the opportunities and risks of their work.
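The first two points can be sketched in code. The following is a deliberately simplified, hypothetical example — not any real SAS scoring model; all rules, thresholds and weights are made up — of a credit decision that logs every rule it applies, so the outcome can be explained to a customer in plain language (interpretability) and the path to the decision can be audited (traceability):

```python
# Hypothetical, simplified credit-decision sketch. Every rule that fires is
# recorded in a trace, so both the decision and the reasoning behind it can
# be shown to a customer or an auditor. Thresholds and weights are invented.

def score_application(income, existing_debt, years_at_address):
    """Return (decision, trace): the decision plus a human-readable audit trail."""
    score = 0
    trace = []  # traceability: one entry per rule applied

    if income >= 30000:
        score += 2
        trace.append(f"income {income} is at least 30000: +2")
    else:
        trace.append(f"income {income} is below 30000: +0")

    if existing_debt > income * 0.5:
        score -= 3
        trace.append(f"debt {existing_debt} exceeds half of income: -3")

    if years_at_address >= 2:
        score += 1
        trace.append(f"{years_at_address} years at current address: +1")

    decision = "approve" if score >= 2 else "decline"
    trace.append(f"total score {score} -> {decision}")
    return decision, trace


decision, trace = score_application(income=40000, existing_debt=5000,
                                    years_at_address=3)
for line in trace:
    print(line)
```

A real model is far more complex, but the principle scales: whatever the method, the system should be able to emit both the result and an account of how it got there.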

What do you think? What is important to you with regard to artificial intelligence?

Fairness, accountability and trust in AI are things we take seriously, and we believe dialogue is the best way to make progress. Will you join us in our next discussion? We will be on Twitter, using the #saschat hashtag, on Friday, 23rd March, kicking off at 4pm CET, 3pm UK, 11am ET. Here are some of the questions we will be using to drive the discussion:

  1. What are the similarities and differences between AI and other technologies in relation to adoption fear?
  2. How could AI be regulated?
  3. What is the responsibility of companies using AI?
  4. What could software vendors do?
  5. How can education support more responsible AI deployment?


This interview was originally published in German on the SAS blog Mehr Wissen.


About Author

Thomas Keil

Senior Manager Marketing

Dr. Thomas Keil is a specialist in big data, information management and business analytics. Alongside his work as Field Marketing Manager at SAS in Germany, Austria and Switzerland, he has served as a board member of the BITKOM Big Data working group, represents his topics at TDWI and KSFE events, and is regularly invited to speak at big data workshops and events on digitalization. Dr. Keil joined SAS in 2011. Before that, he worked for eight years for the software vendor zetVisions, most recently as Head of Marketing and Head of Channel Sales.
