Richard David Precht is probably writing a book on Artificial Intelligence as we speak - and it would not be a bad idea if people were able to find out about this subject without having to be specialists themselves.
It had already become clear by 2017 that AI will affect us all. Whether you believe, like Stephen Hawking, that it could be humanity's last invention, the one that abolishes us, or, as Vladimir Putin has suggested, that whoever leads in it will rule the world, there is no question that things are changing, and we should all be trying to find out more. I spoke to Andreas Gödde at SAS, who is responsible for presales in the DACH region.
Andreas, why is everyone so worried about AI?
Whenever people are confronted with big changes, they are always going to be worried. Something known is being replaced by something unknown. The way it is being reported is also designed to scare. The term “artificial intelligence” already suggests that a property previously attributed to people — intelligence — can now also be provided by a machine. However, there are already a number of examples of where and how AI has invaded our everyday lives.
Could you give us some of these examples?
Well, there is one example that is fairly current: the new fares that Lufthansa wants to introduce in Germany for domestic flights. The company is claiming that these fare changes are not due to a changed pricing policy, but to the underlying algorithms that calculate the prices and distribute them automatically. Of course, these are very clever algorithms, but ultimately the company remains responsible. It is just more convenient for them to hide behind the algorithms.
Or we could talk about the challenges of getting credit for purchases. A few years ago, your local bank manager would have decided if you were creditworthy. Today, those same managers can only shrug their shoulders and explain that they do not have the power to decide on a loan, because it is decided by a computer at headquarters.
We could also cite the situation with payment options in e-commerce. If you live, say, in South Central, Los Angeles, you are much more likely to be required to prepay before delivery. In Beverly Hills, however, you will probably be able to get delivery on account. This is down to an algorithm working out what is safest for the company.
Why do consumers have trouble accepting these cases?
In my opinion, there are usually three main reasons: the results or decisions are not fully explained; the reasoning behind them is not transparent and comprehensible; and there is a sense of being cheated in some way, because of an asymmetry between supplier and customer. These are not entirely new insights, and all three points are already being addressed in many ways today, at least in banking. The question is whether they are being addressed enough.
Could you explain that in more detail?
Of course. Banks are now one of the most regulated economic sectors. Consumers have been given extensive rights to information. There are standardized information sheets, advisory requirements, documentation of consultations – if you want to know more, just ask a bank adviser how the job has changed in the last few years. However, this still does not seem to be enough to ensure that people really understand what is going on.
The second issue is covered by banking regulators. They monitor the use of analytical results, for example in the area of risk, and the analytical methods used have to meet certain criteria. The keyword here is “Model Risk Management”.
There is still some catching up to do on the third point, adequate and equal communication. Using technical language does not make it easy for people to understand why exactly something happened. It is, of course, hard to explain how a machine learning algorithm or a neural network works! But it is important to try, because this is what builds trust.
That is the situation with banks. Do we need a regulator for all industries that use analytics?
I think we should at least think about it — and whether it needs to be a government agency. It would be helpful for everyone if there were trustworthy explanations. If, for example, product and service certification could show that the algorithms used operate to ethical standards, or that characteristics such as ethnic origin, gender or religious affiliation are not used in a discriminatory way.
Initiatives such as “Fairness, Accountability, and Transparency in Machine Learning” and Algorithm Watch go in this direction. Both aim to create standards and to open up the debate, because we would have a serious problem if the discussion remained purely technological. In our analytics economy, where so much already depends on algorithms, we cannot afford that.
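To make the certification idea above a little more concrete: one of the simplest fairness checks such an audit could run is to compare outcome rates across groups (often called demographic parity). The following is a minimal sketch under assumed data, not any certification body's actual method; the group names and decisions are invented for illustration.

```python
# Hypothetical sketch of a demographic-parity check:
# compare approval rates across groups and report the largest gap.

def approval_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Difference between the highest and lowest group approval rates."""
    rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Invented example data: 1 = approved, 0 = declined.
decisions = {
    "group_a": [1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1],  # 50% approved
}

print(f"approval-rate gap: {parity_gap(decisions):.2f}")  # 0.25
```

A real audit would of course look at more than one metric, but even a gap like this gives a certifier something concrete to question.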
What can we as software manufacturers do?
I think there are three main areas where we can take action:
- Interpretability. Our software has to work in such a way that results can be easily presented and understood by departments and decision makers, not just by specialists.
- Traceability. We must ensure that the complex processes in data science — from data processing through modeling to deployment in production — can be documented in a way that is easily understood.
- Communication. By promoting basic analytical knowledge and making our software attractive to different application groups, it is much easier to talk about it. Data scientists have a responsibility to actively discuss the opportunities and risks of their work.
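One common way to serve the interpretability point above is to return "reason codes" with every automated decision, so a department can see which factors drove a score. This is a hypothetical sketch with invented feature names and weights, not SAS's actual scoring logic.

```python
# Hypothetical sketch: a linear score that also reports the factors
# that contributed most to the result ("reason codes").

WEIGHTS = {"income": 0.5, "years_at_address": 0.2, "missed_payments": -0.8}

def score_with_reasons(applicant, top_n=2):
    """Return a score plus the top_n features with the largest impact."""
    contributions = {f: w * applicant.get(f, 0) for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=lambda f: abs(contributions[f]),
                     reverse=True)[:top_n]
    return score, reasons

score, reasons = score_with_reasons(
    {"income": 3, "years_at_address": 5, "missed_payments": 1}
)
print(score, reasons)
```

Even this toy version shows the idea: instead of a bare number from "a computer at headquarters", the decision arrives with the two factors that mattered most, which is something a customer-facing employee can actually talk about.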
We hosted a digital panel discussion on Twitter covering this theme. Read the #saschat summary here: Why are people worried about Artificial Intelligence?
This interview was originally published in German on the SAS blog Mehr Wissen.