Volkswagen: AI and ethics, a question of hardware (AI #37)


We are used to looking for the connection between artificial intelligence and ethics in the algorithms, and that makes perfect sense. Like Pavlovian dogs, we are conditioned to do so. But is it sufficient? It is only through fast hardware and the increasing availability of data that we have come to use AI methods at all. In doing so, we have inevitably slipped into ethical discussions about scenarios that previously were only theoretically possible. Like the astrophysicist who has nothing but theoretical physics with which to prove the constellations.

Patrick van der Smagt, Director of the Volkswagen Group's Machine Learning Research Lab

On the terrain of AI, since the early 2000s we have been handed the space capsule, in the form of hardware capabilities, that the astrophysicist still lacks to photograph black holes up close. What AI experts have been able to do in theory for almost 30 years can therefore seem dramatic and new to the uninitiated. Patrick van der Smagt, Director of the Volkswagen Group's Machine Learning Research Lab, like everyone in his guild, sees no fundamental difference from the theoretical possibilities of the 1990s. "We now have performant hardware that allows the algorithms to live in the first place, and that also puts the ethical questions on the agenda. Without such hardware, the algorithms don't grow beyond game-like applications."

Practice shows that ethics begins when software possibilities are expressed in hardware. Examples from Volkswagen's Machine Learning Research Lab include self-learning drones and navigation based on camera data, as well as neural networks playing a live drum jam session. It is a broad field of research that the Volkswagen Group supports with an internal, interdisciplinary AI ethics council. "Bringing together different disciplines and their respective perspectives on AI is very important," says van der Smagt.

Formula E: VW and Audi research together

And how is VW combining hardware with software right now? In other words, what is the Group currently researching in its Machine Learning Research Lab? "A classic machine learning topic, for example, is the modelling of battery behaviour," says van der Smagt. The goal is to better understand and predict the ageing process of batteries. The processes are very complex, he says, and the only information available comes from data. "Our methods can model and predict such processes from data."
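The article does not describe the lab's models in any detail, but a minimal sketch can illustrate what "modelling battery ageing from data" might look like in its simplest form: fitting an empirical capacity-fade curve to cycling data and using it to predict remaining capacity. The cycle counts, capacities and the power-law form below are illustrative assumptions, not Volkswagen's method, which relies on far more sophisticated probabilistic models.

```python
# Minimal sketch: fit an empirical capacity-fade model to cycling data.
# All numbers and the model form are illustrative assumptions, not VW's method.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: charge cycles vs. remaining capacity (fraction of nominal)
cycles = np.array([0, 100, 200, 400, 600, 800, 1000])
capacity = np.array([1.00, 0.985, 0.972, 0.950, 0.931, 0.914, 0.899])

def fade_model(n, a, b):
    """Simple power-law ageing law: capacity = 1 - a * n**b."""
    return 1.0 - a * n**b

# Bounds keep the exponent positive so the fit stays physically sensible.
params, _ = curve_fit(fade_model, cycles, capacity,
                      p0=[1e-3, 0.5], bounds=([0.0, 0.1], [1.0, 1.0]))
a, b = params

# Predict how much capacity is left after 1500 cycles under the fitted law.
print(f"fitted a={a:.4g}, b={b:.3f}")
print(f"predicted capacity at 1500 cycles: {fade_model(1500, a, b):.3f}")
```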

Another research topic is predicting driver behaviour. To this end, the lab's experts teamed up with their colleagues at Audi. The focus was on the question of how a Formula E driver has to accelerate and brake so that the battery still has sufficient remaining capacity at the end of the race. The battery effectively dictates the race. The insights gained can also be transferred to everyday driving.
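How such a battery constraint shapes driving strategy can be illustrated with a toy calculation. The energy figures, lap count and linear consumption model below are made-up assumptions for illustration only; they are not the Audi/Volkswagen race model.

```python
# Minimal sketch: check whether a chosen power profile leaves enough battery at the finish.
# Numbers and the linear energy model are illustrative assumptions.
BATTERY_KWH = 52.0     # usable energy at race start (assumed)
LAPS = 45              # race length in laps (assumed)
RESERVE_KWH = 2.0      # required remaining capacity at the finish (assumed)

def lap_energy(power_fraction: float, regen_fraction: float) -> float:
    """Very rough per-lap energy balance in kWh: drive consumption minus recuperation."""
    drive = 1.4 * power_fraction        # consumption grows with how hard the driver accelerates
    recovered = 0.35 * regen_fraction   # energy recovered while braking
    return drive - recovered

def remaining_after_race(power_fraction: float, regen_fraction: float) -> float:
    return BATTERY_KWH - LAPS * lap_energy(power_fraction, regen_fraction)

for power in (1.0, 0.9, 0.8):
    left = remaining_after_race(power, regen_fraction=0.8)
    verdict = "ok" if left >= RESERVE_KWH else "too aggressive"
    print(f"power {power:.0%}: {left:5.1f} kWh left -> {verdict}")
```

Under these made-up numbers, driving flat out for the whole race leaves too little reserve, which is exactly the trade-off described above: the battery dictates how hard the driver can push.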

Mere expressions of will are not enough

Navigation in unknown environments is another area of research at Volkswagen. "Vehicles continuously observe their surroundings via their camera sensors. However, they don't always have the corresponding map," van der Smagt explains. Volkswagen has developed methods to create a map of the environment from camera images, he says. The algorithm can be used in parking garages, for example, when looking for a parking space.
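The article does not say how these maps are built. As a rough illustration of the general idea of turning observations into a map, here is a minimal occupancy-grid sketch. The grid size, resolution and the assumption that obstacle points have already been extracted from the camera images are hypothetical simplifications, not Volkswagen's algorithm.

```python
# Minimal sketch: accumulate an occupancy-grid "map" from obstacle points seen by a camera.
# Grid size, resolution and pre-detected obstacle points are illustrative assumptions.
import numpy as np

RESOLUTION = 0.25                       # metres per grid cell
grid = np.zeros((80, 80), dtype=int)    # 20 m x 20 m occupancy counts around the vehicle

def integrate_frame(obstacle_points_xy):
    """Mark grid cells as occupied for every obstacle point detected in one camera frame."""
    for x, y in obstacle_points_xy:
        col = int(x / RESOLUTION) + grid.shape[1] // 2   # vehicle sits at the grid centre
        row = int(y / RESOLUTION) + grid.shape[0] // 2
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] += 1

# Two hypothetical frames: pillars and parked cars detected left of and ahead of the vehicle.
integrate_frame([(-3.0, 2.0), (-3.0, 2.2), (4.5, 1.0)])
integrate_frame([(-3.0, 2.1), (4.5, 1.1)])

print(f"occupied cells: {np.count_nonzero(grid)}, free cells: {np.count_nonzero(grid == 0)}")
```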

But is it even possible to develop such algorithms according to ethical principles? "Yes, that is certainly possible. However, mere expressions of will are not enough. That's why we developed the etami initiative, which places a special focus on trust-based AI. The goal is to derive sustainable benefits for society from AI."

For van der Smagt, ethics and AI come down to a question of knowledge: "We have to get to the point where the compatibility of AI and ethics becomes provable." For this purpose, Volkswagen has initiated etami (ethical and trustworthy artificial and machine intelligence) together with other representatives from industry and research. Companies such as Siemens, ABB and Deutsche Bahn are also involved. "We want to take a pioneering role and be a role model in the sustainable use of AI."

Ethics monitoring

Is there a need for ethics monitoring? "I wouldn't call it monitoring, but it is important to remain conscious and comprehensible when using data and drawing conclusions from it. You have to understand the technology behind it precisely in order to assess the intent behind a decision. That's the linchpin."

But not everyone harbours the best intentions, Mr van der Smagt. "From my point of view, there needs to be a set of rules with self-enforcing verification." That is why Professor van der Smagt is also engaging with industry to reconcile ethics and AI-based applications.

The interplay of hardware and software within which AI lives is endlessly varied and diverse. As the hardware gets better, so will the AI applications. Still, the horror scenarios remain assessable, and they will remain so for decades to come. Biotechnology may well have already brought more threatening scenarios into application than AI methods have.

Patrick van der Smagt received his PhD from the University of Amsterdam for work on neural networks in robotics. He is the Director of the Volkswagen Group's Machine Learning Research Lab, which focuses on probabilistic neural networks, time series modelling and optimal control. His awards include the 2013 Erwin Schrödinger Prize of the Helmholtz Association, the 2014 King-Sun Fu Memorial Prize, and the 2013 Harvard Medical School/MGH Martin Research Prize.

Tags: AI Series

About the Author

Andrea Deinert

Journalist // Blogger for AI and data science // Data science community liaison // Academic liaison || Portrays opinion leaders from politics, society and research to reveal what AI and ethics mean for the society of the future.
