Will your car decide to kill you?


Download a SAS Best Practices eBook: The Machine Learning Primer

A self-driving car made headlines last summer when its human ‘driver’ was killed in an accident. Both car and driver had failed to spot a lorry across the road ahead, because the lorry’s white side was too close in colour to the sky.

A mistake by both driver and car, with fatal consequences. But what if the human driver had seen the lorry, but could only avoid it by swerving into another car, or a pedestrian? Would it then have been ‘right’ for the car to override the driver and allow the crash into the lorry, potentially saving other ‘innocent’ lives?

I took a survey on these kinds of questions at MIT’s Moral Machine, and it certainly got me thinking. These are real issues in the design of algorithms. And as algorithms become ever more ubiquitous in our lives, these questions will only grow in importance.

Difficult questions

The ‘trolley car’ dilemma (would you push the fat man off the bridge to stop the runaway trolley car, and save three other people?) has been around for a while, and it is by no means the only ethical dilemma out there. Such dilemmas boil down to a question of what is right. But, of course, what I consider to be right may not match your views.

The trolley car dilemma is interesting because it balances one death against three, and murder against ‘standing by’. It asks whether you would take deliberate action to kill one person to save others, or stand by and watch those others die. The real challenge, of course, is that ethical dilemmas are rarely so clear-cut in real life. Perhaps by jumping off the bridge yourself, you could stop the trolley? Or maybe you could shout a warning to the other people?

The ethics of algorithm design is not a new topic. Isaac Asimov anticipated the debate in the 1940s with his ‘Three Laws of Robotics’. The idea of ‘artificial psychology’ was formally put forward in the early 1960s. Until recently, however, this was very much a theoretical issue. But artificial intelligence is now developing so quickly that we can see the point at which it becomes more than theoretical. Within our lifetimes, perhaps much sooner, cars could be designed to predict the likely outcomes of an accident and act to minimise the loss of life.
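To see what ‘minimising the loss of life’ might mean in practice, here is a deliberately toy sketch in Python. Everything in it, the manoeuvre names, probabilities and casualty counts, is invented for illustration and reflects no real driving system: the car scores each candidate manoeuvre by its expected casualties and picks the lowest.

```python
# Toy illustration only: the manoeuvres, probabilities and casualty
# counts below are invented, not drawn from any real driving system.

def expected_casualties(outcomes):
    """outcomes: list of (probability, lives_lost) pairs for one manoeuvre."""
    return sum(p * lives for p, lives in outcomes)

# Each candidate manoeuvre maps to its predicted outcome distribution.
manoeuvres = {
    "brake_hard":  [(0.7, 0), (0.3, 1)],  # 30% chance of one casualty
    "swerve_left": [(0.9, 0), (0.1, 2)],  # 10% chance of two casualties
    "do_nothing":  [(1.0, 1)],            # one casualty for certain
}

best = min(manoeuvres, key=lambda m: expected_casualties(manoeuvres[m]))
print(best)  # -> swerve_left (expected loss 0.2, the lowest)
```

Note what the arithmetic hides: ‘swerve_left’ wins only because two possible deaths at 10% score lower than one death at 30%, and whose deaths they are does not enter the calculation at all.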

Research shows that people generally favour outcomes that reduce the loss of life. They also agree that this might mean sacrificing the ‘driver’ of the self-driving car, but only until they are asked whether they would buy such a car themselves. In other words, almost nobody would be prepared to buy a car programmed to make a decision that would kill them in order to save other lives.

The role of society

So much for personal choice. Perhaps we are all fundamentally selfish when it comes down to it. But what about societal choice? Iyad Rahwan at MIT has coined the term ‘Society in the Loop’ artificial intelligence, or SITL AI. This, he suggests, would enable algorithms to be programmed according to a societal contract: the greatest good of the greatest number, perhaps, or Google’s simpler maxim of ‘don’t be evil’.

In other words, society could decide that cars are dangerous machines and should all be programmed to avoid pedestrians at all costs, even if that means driving head-first into a brick wall and killing the driver. Buyers would have no way to choose a car that would not sacrifice them, because every car would carry the same programming. The trade-off for drivers would then be the comfort and speed of the journey against the knowledge that the car was programmed to ‘sacrifice’ them if necessary.
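If society wrote that rule into law, the code itself could be almost trivial. Here is a minimal, hypothetical sketch (again in Python, with all names and numbers invented): pedestrian harm is ranked strictly above occupant harm, so the car ‘sacrifices’ its occupant whenever that spares a pedestrian.

```python
# Hypothetical sketch only: the Outcome fields and manoeuvre names are
# invented for illustration and reflect no real vehicle's software.
from dataclasses import dataclass

@dataclass
class Outcome:
    manoeuvre: str           # e.g. "brake", "swerve_into_wall"
    pedestrians_harmed: int  # predicted pedestrian casualties
    occupants_harmed: int    # predicted occupant casualties

def choose_manoeuvre(options):
    """Apply a 'pedestrians at all costs' societal rule.

    Fewer pedestrians harmed always wins; occupant harm only breaks ties.
    """
    return min(options, key=lambda o: (o.pedestrians_harmed, o.occupants_harmed))

options = [
    Outcome("brake", pedestrians_harmed=1, occupants_harmed=0),
    Outcome("swerve_into_wall", pedestrians_harmed=0, occupants_harmed=1),
]
print(choose_manoeuvre(options).manoeuvre)  # -> swerve_into_wall
```

The ethics lives in the ordering, not in the code: swap the two fields in that sort key and the same car protects its occupant instead.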

This, again, sounds simple. But perhaps the biggest challenge would be defining a societal ethical framework around which everyone could unite. History is not encouraging. Even if everyone agrees that killing is bad, there are shades of grey. How would we define ‘the greatest good’? What if the driver were an eminent scientist working on a vaccine programme of huge benefit to humanity as a whole, and the pedestrian a convict on the run?

The longer view

At the moment, the answer is simple: the car hands back control to the human driver at the crucial moment. My question is, will this ever change? I’d love to hear what you think.


