The ethics of machine intelligence


How comfortable are you with hard decisions? If it affected you, how comfortable would you be with losing your agency and having someone else make the decision for you? What if that decision isn’t made by a person but by a machine?

These are more than abstract questions; they’re going to become as divisive a social issue as genetic manipulation. In many ways, they strike at the heart of what we believe makes us human.

Let’s play things forward a bit; sometime in the near future, a car’s travelling from Chicago to New York. It’s mid-afternoon, and somewhere among the highways and byways of the interstate, the driver sets the vehicle to assisted control. While she sleeps, the car shadows a semi-trailer, keeping a safe distance.

Cutting through Toledo, the truck blows a tire. As it tips and starts to roll, the assisted safety systems in the car activate. Continuing forward will almost certainly paralyse or kill the driver in a collision. Swerving left will save the driver but send the car into a crowd of people. Swerving right will take the car into a playground full of children. What’s the right thing to do?

Or, consider this. An autonomous agent has been sent into a burning building on behalf of an emergency response team, searching for survivors. Inside, it finds two people. One is a pregnant woman suffering severe and rapid blood loss. The other is her husband, trapped under a beam but otherwise largely uninjured. As the building collapses, there’s time to save only one. Who do you pick?

Over the last few weeks, AlphaGo, DeepMind’s deep learning engine, has beaten a top professional player at Go. The significance of this can’t be overstated; in terms of agency, it’s the single biggest milestone towards true self-learning systems since Kasparov lost to Deep Blue. Many didn’t believe we’d get to this point for decades.

This revolution is happening in parallel with a robotic revolution. If you haven’t already, watch the demonstration video of Atlas, Boston Dynamics’ latest prototype. Try to watch the testers torment poor Atlas and not feel an empathetic twinge. And then try not to be slightly worried when it gets back up and carries on.

After well over half a million miles, Google’s driverless cars have caused their first crash. What’s most interesting about this isn’t that they’ve finally caused an accident; it’s that such an early blend of autonomy and machine learning went so long without doing so.

How do we create ethical guidelines for machines?
As we give our autonomous agents agency over our virtual and physical worlds, we need to give them guidelines for making decisions about our health, safety, and livelihood. The worrying thing is that we need to work this out sooner rather than later; as we’re already seeing, we don’t need sentience to create agency.

Agency brings responsibility, and it’s not yet clear what responsibility means in an algorithmic sense. If an algorithm has the power of life and death over not just its operator but countless others, what ethics must we build into the system?

Asimov tried to manage this through his three laws of robotics. While they make a great literary device, hard rules simply don’t work. Countless unintended consequences hide in these apparently profound laws, and deontological ethics doesn’t help with any of the problems above. How does one define “harm”? Is letting someone eat themselves to a slow death from diabetes murder, apathy, or respect for personal freedom and self-expression?
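To see how quickly a hard rule collapses, here is a minimal sketch, with entirely hypothetical actions and outcomes rather than any real autonomy stack: a “never harm a human” rule applied to the blown-tire scenario above permits nothing, because every available action harms someone, and so the rule gives no decision at all.

```python
# A minimal sketch (hypothetical actions and outcomes, not any real
# autonomy stack) of an Asimov-style hard rule: "never choose an action
# that harms a human."

# Invented outcomes for the blown-tire scenario described above.
ACTIONS = {
    "continue":     {"harms_human": True},   # likely kills the driver
    "swerve_left":  {"harms_human": True},   # hits the crowd
    "swerve_right": {"harms_human": True},   # enters the playground
}

def permitted(actions):
    """Return every action that does not harm a human."""
    return [name for name, outcome in actions.items()
            if not outcome["harms_human"]]

print(permitted(ACTIONS))  # [] -- the hard rule offers no guidance here
```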

Alternatively, we could guide decisions by comparing relative impacts. Maybe the life of an unborn child is worth more in the long run than that of a healthy adult male. Maybe not. Still, this is even harder: given microseconds to make a decision, consequentialism is only possible if our systems have enough intelligence to understand the consequences of their actions. How long until we can build systems with that level of intelligence? And do we even want to?
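By way of contrast, here is what “comparing relative impacts” might look like as a sketch, with invented probabilities standing in for the judgments that are actually hard: score each action by its expected fatalities and choose the minimum. The arithmetic is trivial; estimating those numbers reliably, in microseconds, from raw sensor data is the part that demands real intelligence.

```python
# A consequentialist sketch with entirely invented numbers. Each action
# maps to (probability of fatality, number of people at risk); the agent
# picks the action with the lowest expected fatalities.

ACTIONS = {
    "continue":     [(0.9, 1)],   # the driver
    "swerve_left":  [(0.4, 5)],   # the crowd
    "swerve_right": [(0.3, 8)],   # the playground
}

def expected_fatalities(outcomes):
    """Sum of probability-weighted deaths for one action."""
    return sum(p * n for p, n in outcomes)

best = min(ACTIONS, key=lambda a: expected_fatalities(ACTIONS[a]))
print(best, expected_fatalities(ACTIONS[best]))  # continue 0.9
```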

There are no easy answers, and yet we’re hurtling towards a world where we’ve created machines that are more than capable of agency. As a society, our desire for technology, innovation, and “magic” is propelling us towards a world we don’t yet know how to manage.

We need to work out some way of solving this. In fact, looking at the ever-accelerating rate of change, we’ll be there well before we expect to be. It’s our role as creators to give our agents the framework to make these decisions. The irony, of course, is that what’s moral and just in a given context is subjective and personal. And if ethical relativism is the reality, how can we ever hope to come up with a definitive moral compass for our creations?

As quickly as we invent new ways of embedding adaptive intelligence into the fabric of our world, we inevitably rush towards a deeper existential question: What world do we want to create?

For more on machine learning, check out the white paper: Machine Learning with SAS Enterprise Miner. The paper details how a team of SAS modelers created and determined a champion model to predict churn.

3 Comments

  1. Those scenarios are challenging even for humans to answer, and there is no definite answer; it depends on who is driving or making the decision.
    A more conservative approach to applying machine intelligence might therefore work better: only let the machine make decisions within a limited scope, where a definite decision can be made or a rule can be defined.
    For the car driving from Chicago to New York, assisted control would be locked out of activation in certain areas, because there isn't enough "cushion" space (e.g. 30 meters) on at least one side of the road. If a tire blowout can leave the agent facing such a tough decision (actually, there is no good option), then assisted control shouldn't be activated there at all.
    I'm also curious that, even as machine intelligence has developed so much, the car tire industry has stayed almost the same for many, many years. Maybe we could require all driverless cars to have twin tires, to be safe...

    • Great article on an important topic that cries out for further exploration.

      It is saddening that the standard human reaction to being presented with such difficult decisions is to refuse to answer or to bypass them with a 'magical' third option.
      I agree that we're headed towards a world that requires a solution to these types of problems - one that needs to be made excruciatingly explicit as part of actual deployed code - and we need to work out an approach.

      In that spirit, I would propose that the answer to both dilemmas presented above is: the right decision is the one that maximises expected quality-adjusted life years (which appropriately contracts to E[QALY]). Explicitly: continue forward, killing its own driver, in the first case; and in the second, try to save the pregnant woman if her survival odds are assessed at >= ~30%.

      That said, I'm not sure I would be able to make these right decisions were I actually in these circumstances.

      Like a (surprising) majority of people, I would prefer that *my* driverless car followed the directives 1: keep your passengers safe, 2: maximise E[QALY], in that order, even if this 'wrong' decision algorithm means a whole bunch of pointless deaths. From my perspective, this is a cooperation issue: I assume everyone else is going to use the same algorithm, and I'm not going to make a sacrifice to do the 'right' thing if I'm not receiving the benefit of others making the same sacrifice.
      If driverless cars reach saturation and programmers or legislation can somehow enforce that every car uses only the second, maximise-E[QALY] directive, then I would be 100% on board.
      For the second scenario, I don't know whether I'd make the right decision, only because I have little medical knowledge and am unable to estimate survival chances from given levels of bleeding. In that case, I have no qualms about giving my agency to an agent with better capabilities for discerning which decision actually serves my stated maximise-E[QALY] objective.

      I believe there is a non-deontological, objective morality system that is not majorly influenced by social norms the way ethical relativism is. It's just a matter of formalizing this system.

  2. A thought-provoking and timely article.

    Fundamentally, it illustrates that human decision-making is about more than how one (human or computer) maximises positional advantage in a chess or Go game.

    To Evan's point, whether ethics or morality can ever be modeled or learnt by a machine is still an open question, and that is indeed a worrying conclusion given the speed of change in machine learning.
