There's a thought experiment in ethics called the Trolley Problem, which generally goes like this (as Wikipedia relates):
“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
- Do nothing, and the trolley kills the five people on the main track.
- Pull the lever, diverting the trolley onto the side track where it will kill one person.”
The question is: which of these two options is the ethical choice? With option 1, you allow five people to be killed but are not actively involved in the decision to kill anyone. On the other hand, if you pull the lever, only one person is killed – yet you are actively involved in the killing.
This moral dilemma becomes more than a thought experiment as interest grows in automating complex environments. The obvious example is autonomous vehicles, which combine readings from many sensors with geographic data to navigate without human intervention. One of the promises of autonomous vehicles is increased safety through fewer collisions and improved traffic flow.
It's easy to see how the trolley problem applies to autonomous cars. Let's say a self-driving car is about to collide with a school bus. One simple adjustment will change the car's trajectory so that instead of hitting the bus it will hit a single pedestrian crossing the street. But this case is different from the traditional trolley problem – because instead of a person making the decision, it's an algorithm that's presented with the choice, based on the sequences of sensor and event data being streamed into the system.
The autonomous car is not the only example where an event processing system becomes a proxy for making difficult decisions. Some other examples might include:
- A power utility system is expected to automate the restoration of power after an outage. A hospital desperately needs its power restored for continued emergency room operations, but restoring it would require shutting down some residential service, including a home where a sick person depends on continuous life-support machinery.
- A factory floor is on fire with a risk of spreading to a nearby office building with hundreds of workers. An automated system can cut off oxygen to the factory floor and shut all vents to kill the flames, but there are still a few people on the factory floor.
The real questions revolve around the application developers’ approaches to implementing the event stream processing system, the predictive models used, and the quality of the information that's being streamed. This last issue is critical. If life and death decisions are being made based on situational awareness driven by data, we cannot allow there to be any risk that incorrect data will force a deadly choice when the reality did not require that choice to be made.
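To make the data-quality concern concrete, here is a minimal sketch of the kind of guard an event stream processing pipeline might place in front of its decision logic. Everything here is illustrative and hypothetical – the event fields (`sensor_id`, `timestamp`, `obstacle_range_m`), the thresholds, and the function names are assumptions, not part of any real system. The idea is simply that stale or implausible readings are rejected, and that an irreversible action requires corroboration from multiple independent sensors.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str           # hypothetical field names, for illustration only
    timestamp: float         # seconds since epoch, when the reading was taken
    obstacle_range_m: float  # reported distance to the obstacle, in meters

def is_plausible(event: SensorEvent, now: float,
                 max_age_s: float = 0.5,
                 min_range_m: float = 0.0,
                 max_range_m: float = 200.0) -> bool:
    """Reject stale or physically implausible readings before they
    reach the decision logic."""
    if now - event.timestamp > max_age_s:
        return False  # stale data: the world may have changed since this reading
    if not (min_range_m <= event.obstacle_range_m <= max_range_m):
        return False  # out-of-range value: likely a sensor fault
    return True

def corroborated(events: list[SensorEvent],
                 min_sources: int = 2,
                 tolerance_m: float = 5.0) -> bool:
    """Allow an irreversible action only when at least `min_sources`
    sensors agree on the obstacle range to within `tolerance_m`."""
    ranges = sorted(e.obstacle_range_m for e in events)
    if len(ranges) < min_sources:
        return False
    return (ranges[-1] - ranges[0]) <= tolerance_m

# Example: a fresh lidar reading passes, a ten-second-old radar reading does not,
# and two agreeing sensors corroborate each other while one alone does not.
now = 1000.0
fresh = SensorEvent("lidar", 999.9, 42.0)
stale = SensorEvent("radar", 990.0, 42.0)
agreeing = [fresh, SensorEvent("radar", 999.8, 43.5)]
```

Gates like these don't resolve the ethical dilemma, but they narrow it: the system should only ever face a trolley-style choice when the data forcing that choice has been validated, so that a phantom obstacle never triggers a real sacrifice.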
In my next post we'll look at some different scenarios that need to be considered.