The trolley problem and self-driving cars

The trolley problem is a famous thought experiment in philosophy.

You are walking near a trolley-car track when you notice five people tied to it in a row. The next instant, you see a trolley hurtling toward them, out of control. A signal lever is within your reach; if you pull it, you can divert the runaway trolley down a side track, saving the five — but killing another person, who is tied to that spur. What do you do? Most people say they would pull the lever: Better that one person should die instead of five.
 
Now, a different scenario. You are on a footbridge overlooking the track, where five people are tied down and the trolley is rushing toward them. There is no spur this time, but near you on the bridge is a chubby man. If you heave him over the side, he will fall on the track and his bulk will stop the trolley. He will die in the process. What do you do? (We presume your own body is too svelte to stop the trolley, should you be considering noble self-sacrifice.)

In numerical terms, the two situations are identical. A strict utilitarian, concerned only with the greatest happiness of the greatest number, would see no difference: In each case, one person dies to save five. Yet people seem to feel differently about the “Fat Man” case. The thought of seizing a random bystander, ignoring his screams, wrestling him to the railing and tumbling him over is too much. Surveys suggest that up to 90 percent of us would throw the lever in “Spur,” while a similar percentage think the Fat Man should not be thrown off the bridge. Yet, if asked, people find it hard to give logical reasons for this choice. Assaulting the Fat Man just feels wrong; our instincts cry out against it.

Nothing intrigues philosophers more than a phenomenon that seems simultaneously self-evident and inexplicable. Thus, ever since the moral philosopher Philippa Foot set out Spur as a thought experiment in 1967, a whole enterprise of “trolleyology” has unfolded, with trolleyologists generating ever more fiendish variants.
 

Entire books have been devoted to the subject, including the humorously titled The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge?: A Philosophical Conundrum and the similarly named Would You Kill the Fat Man?: The Trolley Problem and What Your Answer Tells Us about Right and Wrong. As if the obese didn't have enough problems already, they can also stumble into philosophical quandaries merely by walking across bridges at inopportune moments.

In the abstract, the trolley problem can seem frivolous. Such dilemmas can prove all too real, however, and just around the corner lurks a technological breakthrough that will force us to confront the trolley problem anew: the self-driving car.

Say you're sitting alone in your self-driving car, idly playing on your phone while the car handles the driving, when a mother and child suddenly step out from between two parked cars on the side of the road. There isn't enough distance left to brake, and if the car swerves to avoid the mother and child, it will fly off a bridge and send you to certain death. What should the car's driving software be programmed to do in that situation?

That problem is the subject of an article in Aeon on automated ethics.

A similar computer program to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

Pushing someone to their death with your bare hands is deeply problematic psychologically, even if you accept that it’s theoretically no better or worse than killing them from 10 miles away. Meanwhile, allowing someone at a distance – a starving child in another country for example – to die through one’s inaction seems barely to register a qualm. As philosophers such as Peter Singer have persuasively argued, it’s hard to see why we should accept this.
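
To make the contrast concrete, here is a minimal sketch of what the two policies look like as decision rules. Everything in it is hypothetical and purely illustrative (the Outcome type, the example numbers, the whole level of abstraction); no actual self-driving software reasons this way.

```python
# Toy sketch of the two ethical policies discussed above.
# All names and numbers are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    deaths: int                  # how many people die under this choice
    requires_direct_harm: bool   # does the agent actively kill someone?

def consequentialist_choice(outcomes: list[Outcome]) -> Outcome:
    """Strict utilitarianism: minimize deaths; nothing else matters."""
    return min(outcomes, key=lambda o: o.deaths)

def deontological_choice(outcomes: list[Outcome]) -> Outcome:
    """Rule-based: never pick an option that actively kills someone,
    even if refusing costs more lives overall."""
    permissible = [o for o in outcomes if not o.requires_direct_harm]
    return min(permissible or outcomes, key=lambda o: o.deaths)

# The footbridge case: both policies see the same numbers,
# but only the consequentialist pushes the fat man.
footbridge = [
    Outcome("do nothing; the trolley hits five", deaths=5, requires_direct_harm=False),
    Outcome("push the fat man onto the track", deaths=1, requires_direct_harm=True),
]

print(consequentialist_choice(footbridge).description)  # push the fat man...
print(deontological_choice(footbridge).description)     # do nothing...
```

The point of the sketch is that the disagreement isn't about the arithmetic, which both functions share; it's about whether "requires_direct_harm" is allowed to veto the arithmetic at all.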
 

If a robot programmed with Asimov's Three Laws of Robotics were confronted with the trolley problem, what would the robot do? There are long threads dedicated to just this question.
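
For what it's worth, here's one toy way (entirely hypothetical, not drawn from any real robotics code) to see why a literal reading of the First Law ("a robot may not injure a human being or, through inaction, allow a human being to come to harm") deadlocks on the trolley problem:

```python
# Toy sketch: a naive First Law checker faces the spur case.
# The scenario and numbers are hypothetical.

def first_law_permits(action_kills: int, inaction_kills: int, acting: bool) -> bool:
    """True only if the choice violates neither clause of the First Law."""
    if acting:
        return action_kills == 0      # may not injure a human being
    return inaction_kills == 0        # may not, through inaction, allow harm

# Act (divert the trolley, killing one) or don't (five die).
for acting in (True, False):
    ok = first_law_permits(action_kills=1, inaction_kills=5, acting=acting)
    print(f"{'pull lever' if acting else 'do nothing'}: permitted={ok}")

# Both options come back forbidden: the naive rule has no answer,
# which is exactly what makes the trolley problem hard for rule-based ethics.
```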

Lots of people have already foreseen this core ethical problem with self-driving cars. I haven't seen any consensus on a solution, though. Not an easy problem, but one that we now have to wrestle with as a society.

Or, at least, some people will have to wrestle with the problem. Frankly, I'm happy today when my Roomba doesn't get itself stuck during one of its cleaning sessions.