This article is a great reminder that reports of philosophy’s demise have been greatly exaggerated. How would we program a self-driving car to deal with the famous trolley problem? Sit down, eager utilitarians: this gets complicated quickly. A sample:
“Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?
One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.”
So it quickly becomes more than just “minimize loss of life” – and how is this to be resolved by an autonomous machine?
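Reduced to code, the naive utilitarian rule is almost embarrassingly simple, which is part of the problem. A minimal sketch, in which the function name, action labels, and casualty numbers are all hypothetical, just to make the article’s dilemma concrete:

```python
def choose_action(predicted_deaths):
    """Naive utilitarian rule: pick the action predicted to kill the fewest.

    `predicted_deaths` maps an action label to an estimated death count.
    Everything here is a toy illustration, not a real control policy.
    """
    return min(predicted_deaths, key=predicted_deaths.get)

# The article's scenario, as data:
dilemma = {
    "continue": 10,  # plow into the crowd of 10 crossing the road
    "swerve": 1,     # steer into the wall, killing the owner-occupant
}

print(choose_action(dilemma))  # -> swerve
```

The one-liner hides everything contested: it needs death-count predictions no sensor can reliably supply, and, as the quoted passage notes, a fleet programmed this way may sell worse and so cost more lives overall than it saves.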