ScienceDaily has an interesting article on the moral dilemmas that could face autonomous cars.
Iyad Rahwan of MIT’s Media Lab surveyed people on how safety rules should be programmed into these cars. People generally held a utilitarian view. For example, in a situation where a car could either career into a group of schoolchildren or smash into a wall, killing its occupants, most people believed it should do the latter. More lives are saved.
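To make that utilitarian rule concrete, here’s a minimal sketch in Python — purely illustrative, not any real vehicle’s control logic — of a decision rule that picks whichever maneuver minimizes total expected deaths:

```python
# Illustrative sketch only: a utilitarian rule that minimizes total expected deaths.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_occupant_deaths: float
    expected_pedestrian_deaths: float

    @property
    def total_expected_deaths(self) -> float:
        # Utilitarian rule weighs all lives equally, occupants included.
        return self.expected_occupant_deaths + self.expected_pedestrian_deaths

def choose_utilitarian(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the lowest total expected deaths."""
    return min(options, key=lambda m: m.total_expected_deaths)

# Hypothetical scenario from the survey: hit the group, or hit the wall.
options = [
    Maneuver("continue toward schoolchildren", 0.0, 5.0),
    Maneuver("swerve into wall", 1.0, 0.0),
]
print(choose_utilitarian(options).name)  # -> "swerve into wall"
```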
However, there’s a tragedy of the commons hiding behind this socially responsible facade. When asked whether they would actually purchase a car programmed to act in this manner, most people said “No.”
In other words: promote the common good, as long as I’m not the one at risk.
Asimov’s Three Laws of Robotics offer little guidance here, as they deal in absolutes rather than trade-offs. And it’s the trade-offs that generally provide us with the thorniest dilemmas.
Self-driving cars represent an early attempt to place hordes of robots in regular, direct contact with the general public. Until now, we’ve largely confined them to factories…or space…or used them in highly specialized environments (e.g., military operations). As robots start to move among us, these ethical dilemmas will become increasingly significant.