The trolley problem is a classic ethical dilemma that asks whether you would sacrifice one person to save many from a runaway trolley. In the standard version: would you pull a lever to divert the trolley away from five workers on one track, knowing it would then hit a single worker on another?
This problem is important to generative AI because it illustrates the challenges of programming machines to make ethical decisions that may involve human lives. For instance:
- How should a self-driving car decide whom to save or harm in an unavoidable crash?
- How should a medical robot prioritise patients in an emergency?
- How should a military drone distinguish between combatants and civilians?
Different people may have different moral values and preferences, so there is no clear-cut answer to the trolley problem. Moreover, AI systems may not have all the relevant information or context to make the best decision. Therefore, it is crucial to ensure that generative AI systems are aligned with human values, transparent in their reasoning, and accountable for their actions.
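To make that difficulty concrete, here is a minimal, purely illustrative Python sketch. The `Option` encoding and both policy functions are hypothetical simplifications invented for this example, not part of any real system; the point is only that two common moral frameworks, hard-coded as decision rules, reach opposite conclusions on the very same scenario.

```python
from dataclasses import dataclass


@dataclass
class Option:
    """One possible action and the harm it would cause."""
    name: str
    casualties: int
    requires_intervention: bool  # does the agent actively cause the harm?


# The classic scenario: stay the course (five die) or pull the lever (one dies).
OPTIONS = [
    Option("do nothing", casualties=5, requires_intervention=False),
    Option("pull lever", casualties=1, requires_intervention=True),
]


def utilitarian_choice(options):
    """Pick whichever action minimises total casualties."""
    return min(options, key=lambda o: o.casualties)


def deontological_choice(options):
    """Refuse any action that actively causes harm, regardless of the totals."""
    permissible = [
        o for o in options
        if not (o.requires_intervention and o.casualties > 0)
    ]
    # Fall back to inaction if every intervention is impermissible.
    return permissible[0] if permissible else options[0]


print("Utilitarian rule chooses:  ", utilitarian_choice(OPTIONS).name)    # pull lever
print("Deontological rule chooses:", deontological_choice(OPTIONS).name)  # do nothing
```

Neither output is "correct": whichever rule a developer ships, they have made the ethical choice on behalf of everyone the system affects, which is exactly why alignment, transparency, and accountability matter.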