The first death in a self-driving car occurred in May 2016. It was in a Tesla Model S travelling along a Florida highway.
Apparently the car’s Autopilot system got confused by a white truck against a bright sky and failed to brake. When AI systems fail, they sometimes fail big.
Tesla noted in a press release that this was the first fatality in 130 million miles of Autopilot driving, compared with an average of one fatality per 94 million miles of conventional driving. While those numbers favor Autopilot, the margin hardly seems large enough to convince people to give up control.
Of course, this is both a technology problem and a systems problem. AI in cars is only going to get better. And as more self-driving cars take to the road, their predictability, and their ability to communicate their intentions to one another, should lead to quantum leaps in safety. There is always, though, the fear of a system-wide failure that would result in the worst pileup in history.
What I found strange was another point Tesla made in the press release: that the self-driving feature is in beta and designed to be only semi-autonomous. Their definition of semi-autonomous is that the driver must keep their hands on the steering wheel. That would seem to suggest that all their customers have been recruited as testers on public roads! Sitting with my hands on the steering wheel while my car drives would, I think, dull my reaction times to an unsafe level.