When it comes to self-driving cars, what’s safe enough?

As autonomous cars take to the streets with empty driver’s seats, experts are still debating: How safe is “safe enough” for self-driving vehicles? (Image: chombosan/Shutterstock)

Self-driving vehicles passed a major milestone in November when Waymo’s minivans hit the streets of Phoenix without backup human drivers — reportedly making them the first fleet of fully autonomous cars on public roadways. Over the next few months, people will get a chance to take these streetwise vehicles for a free spin as the company tries to drum up excitement — and a customer base — for its launch of a driverless taxi service.

But even as these cars are ditching human supervisors, many people doubt the safety of machine motorists. A whopping 85 percent of baby boomers and even 73 percent of millennials confess to being afraid to ride in self-driving cars, according to a recent AAA survey. And while Waymo claims its vehicles are designed to be the world’s most experienced drivers — based on real-world road tests as well as millions of miles clocked in virtual driving — there’s still no consensus among experts about what counts as “safe enough” for street-smart cars.

It’s especially difficult to tell whether self-driving cars have earned their licenses when scientists are still writing the driver’s test.  

Besides the sheer convenience of being able to take your hands off the wheel, the major appeal of self-driving cars is safer roadways. After all, mechanical chauffeurs can’t get drunk or distracted — factors involved in 29 and 10 percent of fatal accidents, respectively. But the only surefire way to evaluate autonomous cars’ reliability is test-driving them in real traffic, explains Nidhi Kalra, an information scientist at the RAND Corporation in San Francisco. “I think a lot of people were thinking, ‘Oh, we’ll just wait until the companies do enough test-driving,’” she says. “You could wait until the next millennium until that happens.”

In a 2016 study, Kalra and a colleague showed that self-driving cars would have to trek hundreds of millions or perhaps billions of miles to demonstrate with comfortable certainty that they caused fewer fatalities than the average person (about 1.1 per 100 million miles driven). Based on the current number of self-driving cars, that task could take decades or centuries to complete.
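
The statistics behind those daunting mileage figures are straightforward to sketch. Under a simple model where fatalities arrive at a constant rate per mile, the classic “rule of three” says that driving N miles with zero fatalities caps the true rate at about 3/N with 95 percent confidence; proving a fleet is actually safer than humans takes vastly more driving. The back-of-envelope calculation below illustrates that logic, not the study’s exact method:

```python
# Back-of-envelope: how many test miles does a statistical safety claim need?
# Illustrative only -- the RAND study's exact model differs in its details.
import math

HUMAN_RATE = 1.1e-8  # ~1.1 fatalities per 100 million miles driven

# 1) "Rule of three": zero fatalities in N miles bounds the true rate below
#    about 3/N with 95% confidence. Failure-free miles just to MATCH humans:
print(f"Miles to match human rate: {3 / HUMAN_RATE:.3g}")  # ~2.7e8 (270 million)

# 2) Demonstrating the fleet is X% SAFER than humans (95% confidence, 80%
#    power), via a normal approximation to the Poisson fatality count:
def miles_to_beat_humans(improvement, z_alpha=1.645, z_beta=0.842):
    """Miles needed so the expected fatality gap exceeds statistical noise."""
    # Require: improvement*lam >= z_alpha*sqrt(lam) + z_beta*sqrt((1-improvement)*lam),
    # where lam = HUMAN_RATE * miles is the expected count at the human benchmark.
    lam = ((z_alpha + z_beta * math.sqrt(1 - improvement)) / improvement) ** 2
    return lam / HUMAN_RATE

for pct in (0.10, 0.20):
    print(f"Miles to show {pct:.0%} safer: {miles_to_beat_humans(pct):.3g}")
# Roughly 5e10 (50 billion) miles for 10% safer -- decades for any real fleet.
```

Even this simplified arithmetic lands in the tens of billions of miles, which is why Kalra calls pure test-driving a next-millennium proposition.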

Tech developers hardly have that kind of time, so companies like Waymo assess their vehicles’ safety by pairing real driving time with practice on a private track and millions of miles a day in computer simulations.

Still, simulations can’t replace the value of actual road experience, says Philip Koopman, an electrical and computer engineer at Carnegie Mellon University in Pittsburgh. “What about the scenarios they didn’t know [to simulate]?” he says. “Weird, weird, weird stuff happens out on the roadways.”

Since current self-driving safety assurances aren’t exactly airtight, Koopman argues that self-driving cars should be held to a far higher standard than human drivers — say, 10 times safer than the average human — before they’re given the green light. That margin, he reasons, would leave enough room for error to be confident the driverless car actually is safer.

But getting to that point could take a long time, and waiting could forfeit the chance to save many lives, Kalra says. She’s confident because her team forecast a future — actually, lots of different futures — in which self-driving cars hit the road once they were 10, 75 or 90 percent safer than the average human driver. At 10 percent safer, fatalities drop only slightly, from about 1.1 to roughly one death per 100 million miles. That may not seem like much, but because modestly better cars would be ready to roll decades sooner, some 500,000 lives could be saved between 2020 and 2050, the team forecasts, compared with the imagined futures where people hold out for far higher safety standards.
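
The shape of that trade-off can be seen in a toy calculation: compare cumulative road deaths when modestly safer cars deploy right away against a future where deployment waits decades for near-perfect ones. Every number in the sketch below (national mileage, adoption ramps, timelines) is an assumed placeholder, not a figure from the RAND model, which is far more detailed:

```python
# Toy comparison of two deployment policies, loosely in the spirit of the RAND
# forecasts. All mileage, adoption, and timing figures are assumed placeholders.
HUMAN_RATE = 1.1e-8    # fatalities per mile for human drivers
ANNUAL_MILES = 3.0e12  # rough US total: ~3 trillion vehicle miles per year

def deaths_2020_2050(av_safety_gain, start_year, ramp_years=20):
    """Cumulative fatalities over 2020-2050 if autonomous vehicles
    (av_safety_gain fraction safer than humans) begin deploying in
    `start_year` and their share of miles ramps linearly to 100%."""
    av_rate = HUMAN_RATE * (1 - av_safety_gain)
    total = 0.0
    for year in range(31):  # 2020 through 2050
        share = min(max((year - start_year) / ramp_years, 0.0), 1.0)
        total += (share * av_rate + (1 - share) * HUMAN_RATE) * ANNUAL_MILES
    return total

# Deploy 10%-safer cars now vs. waiting ~25 years to certify 90%-safer ones
# (statistically proving that level of safety could itself take decades).
deploy_early = deaths_2020_2050(0.10, start_year=0)
wait_for_90 = deaths_2020_2050(0.90, start_year=25)
print(f"Lives saved by deploying early: {wait_for_90 - deploy_early:,.0f}")
```

Under this particular set of made-up assumptions, early deployment comes out tens of thousands of deaths ahead; the RAND team’s fuller model, with its own assumptions about fleet growth and improvement, put the gap in the hundreds of thousands.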

But just aiming for 10 percent safer doesn’t provide much margin for error, Koopman argues. “You’re cutting it pretty close.”

And a lower safety standard could mean more accidents at first — and a public backlash, says Azim Shariff, a psychologist at the University of California, Irvine. People may be less inclined to accept mistakes made by machines than humans, and research has shown that people are more risk-averse when it comes to risks that they can’t control.  

“What happens when a 4-year-old in the back of a car that’s operated by her mother gets killed by an autonomous car?” Shariff asks.

Success depends on buy-in. “So public opinion is really going to matter,” Shariff says.

Right now, most Americans may not be lining up to hop aboard fully autonomous cars. But “once people start knowing people who have been in them and lived to tell the tale, so to speak, I think it will change quickly,” says David Groves, a policy analyst at the RAND Corporation in Santa Monica, Calif.

Kalra also suspects that people will fear autonomous cars less when the National Highway Traffic Safety Administration establishes a self-driving car safety rating, like its crash test ratings for traditional cars. That kind of rating system “will probably come after the technology is on the road, just as it did for regular cars,” she says. “We didn’t have a safety rating system when the Model T came out. It sounds like it’s the cart before the horse to have cars before safety ratings, but that’s often how it happens.”

Previously the staff writer for physical sciences at Science News, Maria Temming is the assistant managing editor at Science News Explores. She has bachelor's degrees in physics and English, and a master's in science writing.