[Leonid Bershidsky] Ask voters if they want more driverless cars

By Bloomberg

Published: March 22, 2018 - 17:32

There’s ugly symbolism in the deadly accident that took place in Tempe, Arizona, late on Sunday evening. A self-driving Volvo operated by Uber ran over Elaine Herzberg, 49, an apparently homeless woman, as she pushed a bicycle loaded with plastic bags into the street.

Even though Tempe police are not inclined at this point to blame the Uber vehicle -- Herzberg apparently stepped into the road suddenly from the shadows -- optics such as these are likely to set back the autonomous vehicle industry. And that’s not a bad thing. This is still a world populated and run by humans, and we humans should be given more time to decide whether we want machines to take over our roads. The issue is ethical as much as technological.

The developers like to point out that, according to the US Department of Transportation, 94 percent of vehicle crashes are caused by driver error. Of those, 41 percent stem from “recognition errors,” meaning the driver failed to notice that something was about to go wrong. “Decision errors” -- driving too fast, illegal maneuvers -- account for another 33 percent; bad driving technique and falling asleep at the wheel follow, at 11 percent and 7 percent of driver-caused crashes, respectively. If one assumes that autonomous vehicles are in perfect control of their environment, programmed to obey the rules and able to drive better than the average human, eliminating these errors could save more than a million lives worldwide every year. And, of course, robots never fall asleep at the wheel.
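For a rough sense of scale, here is the back-of-envelope arithmetic behind that claim, sketched in Python. It uses the figures cited above plus the WHO’s oft-cited estimate of roughly 1.25 million road deaths a year worldwide, which is an outside assumption, not part of the DOT data:

# Illustrative arithmetic only; the percentages are the US DOT figures cited above.
world_road_deaths_per_year = 1_250_000   # WHO estimate (assumption)
driver_error_share = 0.94                # share of crashes caused by driver error
recognition, decision, technique, asleep = 0.41, 0.33, 0.11, 0.07
upper_bound = world_road_deaths_per_year * driver_error_share
print(f"Upper bound on lives saved per year: {upper_bound:,.0f}")   # ~1,175,000
print(f"Driver error itemized above: {recognition + decision + technique + asleep:.0%}")  # 92%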

So far, crash statistics show that autonomous vehicles get into fewer crashes than those driven by humans. But these statistics are skewed, probably in both directions. On the one hand, humans don’t report every accident to the police, while every self-driving vehicle accident is scrupulously recorded. On the other, the autonomous cars are new vehicles in perfect technical condition, and they are mostly driven in US cities with warm climates and streets laid out on a neat grid. The assumption of greater safety still requires a leap of faith, especially when it comes to fatalities: the US rate for 2016 was 1.18 road deaths per 160 million vehicle kilometers (100 million miles) driven.

The biggest driverless car testing programs, run by Alphabet’s Waymo and by Uber, have covered a total of about 13 million kilometers -- and there’s already a fatality.
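To see why 13 million kilometers is far too small a sample to settle the question, compare the implied rates -- a rough sketch using only the figures above, not an authoritative dataset:

# Illustrative comparison using only the numbers cited in this column.
human_rate_per_km = 1.18 / 160e6   # US 2016 rate: 1.18 deaths per 160 million km
av_km, av_deaths = 13e6, 1         # combined test distance and the Tempe fatality
print(f"Deaths expected of human drivers over that distance: {human_rate_per_km * av_km:.2f}")  # ~0.10
print(f"Observed autonomous rate vs. human rate: ~{(av_deaths / av_km) / human_rate_per_km:.0f}x")  # ~10x
# A single event over so few kilometers proves little in either direction --
# which is precisely why the safety claim still requires a leap of faith.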

Ever since Google, back in 2015, blamed humans for pretty much all the accidents involving its self-driving cars, there has been much discussion of how to apportion blame for such crashes. If Elaine Herzberg’s death is deemed to be her own fault, driverless cars may be allowed to go on as before. Blame attribution is never quite binary, though. Human drivers will sometimes hit autonomous cars because the robots don’t drive like humans, and other drivers fail to anticipate the logic of their behavior; technically, that is the humans’ mistake -- but on a deeper level, it is also the designers’.

That, however, is not the whole story. Take what local police have said about the Tempe case. Uber’s Volvo was apparently doing 60 kph in an area with a 55 kph speed limit, in darkness deep enough that a person stepping into the road from the shadows came as a complete surprise. The safety driver at the wheel of the vehicle -- who wasn’t driving at the time of the crash -- said she wouldn’t have been able to avoid hitting Herzberg, either. But then, she’s only human. Shouldn’t a robot programmed for total safety, to save lives above all, have gone much slower than the permitted speed if darkness prevented it from remaining in full control of its environment? And even if the accident was Herzberg’s fault, would going as fast as the car did qualify as a “decision error”? In general, is there a way to program a machine to take into account the whole complexity of a real-life situation and react to it better than a human does?
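That question is ultimately a design parameter someone has to write into code. A minimal sketch of what such a rule could look like, in Python -- a hypothetical function with made-up thresholds and simplified physics, not any vendor’s actual planner:

def safe_speed_kph(posted_limit_kph: float, visibility_m: float,
                   braking_decel_mps2: float = 4.0) -> float:
    """Cap speed so the car can stop within the distance its sensors can see.
    Hypothetical illustration of the "go slower in the dark" rule discussed
    above; it ignores reaction latency, road surface and much else.
    """
    # Stopping distance d = v^2 / (2a), so v = sqrt(2 * a * d)
    stoppable_mps = (2 * braking_decel_mps2 * visibility_m) ** 0.5
    return min(posted_limit_kph, stoppable_mps * 3.6)  # m/s -> km/h

# In a 55 kph zone with only ~15 m of usable visibility, the cap drops to
# about 39 kph -- well below the limit the Uber car was exceeding.
print(safe_speed_kph(55, visibility_m=15))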

That’s a matter of ethics as much as technology. In a recent paper, Dieter Birnbacher of the University of Duesseldorf in Germany and Wolfgang Birnbacher of IBEO Automotive Systems listed some of the questions to ask:

Programming a certain risk behavior into a machine not only has consequences in critical situations but also defines the driving style generally. How safe is safe enough? How safe is too safe? Excessive safety would paralyze road traffic and seriously hamper acceptance of autonomous vehicles. Giving leeway to risky driving styles would jeopardize the safety objectives. How egalitarian does an automatized driving system have to be? Is a manufacturer allowed to advertise with fast cars at the price of lowered safety for other road users?

That last question is directly relevant to Uber. As a taxi company, it has an interest in maximizing the number of rides, and thus in having its vehicles drive as fast as possible. Even if its cars abide by speed limits, is that incentive conducive to slowing down when darkness or bad weather impairs visibility and lengthens reaction times?

Driverless vehicle developers stress that they are 100 percent focused on safety. Somehow, though, US consumers aren’t convinced. According to a 2017 Pew Research poll, 56 percent of them wouldn’t ride in a driverless car if given the chance; most simply wouldn’t trust it enough. Nine percent of those asked said they enjoy driving and don’t want their emotional relationship with cars ruined.

Of course, that reluctance could change. According to the American Automobile Association, millennials are especially open to the appeal of driverless cars, with “only” 49 percent of them reporting they’d be afraid to ride in an autonomous vehicle. But even among the young, there is no strong majority demanding more driverless cars on the road.

The right way to go about it would be to ask voters in every city that considers allowing autonomous car tests whether they want to permit them, and if so, in which roles: as taxis, as delivery vehicles, or as special transport for the elderly and the handicapped. I suspect the latter options would get more understanding and more support. That would be the place to start convincing the general public that autonomous cars can be a good thing. So far, the jury is deservedly still out.


Leonid Bershidsky is a Bloomberg View columnist. -- Ed.

(Bloomberg)