On the Uber Autonomous Car Accident
Tech companies don’t have the patience to advance autonomy safely
Auto journalists and other critics weighing in on last week's tragic Uber driverless-car accident have split into two factions. Those who never want to see robotic cars take over blame Uber and its inattentive "safety driver," who appeared to be looking down, away from the road, when Elaine Herzberg suddenly appeared in front of the Volvo XC90 test vehicle in the dark of night in Tempe, Arizona. Those who consider the rapidly developing technology either a good thing for safety overall or at least an inevitable one say no driver, human or otherwise, could have reacted in time to save Ms. Herzberg, who died of her injuries after she was taken to the hospital. Neither argument is wrong.
Autonomy skeptics have anticipated the problem of the first such accident for years. What happens when a driverless vehicle must decide whether to swerve right and hit a bicyclist or pedestrian, or swerve left into oncoming traffic instead?
The first fatality, though, involved no such conundrum. Ms. Herzberg appeared suddenly, proverbially out of nowhere, and nothing in the video or photos released of the accident, which include footage from a camera trained on Uber safety driver Rafael Vasquez's face, indicates there were any other obstacles the Volvo might have hit had it swerved, or braked, suddenly.
The issue is not whether the safety driver should have reacted, or whether the pedestrian walking her bike should not have jaywalked, though those questions ultimately will determine "fault." It is whether the vehicle's sensors should have detected the pedestrian and bike before they appeared a few feet ahead of the headlamps. Ideally, robot-controlled cars never have to face decisions about what to hit, because the right combination of sensors and software brakes the car to a stop before it gets too close.
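To put rough numbers on that idea, here is a back-of-envelope stopping-distance sketch. Nothing in it comes from the Uber investigation; the speed, system latency, and braking deceleration are assumed values chosen only for illustration.

```python
# Back-of-envelope check: how far does a car travel from the moment its
# sensors detect an obstacle to a full stop? All inputs are illustrative
# assumptions, not figures from the Tempe investigation.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second
G = 9.81             # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mph: float,
                        system_latency_s: float = 0.5,
                        braking_decel_g: float = 0.7) -> float:
    """Detection-to-standstill distance in meters: latency travel + v^2/(2a)."""
    v = speed_mph * MPH_TO_MS
    latency_travel = v * system_latency_s          # distance covered before brakes bite
    braking = v ** 2 / (2 * braking_decel_g * G)   # kinematic braking distance
    return latency_travel + braking

if __name__ == "__main__":
    # At an assumed 40 mph, the car needs roughly 32 m from detection to standstill.
    print(f"{stopping_distance_m(40.0):.0f} m")
```

At those assumed values, the car needs roughly 32 meters to stop, while lidar suppliers routinely claim detection ranges of 100 meters or more, which is why the question of what the sensors saw matters far more than any swerve-left-or-right dilemma.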
I'd like to believe that automakers are taking such precautions before they let any such cars loose on public roads for testing. Bloomberg reported that in most cases, autonomous test cars carry both a safety driver and a safety passenger in the front seats, the latter monitoring data on a laptop. It's not clear whether Mr. Vasquez was looking at such data when he failed to spot Ms. Herzberg at the earliest possible moment, but he was alone in the Uber test vehicle.
I've criticized Uber in the past for its aggressive testing of autonomous systems, first in the Pittsburgh area and then in Arizona, a state that has promoted its permissive driverless-testing regulations to draw more automotive and technology companies.
Arizona is not alone. Several other states, including Michigan, have opened their roads to autonomous testing, even as the American Center for Mobility in Ypsilanti, Michigan, continues to build a multimillion-dollar facility that will allow testing in realistic, though controlled, conditions without placing the general public at risk.
Volvo has had its own such facility in Sweden for years, though it's not nearly as sophisticated as the ACM will be when completed. The Swedish automaker was the first to step up and say it will take responsibility for any accident that occurs while one of its vehicles is operating autonomously. That promise assumes a Volvo-designed system is at fault.
Uber effectively voided that warranty when it retrofitted its own robotic-car technology to the XC90s. The question is, why? We know Uber is eager to get rid of its human drivers and start making money. But what gave this consistently ill-managed tech company the notion it could develop autonomy faster, more effectively, and at least as safely as Volvo could?
Meanwhile, Volvo is testing its own systems at a slower, more deliberate pace, and has even scaled back its Drive Me program, under which 100 Level 4 XC90s were to be distributed to families in its hometown of Gothenburg, Sweden, this year. Volvo's longstanding goal is zero fatalities in its cars by 2020.
Silicon Valley has neither the patience of Detroit, Stuttgart, Munich, Wolfsburg, Seoul, Tokyo, or Gothenburg, nor those automakers' adversarial relationship with safety advocates and regulators. A software crash never had the same implications as an auto crash, until now. Tesla CEO Elon Musk "discovered" autonomy earlier this decade, and within a couple of years his cars were leading the self-driving technology race, until a Model S owner collided with a semi and died. Tesla was not held responsible, but now General Motors, whose Super Cruise not-quite-Level 3 system is evolving slowly and carefully, and relies in part on 180,000 mapped miles of North American roads, is leading the tech race with the Cadillac CT6.
Like it or not, we'll see Level 4, though not Level 5, autonomy sometime in the next decade if automakers, and maybe Google's Waymo (which has patiently advanced the technology for nearly a decade), are allowed to take, or regain, the development lead. If companies like Uber are let loose to test high-level autonomy in the Wild West, it's only a matter of time before regulators or public outcry brings robotic/driverless cars to a screeching halt.