The Hurdles Facing Autonomous Vehicles

So near, but so far away.

Tesla Motors CEO Elon Musk saunters onstage in front of 3,000 admiring Silicon Valley techies at a conference devoted to graphics processing units. Jen-Hsun Huang, CEO of Nvidia, the conference’s sponsor, has just unveiled a new GPU designed for automotive applications and invited Musk—“the engineer of engineers,” as he calls him—to riff on the future of autonomous cars.

Musk starts cranking out TED Talk-ready sound bites. “I almost view it as a solved problem. We know exactly what to do, and we’ll be there in a few years. We’ll take autonomous cars for granted in quite a short period of time,” he says matter-of-factly. “You’ll be able to tell your car, ‘Take me home,’ go here, go there, or anything, and it’ll just do it. It’ll be an order of magnitude safer than a person. In the distant future, people may outlaw driving cars because it’s too dangerous. You can’t have a person driving a two-ton death machine.”

Tesla Motors CEO Elon Musk sees a future where humans don’t operate cars. Will robopods dominate instead?

There are no gasps in the packed auditorium, no expressions of surprise or skepticism. On the contrary, Musk’s pronouncements are accepted as statements of self-evident truth. After years of hype about autonomous vehicles, there’s a sense not only here at the conference but also among the general public that driverless cars are a done deal.

The next morning, I run Musk’s remarks past Steve Shladover, a stalwart transportation researcher at the University of California, Berkeley. “Ridiculous,” he mutters as he rolls his eyes. Shladover is the anti-Musk. His vibe is less hip technocrat, more pocket-protector science guy. But unlike Musk, he’s been studying automated vehicles for 40 years. In 1997, he spearheaded stunning demonstrations of driverless cars on the I-15 freeway in San Diego, and he’s now working on big rigs that use vehicle-to-vehicle communications to draft each other in highway platoons.

Contrary to Musk and many of the most prominent advocates of autonomous cars, Shladover insists that so-called Level 5 vehicles—robocars that require no human input—are not on the horizon. “I tell adult audiences not to expect it in their lifetimes. And I say the same thing to students,” he says. “Merely dealing with lighting conditions, weather conditions, and traffic conditions is immensely complicated. The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”

Despite headlines and a zeitgeist reinforcing the notion that driverless cars are inevitable, there is no clear roadmap for how we’re going to get from contemporary automobiles offering autonomous aids—adaptive cruise control, lane-following systems, and so on—to vehicles that require no human input other than providing a destination. Major challenges loom on the technological, political, economic, and legislative fronts. There’s widespread disagreement about how autonomous systems will work, how they will integrate into a mature transportation system that already embraces more than a billion nonautonomous vehicles, and even what the true benefits will be.

If auto manufacturers want to play in this field, they’re going to be responsible for everything that happens on autopilot.

Safety is invariably cited as the most compelling argument in favor of autonomy. More than 30,000 Americans are killed annually in car crashes, and human error is a factor in roughly 90 percent of them. That’s not good, obviously. But it shouldn’t obscure the fact that human drivers, despite marginal training and myriad distractions, do a remarkably good job of avoiding accidents. Shladover calculates the number of fatalities at roughly one for every 3 million hours of driving. Would fully automated vehicles perform any better than humans? That’s up for debate. “The old ‘blue screen of death’ won’t just be a figure of speech anymore. It could mean that somebody actually dies,” he says. “Silicon Valley people don’t understand the automobile industry. You can’t throw something on a vehicle and let your customers beta-test it for you. It just doesn’t compute.”
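To sanity-check that figure, here is a quick back-of-the-envelope calculation. The inputs are rounded assumptions of my own (annual deaths, miles traveled, and average speed), not numbers from Shladover, but they land in the same ballpark.

```python
# Back-of-the-envelope check of the "one fatality per ~3 million hours" figure.
# The inputs below are rough, assumed round numbers, not data from the article.

annual_fatalities = 33_000       # assumed: U.S. traffic deaths per year
annual_vehicle_miles = 3.0e12    # assumed: U.S. vehicle-miles traveled per year
average_speed_mph = 30           # assumed: blended urban/highway average speed

annual_driving_hours = annual_vehicle_miles / average_speed_mph
hours_per_fatality = annual_driving_hours / annual_fatalities

print(f"Driving hours per fatality: {hours_per_fatality:,.0f}")
# With these assumptions, roughly 3 million hours of driving per fatality.
```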

The machines that are supposed to squire us around in faultless safety aren’t going to be controlled by some magically capable, as-yet-undeveloped form of artificial intelligence but by millions of lines of code written by very human programmers. Do not expect perfection. As Stanford University mechanical engineering professor Chris Gerdes puts it, “You’re not going to eliminate human error. You’re just shifting it from the driver to the programmer.”

Most drivers would love to turn on auto mode and tune out, but cars like Stanford’s Audi TTS still need a trunk full of gear to operate.

Gerdes doesn’t have a gripe with driverless cars. As director of the Center for Automotive Research, he championed the development of a supremely cool autonomous Audi TTS that laps racetracks as quickly as race drivers. But despite all the headline-making progress, he says, “The more research we do with human beings, the more impressed I am with our eyes and our brains.”

The magnitude of the challenges facing autonomous vehicles becomes clearer when you consider all the finely nuanced decisions we make on a second-by-second basis. When we see an oncoming car drifting into our lane, we don’t immediately default to DEFCON 1 because we realize a head-on collision probably isn’t imminent. When we see a ball bounce into the street, we’re extra vigilant because we know a child might be a few steps behind it.

Maybe—and it’s a big maybe—human perception can be programmed into a car. But in one critical way, autonomous vehicles are fundamentally different from those driven by human beings. When we’re faced with a situation where an accident is unavoidable, we react instinctively. We may be right, or we may be wrong, but we haven’t decided what to do ahead of time. An autonomous car, on the other hand, will respond according to its software, and it’s therefore fair to evaluate the moral reasoning embedded in its programming.

“The ethics are different for humans and machines,” says Patrick Lin, a philosophy professor at California Polytechnic State University, San Luis Obispo. “For example, a military robot doesn’t have to shoot back at an enemy because there’s no imperative for self-preservation. If auto manufacturers want to play in this field, they’re going to be responsible for everything that happens on autopilot. Programmers will have to anticipate every problem and code for it. But when you face a true ethical dilemma, there is no consensus on the correct answer.”

Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research, offers a textbook illustration of the paradoxes that await in the brave new world of autonomous transportation. Say your robotic car finds itself in a worst-case situation where it has to hit either a motorcyclist wearing a helmet or one riding bareheaded. Logic suggests that the biker without headgear is less likely to survive a crash. But choosing to nail the more responsible rider seems like a perverse outcome.

This sounds like the esoteric stuff of a philosophy seminar or a Netflix science-fiction series. But in fact, Google recently received a patent for software that evaluates maneuvers on the basis of the probability and severity of an accident that might result. Eventually, it will mean assigning values for various types of injuries—say 20 points for killing a pedestrian as opposed to 18 for maiming three passengers. No doubt, this information will be fair game when product-liability lawsuits are litigated.
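The patent itself isn’t quoted here, but the underlying idea of weighing maneuvers by probability and severity is easy to sketch. In the toy example below, every maneuver name, probability, and severity score is invented for illustration; it is a generic expected-cost calculation, not Google’s actual method.

```python
# A minimal, hypothetical sketch of risk-weighted maneuver selection: score
# each candidate maneuver by expected harm (probability x severity) and pick
# the one with the lowest expected cost. All numbers are invented.

candidate_maneuvers = {
    "brake_hard":   [(0.10, 6.0)],               # (probability, severity) pairs
    "swerve_left":  [(0.05, 9.0), (0.20, 2.0)],  # e.g., oncoming car, own passengers
    "swerve_right": [(0.30, 4.0)],               # e.g., parked vehicle
}

def expected_cost(outcomes):
    """Sum of probability-weighted severities for one maneuver."""
    return sum(p * severity for p, severity in outcomes)

best = min(candidate_maneuvers, key=lambda m: expected_cost(candidate_maneuvers[m]))
for name, outcomes in candidate_maneuvers.items():
    print(f"{name}: expected cost {expected_cost(outcomes):.2f}")
print("Chosen maneuver:", best)
```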

Speaking of lawyers, they aren’t getting lathered up over the ethics issue, at least not yet. The general consensus in the legal community seems to be that the existing tort-liability system is flexible enough to deal with autonomous cars. This is good because the rise of the autonomous car is likely to inspire accident litigation. What’s more, these lawsuits are bound to involve large sums of money because, whenever possible, the manufacturer rather than the driver will be blamed. “Product-liability policy limits are much higher than typical auto-policy limits,” explains Hilary Rowen, a San Francisco attorney who specializes in insurance issues. “All parties to an accident lawsuit are going to have a predilection for it to be considered product liability. The accident victim wants a deeper pocket; the driver wants to avoid ‘at fault’ points that could increase insurance rates.”

Risk is inevitable with Level 3 autonomy, where drivers have to be ready to take control in emergencies. It would be best if they were paying attention to the road at the time. But the whole point of autonomy is allowing drivers to chill out while doing stuff other than driving, whether reading “War and Peace” or sexting. And it’s safe—or unsafe, as the case may be—to assume some drivers will zone out completely.

“If the driver is disengaged, I’ve seen one study that suggests that it can take as long as 7 seconds for him to become aware of the environment he’s operating in,” says Walter Sullivan, a human-machine interface expert at Elektrobit, a Finnish engineering company specializing in wireless and automotive tech. Imagine the carnage a 2-ton vehicle can wreak during 7 uncontrolled seconds on a freeway or in a city center. Such scenarios are why some advocates of autonomous vehicles, notably the folks at Google, want to avoid the risks of Level 3 vehicles by leapfrogging directly to Level 4, where the vehicle comes to a full stop for you.
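To make that seven-second window concrete, a simple calculation shows how much ground a car covers before a distracted driver is back in the loop. The speeds used below are assumed round numbers, not figures from Sullivan or Elektrobit.

```python
# How far does a car travel during a 7-second handover? Speeds are assumed
# round numbers for illustration, not figures from the article.

MPH_TO_MPS = 0.44704
handover_seconds = 7

for speed_mph in (30, 65):  # assumed city and freeway speeds
    speed_mps = speed_mph * MPH_TO_MPS
    distance_m = speed_mps * handover_seconds
    print(f"At {speed_mph} mph, the car covers about {distance_m:.0f} meters "
          f"({distance_m * 3.28:.0f} feet) before the driver is back in the loop.")
```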

Daimler Trucks North America recently unveiled a Freightliner big rig that’s the first truck approved to drive autonomously in the United States. But president/CEO Martin Daum emphasizes—repeatedly—that the driver must remain alert and ready to grab the steering wheel at all times, as with other Level 3 autonomous systems, and the company has no plans to explore Level 4 autonomy during the next decade. As for driverless trucks, Daum insists that the notion is completely off the table. As he puts it: “The human brain is still the best computer money can’t buy.”

Wearing a unique red license plate, the Freightliner Inspiration will be tested autonomously on limited stretches of interstate highway in Nevada, which in 2011 became the first state to adopt regulations for robotic vehicles. In California, the site of the lion’s share of American robocar activity, testing regulations went into effect last year. But the state has already missed the deadline for issuing regulations for vehicles that can be sold to the public. And, says Bernard Soriano, deputy director of the California Department of Motor Vehicles, “I really can’t give you a date as to when we’re going to be finished.”

The major sticking point is the lack of standards governing the validation of autonomous systems. The National Highway Traffic Safety Administration has shown no interest in promulgating regulations, which leaves the ball in the manufacturers’ court. “The technology is going to beat the law,” says Bryant Walker Smith, a law professor at the University of South Carolina School of Law. “Typically, federal mandates for new technology arrive only after it dominates the market.” (See seat belts, air bags, backup cameras.) But Smith says the onus will be on automakers to convince not only regulators but also consumers that their systems are safe.

Good luck with that. Like any disruptive technology, autonomous vehicles come with a lot of scary baggage and raise a host of perplexing questions. How will they be secured against hackers bent on turning them into weapons? How well will they play with one another and vehicles driven by human beings? Is there enough public support and taxpayer money to create dedicated lanes or restricted zones for autonomous vehicles? Ultimately, do we want a transportation system that turns us into nothing more than cargo?

Autonomous systems shouldn’t be feared or resisted. Used appropriately, they’ll make driving safer and less of a burden. They make sense in low-speed and self-contained environments where vehicles can simply come to a stop if something goes wrong and on smoothly flowing stretches of highway. So look for driverless cars on campuses and at airports. Urban jitneys in city centers. Driverless valet parking. Hands-off driving in traffic jams and on some freeways. But for the next few decades, it’s highly likely that the only vehicles able to whisk us from Point A to Point B while we doze will still be taxis and limos. Who would have thought a Lincoln Town Car summoned via an Uber app would be the car of the future?

Transportation—On Demand

When automobiles first appeared, they were called horseless carriages because nobody recognized their transformative potential. Similarly, most people think of autonomous vehicles simply as driverless cars when in fact they potentially represent a revolutionary new form of transportation.

“Don’t think of them as cars; think of them as computers on wheels,” Brad Templeton says. “They’ll be a service like your phone, which you throw out every few years even though it’s perfectly good.”

Templeton, a Silicon Valley software engineer and serial entrepreneur, believes driverless cars—if they become commonplace—will transform not only personal mobility but also the world we live in by creating a powerful system of on-demand transportation.

“Today, when a person walks into a dealership, the basic question is, ‘What kind of car do I need for my life?’ ” he says. “When you move to a world where you have vehicles on demand, the question becomes, ‘What vehicle do I need today?’ ”

Templeton foresees tiny pods for crosstown travel, sedans for family outings, RVs for road trips, robotrucks for transcontinental shipping, delivery bots for retail deliveries, and so on. Since traditional selling points such as acceleration, handling, and cargo space will no longer be prized, designers will focus on creating superefficient, ultra-comfortable vehicles optimized for functions ranging from work to entertainment to sleeping—everything other than driving.

Not that Templeton believes human drivers will be outlawed. “There will always be people who drive recreationally,” he says. “After all, there are people who still ride horses even though they don’t use them for long-distance transportation.”

How Cars Will “See” Like a Human

Robotic drivers will behave much like their human counterparts: They will see the road (with sensors), plan their routes (using algorithms), and manipulate the brakes, throttle, and steering wheel to act accordingly (through the onboard computer).

The third leg of this triad—physically driving the vehicle—will be the easiest to implement since virtually all controls are already linked electronically. Mimicking the eyes and brain of a human is a much more complicated proposition.
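That see-plan-act division maps onto the classic control loop used in robotics. The sketch below is a generic outline of that loop, with stubbed, hypothetical sensor and actuator interfaces; it is not any manufacturer’s software.

```python
# A generic sense-plan-act loop, the structure described above: sensors stand
# in for the eyes, a planner for the brain, and actuator commands for the
# hands and feet. All interfaces here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Perception:
    obstacles: list          # detected objects with positions and velocities
    lane_center_offset: float

@dataclass
class Command:
    steering_angle: float    # radians
    throttle: float          # 0..1
    brake: float             # 0..1

def sense(raw_sensor_frame) -> Perception:
    """Fuse camera/radar/lidar data into a model of the scene (stubbed)."""
    return Perception(obstacles=[], lane_center_offset=0.0)

def plan(perception: Perception) -> Command:
    """Decide what to do: here, just steer back toward the lane center."""
    steering = -0.1 * perception.lane_center_offset
    return Command(steering_angle=steering, throttle=0.2, brake=0.0)

def act(command: Command) -> None:
    """Send commands to the drive-by-wire interfaces (stubbed as a print)."""
    print(f"steer={command.steering_angle:+.3f} throttle={command.throttle} brake={command.brake}")

def control_loop(sensor_frames):
    for frame in sensor_frames:
        act(plan(sense(frame)))

control_loop(sensor_frames=[None] * 3)  # three dummy cycles for illustration
```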

Video cameras are the most obvious analog for the human eye. Cameras are cheap and simple. But the images they produce are compromised by weather conditions, and video isn’t very good at determining distance. So cameras are usually augmented by radar.

Radar works best detecting metallic objects in motion, and it isn’t affected by weather, which is why it’s already widely used in adaptive cruise control units. But it struggles to identify pedestrians and stationary objects, and it doesn’t offer the resolution required to accurately locate vehicles on a GPS-based map.

Lidar—an alternative take on radar using lasers to analyze light waves instead of radio waves—lacks radar’s range and, like cameras, is sensitive to weather conditions. But it creates a richly detailed 3-D map of the world it senses. The drawback is price: Top-of-the-line 64-laser rotating lidar systems retail for about $40,000, though prices will drop as the tech matures.

Still, lidar alone can’t do the job, so it’s deployed in a suite of sensors. For example, the Audi A7 that drove semi-autonomously from Silicon Valley to Las Vegas earlier this year carried two lidars, two short-range radars, four midrange radars, two long-range radars, four top-view cameras, one 3-D camera, and four ultrasonic sensors.

Software “fuses” data from these sensors into comprehensible form. Objects then must be located—accurately, to within a few centimeters—on a new generation of 3-D, high-def maps being developed by companies such as Nokia. “We’re the engine room of the system,” says John Ristevski, vice president of reality capture and processing for Nokia’s Here mapping unit.
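What fusing means in practice varies from system to system. One textbook approach is to weight each sensor’s estimate by how much it is trusted; the toy example below combines a hypothetical radar range and a hypothetical camera range by inverse variance, purely to illustrate the idea.

```python
# A toy illustration of sensor fusion: combine a radar range estimate and a
# camera range estimate by weighting each with the inverse of its variance.
# The measurements and uncertainties below are invented for illustration.

def fuse(estimates):
    """estimates: list of (value, variance). Returns (fused_value, fused_variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

radar_range = (42.0, 0.25)   # meters; tight variance: radar is good at distance
camera_range = (40.5, 4.0)   # meters; loose variance: video is poor at distance

value, variance = fuse([radar_range, camera_range])
print(f"Fused range: {value:.2f} m (variance {variance:.3f})")
# The fused estimate sits close to the radar value because radar is trusted more here.
```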

Although many cars are already equipped with highly capable route-planning algorithms, collision-avoidance software is still in its infancy. The standard approach is simply to avoid hitting anything in the path of travel. But driverless cars will demand more sophisticated algorithms to simulate the human brain’s nuanced thinking.
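That avoid-anything-in-the-path baseline can be boiled down to a simple time-to-collision check. The threshold and sample inputs in the sketch below are assumed values for illustration, not parameters from any production system.

```python
# A bare-bones version of the baseline approach: brake if the time-to-collision
# with the object ahead drops below a threshold. The 2.0-second threshold and
# the sample inputs are assumed, not taken from any product.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinite if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, own_speed: float, lead_speed: float,
                 threshold_s: float = 2.0) -> bool:
    return time_to_collision(gap_m, own_speed - lead_speed) < threshold_s

print(should_brake(gap_m=30.0, own_speed=25.0, lead_speed=20.0))  # TTC = 6 s -> False
print(should_brake(gap_m=8.0,  own_speed=25.0, lead_speed=20.0))  # TTC = 1.6 s -> True
```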

Considering the current state of artificial intelligence, it’s hard to imagine A.I. playing a major role in autonomous vehicles anytime soon. But the development of “neural nets” has enabled breakthroughs in machine learning, and there’s hope that driverless cars will carry image identification software that can distinguish a motorcycle from a bicycle and a pedestrian from an orangutan that’s just escaped from the local zoo.
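As a rough illustration of what image identification looks like in code, the sketch below runs an off-the-shelf pretrained classifier (torchvision’s ImageNet-trained ResNet-18) over a single photo. It assumes torch, torchvision, and Pillow are installed; street.jpg is a placeholder file name, and a real perception stack would use detection models trained on driving data.

```python
# A minimal sketch of image identification with an off-the-shelf pretrained
# network (torchvision's ResNet-18 trained on ImageNet). This only illustrates
# the idea of labeling what a camera sees; it is not a driving-grade system.
# "street.jpg" is a placeholder file name.

import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("street.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = torch.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probabilities, 5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {prob:.2%}")
```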

Robots already have demonstrated near-perfect car control, and they don’t get drunk, tired, bored, stressed, or distracted. So if humans can program machines to think in a meaningful way, they may just render human drivers not merely optional but obsolete.

F 015, Meet Pedestrian

We motored around in the Mercedes-Benz F 015 in a big, empty lot not long after the concept car’s introduction at the 2015 Consumer Electronics Show. It’s more transportation pod than car, and the interior space combines the ambience of a lounge with the utility of an office. But the F 015 also serves as a showcase for Mercedes’ vision of how cars and people will interact by 2030. As Mercedes sees it, in high-density urban and suburban areas, traffic barriers will fall and vehicles will literally occupy the same space as pedestrians. While shuttling people to their destinations, cars will feature onboard sensors that will read and interpret human gestures. Then, through the use of LED displays, the car will make its intentions clear to pedestrians. Mercedes engineers tell us that the cameras, sensors, and hardware required to make the F 015 fully autonomous aren’t much different from what is available on today’s S-Class. The big step, they say, lies in combining far greater computing power with the algorithms required to help the machine understand the body language of us unpredictable biological life forms.

“Every second year you will see the next significant step in our approach toward fully autonomous cars,” says Daimler CEO Dieter Zetsche, “and this would go with major car launches that we are planning. We want to lead that development, though people expect for us to be responsible with what we are doing.”

Bobby Takes Hockenheim

Audi evaluated a recent effort in the autonomous field in typical German fashion: It went to a racetrack. In October 2014, the 560-hp Audi RS 7 Piloted Concept reached nearly 140 mph while lapping Germany’s Hockenheim circuit. Two cars had been prepared, one called “Bobby” to celebrate three-time Indianapolis 500 champion Bobby Unser, and the other named “AJ” to honor four-time Indy 500 champion A.J. Foyt Jr. No one was in Bobby while it recorded its lap. Why is Ricky Hudi, Audi’s executive vice president for electronic development, so sure that obstacles to autonomous driving will be overcome? “It will save lives,” he says.

Cross Country, (Almost) Hands-Free

In a promotion for Tier 1 supplier Delphi’s new all-in-one computer processor that can integrate safety functions with autonomous driving capability, six Delphi engineers drove from San Francisco to New York in March 2015 in a specially equipped Audi SQ5. The engineers reported that the car did the driving for all but 50 miles of the nine-day, 3,400-mile trek. Motorists in the vicinity weren’t always pleased since the car stuck to posted speed limits (who drives like that?), but things went smoothly. The Delphi Audi relied on six long-range radars, four short-range radars, three vision-based cameras, six lidars (laser-based radar systems), a localization system, intelligent software algorithms, and a full suite of active safety systems—a kitchen sink’s worth of tech that underscores how much extra equipment automated cars will likely need. Delphi is also testing a system that could detect an open adjacent lane so the car can safely pass slower traffic in its own lane.

Super Cruisin’

Cadillac’s Super Cruise (which might be renamed) is a semi-autonomous feature that Cadillac says will be available as early as the 2017 model year. Super Cruise “will behave like adaptive cruise control in that you set the speed when you get on the freeway,” says John Capp, director of electric and control systems research for General Motors. Unlike adaptive cruise control and lane keeping assist systems already on the market, Cadillac says you will be able to drive hands-free until you change lanes or head toward an exit, though you will have to remain alert and “in control” of the car at all times. Cadillac’s haptic lane-departure system in the driver’s seat will warn you when it thinks you must regain control. GPS will determine whether you’re on an automation-worthy road, and weather will play a role in whether the system can be activated.

Drive Me (31 Miles)

The next step toward Volvo’s goal of no injuries or deaths in its cars by 2025 is its upcoming Drive Me program. One hundred Volvo XC90 customers in Gothenburg, Sweden, will be given vehicles equipped with seven radar sensors including four corner radars, a trifocal camera, and cloud-based vehicle-to-vehicle mapping to allow a 50-kilometer (31-mile) stretch of autonomous driving. Drive Me will not require the driver to pay attention. “Drivers do not have to supervise and can do other things,” program director Marcus Rothoff says.