Opinion

Elon Musk: Facts and Accountability are Totally Boneheaded. Next.

Tesla CEO browbeats media and analysts while dodging questions and responsibility

If you’re one to base your decisions and actions on evidence rather than providence, these are hard times for understanding certain facts—times made all the harder by departures from present technological reality like those made by Tesla CEO Elon Musk this week during Tesla’s first-quarter earnings conference call.

Investors initially responded favorably to Wednesday’s call, thanks in part to the news of Tesla losing less money than expected (shares were up about 2 percent on a loss of $3.35 per share, less than the expected $3.54 loss). But after-hours trading saw Tesla’s stock tumble nearly 5 percent, and the negative trend continued when the markets opened on Thursday.

Why the downturn? Musk’s dismissive attitude toward key concerns could be at least partly to blame. Antonio M. Sacconaghi of Sanford C. Bernstein & Co. LLC pressed Musk on capital expenditures projections and near-term capital requirements—legitimate concerns for investors, especially given that Tesla’s historical performance and current ramp rate don’t jibe with Musk’s categorical conclusions. This prompted Musk to interject with (per Seeking Alpha’s transcript of the call), “Excuse me. Next. Boring bonehead questions are not cool. Next?”

The “next” that followed met a similar fate. Joseph Spak of RBC Capital Markets asked about existing Model 3 reservations, seeking a percentage of prospective buyers who’ve taken the next step of configuring their reserved vehicles. Musk again dismissed the question as “dry.” “They’re killing me,” he said.

Then Musk got up on his virtual soapbox and took a swipe at the media.

“The thing that’s tricky with autonomous vehicles is that autonomy doesn’t reduce the accident rate or fatality rate to zero,” he said. “It improves it substantially, but the reality is that even though we think our—we think autonomy, even car autonomy reduces the probability of a death by 30 percent, which would be incredible because there’s like—broadly there’s over 1 million, I think 1.2 million automotive deaths per year.” (The U.S. Centers for Disease Control and Prevention in late 2016 estimated the number at 1.25 million.)

Taking Musk at his word, and assuming every car in the world were an autonomous Tesla, that would be a monumental saving of life and bodily injury: a 30 percent reduction in roughly 1.25 million annual deaths works out to about 375,000 lives per year. Of course, only a tiny handful of cars in the world are Teslas, or ever will be, even assuming the company one day achieves its targeted production rates. And of course, the vast majority of individuals even in the world’s most developed economies can’t afford a Tesla at today’s prices (so far, it seems not a single Model 3 has been sold anywhere near its $35,000 base price), let alone in the world’s less-developed economies. And of course, the roughly 1 billion cars already built and driving around in the world aren’t going to disappear overnight. So Musk’s 30 percent figure, even including eventual fully autonomous offerings from other makers, seems rather optimistic for the quite foreseeable future, even if the theory holds true.

“However but, if it’s an autonomous situation, it’s headline news, and the media fails to mention that—actually they shouldn’t really be writing the story, they should be writing the story about how autonomous cars are really safe, but that’s not the story that people want to click on,” said Musk.

In fact, however, many in the media have fed and continue to feed the narrative that says autonomous cars are really safe. But those stories are usually about how safe autonomous cars will be. Bluntly, those cars simply don’t exist yet.

“[I]t’s really incredibly irresponsible of any journalists with integrity to write an article that would lead people to believe that autonomy is less safe. Because people might actually turn it off, and then die. So anyway, I’m really upset by this,” Musk continued.

Automobile’s view is that it is incredibly irresponsible to conflate the future state of autonomous cars with the present reality. There is little doubt that, once fully sorted and tied into vehicle-to-vehicle and vehicle-to-grid communications systems, self-driving cars will be substantially safer than most human-driven ones. But that’s not the present. The present is an array of sensors and software designed to help a car assist—not replace—a human driver, and even then, only in limited circumstances.

The very name of Tesla’s Autopilot semi-autonomous driver-assist system suggests the achievement of a goal that’s very much still a work in progress—a reality made obvious by the fact that Tesla regularly pushes updates to its Autopilot software, adding or refining features. Not only does that subject Tesla owners (many of whom are more than willing to be Musk’s guinea pigs) to those tests, but there’s the additional and rather important point that anyone who shares the road with a Tesla on Autopilot that is not being used properly is also at risk. But from a marketing standpoint, “Not Quite Autopilot, So Watch Out!” doesn’t have quite the same ring as a name for the system.

Tesla at times has also shirked the responsibility it demands of the media. In its statement following the death of Walter Huang, a 38-year-old Apple engineer whose Model X crashed on Highway 101 near Mountain View, California, in March of this year, Tesla quickly pointed out that the condition of a highway barrier influenced the crash’s severity.

“The reason this crash was so severe is because the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced,” the Tesla statement read. OK, so the result of the crash was worse than it might have been, but that is a secondary issue for purposes of this discussion. The larger question, and the first one, is why the crash happened at all.

Since then, Tesla has been removed from the investigation into what actually happened to Huang. Why? Because, as the National Transportation Safety Board (NTSB) announced in a statement, “Releases of incomplete information often lead to speculation and incorrect assumptions about the probable cause of a crash, which does a disservice to the investigative process and the traveling public.” Tesla publicly blamed Huang for the crash before the investigation had even truly begun, noting that the Autopilot system had offered several warnings and that Huang’s hands weren’t detected on the steering wheel for the six seconds preceding the crash. Certainly, drivers are not supposed to use Autopilot in such a manner. But the circumstances and timeline laid out in the letter NTSB chairman Robert L. Sumwalt III wrote to Musk and Tesla explaining the carmaker’s removal from the investigation make clear just how far Tesla is willing to go to protect its image—to keep getting its version of “clicks.” That is hardly the level of responsibility Musk expects from the media.

Nevertheless, Tesla justified its actions under the banner of transparency, even as it attempted to shift the blame from Autopilot to the driver. Why? Because Level 5 autonomous cars don’t exist yet, not at Tesla or anywhere else, not even for cars equipped with its next-gen Enhanced Autopilot system. It’s posted right on Tesla’s website.

Of course, this is nothing new. After the first known fatality with Autopilot activated, a May 2016 crash near Williston, Florida, the NTSB’s findings were damning:

● The Tesla’s automated vehicle control system was not designed to, and could not, identify the truck crossing the Tesla’s path or recognize the impending crash. Therefore, the system did not slow the car, the forward collision warning system did not provide an alert, and the automatic emergency braking did not activate.
● The Tesla driver’s pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations.
● If automated vehicle control systems do not automatically restrict their own operation to conditions for which they were designed and are appropriate, the risk of driver misuse remains.
● The way in which the Tesla Autopilot system monitored and responded to the driver’s interaction with the steering wheel was not an effective method of ensuring driver engagement.
● Tesla made design changes to Autopilot following the crash. The changes reduced the period of time before the system issues a warning/alert when the driver’s hands are off the steering wheel. The changes also added a preferred road constraint to the alert timing sequence.

That last bullet point underscores the fact that Tesla has been using its drivers and those who share the road with them as test subjects as it continues to develop its not-quite-self-driving Autopilot system. Since then, Tesla has continued to update Autopilot, and it says it has remedied the problems mentioned above as well as new ones that have cropped up. But it hasn’t solved all of them, and ultimately, that’s the reason for the disclaimer: Tesla doesn’t want to take responsibility for the system being used the way its name implies, or the way Musk talks about it on quarterly earnings calls.

So tell us again about how the media needs to write dutifully about the future of a technology that doesn’t yet exist in order to fulfill its responsibility to the public. Or how analyst inquiries into matters like capital expenditure projections and near-term capital requirements are irrelevant. And of course, no one should ever dare mention the 40 kWh Model S variants that were never built, or the boatloads of $35,000 Model 3s that will (probably) never see the light of day for the same reason: lack of profitability. Yes, surely, these questions are too dry and boneheaded.
