Self-Driven to Distraction – Annie Duke for Smerconish.com


In late March, there were two tragic auto accidents, five days apart, both involving autonomous cars. In Tempe, Arizona, on the night of March 18, an autonomous Uber vehicle hit and killed a pedestrian pushing a bicycle outside the crossing lines. On the morning of March 23, on a highway near Mountain View, California, a Tesla Model X in Autopilot mode, with its owner in the driver’s seat, collided with a roadside barrier, killing the driver.

The two accidents have already had an impact on continued testing of autonomous-driving technology. Uber immediately announced it was suspending testing of its self-driving cars in Tempe and in three other cities. Toyota and Nvidia also decided to temporarily suspend testing self-driving technology on public roads.

The massive media coverage of the accidents, especially the Arizona pedestrian death (believed to be the first involving an autonomous vehicle), has been accompanied by a presumption that if someone dies in an accident involving a driverless car, the technology is not safe enough.

But what does “safe enough” mean?

Much of the coverage, including pieces in The New York Times, The Washington Post, and USA Today, has omitted what seems to be the relevant comparison: how does the safety of driverless vehicles compare to the safety of human-driven vehicles? In fact, the dangers of human drivers have barely been mentioned in the news coverage evaluating self-driving technology following the two accidents.

Senator Richard Blumenthal was quoted in the New York Times on March 19 saying, “This tragic incident makes clear that autonomous vehicle technology has a long way to go before it is truly safe for the passengers, pedestrians, and drivers who share America’s roads.”

A week after Uber grounded its self-driving cars, Arizona Governor Doug Ducey ordered Uber to suspend testing in the state. In a letter to Uber’s CEO, Ducey wrote that he expected public safety to be the top priority for operators of this technology.

“The incident that took place on March 18 is an unquestionable failure to comply with this expectation,” he wrote. Specifically, he found the police video of the moments leading up to the accident “to be disturbing and alarming, and it raises many questions about the ability of Uber to continue testing in Arizona.”

Neither Blumenthal nor Ducey mentioned the safety comparison to the status quo technology already on the road: cars driven by humans.

The reaction to these accidents raises a question: why are we, by comparison, so tolerant of auto fatalities caused by human drivers, so much so that those deaths barely come up in the safety discussion about autonomous vehicles?

The first pedestrian death from a self-driven car occurred on March 18, 2018. Based on the roughly 6,000 pedestrians killed per year by human-driven cars (according to NHTSA’s final 2016 data), there were probably 15 other pedestrian deaths that day, and the day before, and the day before that, and so on. Since the introduction of the automobile, vehicular accidents have killed more than 3,000,000 people in America, more American deaths than in all of the nation’s wars over that period.
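That daily estimate is just simple arithmetic on the annual figure. Here is a minimal sketch of the calculation, assuming only the roughly 6,000-per-year NHTSA number cited above:

# Rough daily pedestrian-death estimate from the annual figure cited above
# (about 6,000 pedestrian deaths per year in the U.S., per NHTSA's 2016 data).
annual_pedestrian_deaths = 6_000
per_day = annual_pedestrian_deaths / 365

# Roughly 16 per day, i.e., about 15 deaths besides the one in Tempe.
print(f"Approximately {per_day:.0f} pedestrian deaths per day")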

Yet we reacted to the two fatalities involving self-driven vehicles by pulling the vehicles off the road, and some innovators of this technology have suspended or postponed tests. Arizona, which worked hard to wrest the testing of these vehicles away from California, has now reversed course after the fatality.

A survey of California voters about self-driving cars found that, by a two-to-one margin, Californians were saying “not in my neighborhood.” (And that survey was conducted the day before the California accident.)

The death of anyone in a vehicular accident is a tragedy. But why do we seem to have so much tolerance for vehicular accidents in general but not accidents involving these new technologies?

Innovation is part of the answer.

First, when something goes wrong following an innovative choice, it becomes a focus of news coverage, and news coverage is one of the cues people use to estimate how often something happens. The more news coverage, the more we inflate the danger.

Second, the status quo choice (human-driven cars) has risks that we overlook because those risks are already baked into the decision to drive. Since the beginning of the 20th century, there has been a consensus around the choice of transportation by human-driven cars. There’s a tradeoff, but we don’t think much about it because we’ve already concluded the tradeoff is worth it.

That makes it very difficult to make a rational comparison between the status quo choice and the innovative choice. If we’re not careful, we overrate the cost of the innovative choice (because of its amplified publicity) and underrate the cost of the status quo choice, because we don’t dwell on that cost once we’ve reached consensus.

The effect? We tolerate bad outcomes from status quo choices, but not from innovative choices. And rather than prompting an analysis of whether the new technology is statistically safer than human-driven vehicles, the bad outcome creates a presumption that the choice to put self-driving cars on the road must have been a bad one.

This bias, equating the quality of the outcome with the quality of the decision, is called resulting. Resulting is a particularly strong bias when interpreting outcomes from innovative choices, and because of this tendency, it is one of the biggest contributors to slowing the pace of innovation.

Resulting is particularly exacerbated when a decision, strategy, or technology is new, because it feels like we understand the status quo choice but we don’t really understand the innovative choice, especially when the innovation involves “choices” made by technology rather than by humans. We know how a human being thinks and makes decisions about driving. But a driverless vehicle is a black box. We don’t understand how it works, and we can’t assign intentions to it. Because we don’t understand how it makes choices, when an outcome is bad we link that outcome to the technology being bad. We react against the technology instead of taking a step back and asking whether these cars are safe compared with the status quo technology.

So how do self-driving cars compare statistically?

We know that human-driven vehicles are responsible for nearly 6,000 pedestrian deaths per year. Likewise, we know that 31,000 vehicle occupants die annually in human-driven vehicle accidents. A reasonable analysis would compare the accident rates per mile driven for both kinds of vehicles.

Tesla, in statements on March 27 and March 30, claimed that its Autopilot technology had a previously perfect safety record on the stretch of road where the California accident occurred and a favorable record for comparative safety versus human-driven cars. Drivers of its cars “have driven the same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015 and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of.” It also cited an NHTSA Office of Defects Investigation calculation, based on airbag deployments, finding “that Autopilot reduces crash rates by 40%.” Tesla reminded us, in one of its statements, that there is one fatality for every 320 million miles in vehicles equipped with Tesla’s Autopilot, compared with one fatality for every 86 million miles for all vehicles in the U.S.
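To put the figures Tesla cited on a common per-mile footing, here is a minimal back-of-the-envelope sketch. It takes Tesla’s own numbers at face value (one fatality per 320 million miles versus one per 86 million), which, as discussed below, should not be accepted uncritically:

# Compare the fatality rates Tesla cited, expressed per 100 million miles
# (a common traffic-safety unit). These are Tesla's figures, not an
# independent, apples-to-apples analysis of the two technologies.
autopilot_miles_per_fatality = 320_000_000    # vehicles equipped with Autopilot
all_vehicles_miles_per_fatality = 86_000_000  # all vehicles in the U.S.

autopilot_rate = 1e8 / autopilot_miles_per_fatality        # ~0.31 per 100M miles
all_vehicles_rate = 1e8 / all_vehicles_miles_per_fatality  # ~1.16 per 100M miles

print(f"Autopilot-equipped Teslas: {autopilot_rate:.2f} fatalities per 100M miles")
print(f"All U.S. vehicles:         {all_vehicles_rate:.2f} fatalities per 100M miles")
print(f"Cited rate is roughly {all_vehicles_rate / autopilot_rate:.1f}x lower for Tesla")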

These statistics may not resolve the question of which form of driving is safer, but they start to frame the question in the proper way. Tesla’s numbers on relative safety shouldn’t be accepted uncritically, but neither should such comparisons be ignored. The focus should be on making a rational comparison and avoiding an overreaction to a conspicuous, tragic event.

There was a form of that overreaction following the September 11 terrorist attacks. Air travel wasn’t an innovative technology, but the terrorists’ use of airplanes as weapons was new. The amplified coverage of that tragedy convinced enough people that air travel was too dangerous that many chose to drive instead of fly, even though flying is statistically much safer than driving. The result was a spike in traffic fatalities.

Left to our own devices, we are sometimes incapable of comparing alternatives even when we see them with our own eyes. Much of the coverage of the Uber accident has pointed out that the video of the accident shows the self-driving system didn’t detect the pedestrian in the road. If you watch the video, though, it seems the pedestrian was essentially invisible to human eyes as well as to the autonomous system. Undeniably, if or when self-driving cars can spot things humans can’t, that will be a fantastic benefit of the technology. We shouldn’t discourage development of the technology merely because it can miss things that human drivers can also miss.

We need to be careful to do the right kind of analysis rather than simply react to the intense coverage of two tragic accidents. We should be thinking about the proper comparisons for the purposes of public policy. Otherwise, innovation is going to suffer.


Originally published on Smerconish.com