When Chris Urmson talks about driverless cars, everyone should listen. This has been true throughout his career, but it is especially true now.
Few have had better vantage points on the state of the art and the practical business and engineering challenges of building driverless cars. Urmson has been at the forefront for more than a decade, first at Carnegie Mellon, then as a leader of Google’s self-driving car program and now at his startup, Aurora. His recent lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode.
The full lecture is worth watching. Here are six takeaways:
1. The toll from vehicle accidents is even greater than reported.

Much of the carnage due to vehicle accidents is easy to measure. In 2015, in the U.S. alone, 35,092 people were killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.
Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” Fender bender results.
While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.
2. Human factors are “the heart of the problem.”

The choices made by driverless cars are critically dependent on understanding and matching the expectations of human drivers. This includes both the humans in operational control of the cars themselves and the human drivers of other cars. For Urmson, the difficulty of doing this is “the heart of the problem” going forward.
To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)
Google's Bus Crash

[caption id="attachment_25868" align="alignnone" width="530"]
Google Car Crashes With Bus; Santa Clara Transportation Authority[/caption]
In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (because of sandbags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought, “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.
Uber's Arizona Rollover
[caption id="attachment_25869" align="alignnone" width="530"]
Uber Driverless Car Crashes In Tempe, AZ[/caption]
The Uber SDC was in the leftmost of three lanes. Traffic in the two lanes to its right was stopped because of congestion. The Uber car’s lane was clear, so it continued at a good pace.
A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. Its driver probably could not see past the blocked lanes into the Uber car’s lane and, given the stopped traffic, expected that anything coming down that lane would be moving slowly. The car pulled into the Uber car’s lane to make the turn, and the result was a car parked on its side.
See also: Who Is Leading in Driverless Cars?
Tesla's Deadly Florida Crash
[caption id="attachment_25870" align="alignnone" width="530"]
Tesla Car After Fatal Crash in Florida[/caption]
The driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.
Tesla, to its credit, has made modifications to improve the car’s understanding of whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety check against the car’s inadequacies.
3. Incremental driver assistance systems will not evolve into driverless cars.
Urmson characterized “one of the big open debates” in the driverless car world as Tesla’s (and other automakers’) approach vs. Google’s. The former says, “Let’s just keep making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter says, “No, these are two distinct problems. We need to apply different technologies.”
Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and trust that the car will have no one to take control. The incremental approach, he argues, leads developers toward a set of technologies that will limit their ability to bridge over to fully driverless capability.
4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.
The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.
Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”
Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.
5. The “mad rush” is justified.
Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.
Urmson points to the interaction among automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:
3 trillion VMT × $0.10 per mile = $300 billion per year
In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue, just in the U.S.
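The arithmetic behind Urmson’s equation is easy to sanity-check. A back-of-the-envelope sketch, using the rounded 3 trillion-mile figure from the talk:

```python
# Back-of-the-envelope check of Urmson's TaaS revenue equation.
# Figures are from the talk: roughly 3 trillion vehicle miles traveled
# (VMT) per year in the U.S., at a hypothetical charge of $0.10 per mile.
vmt_per_year = 3e12          # U.S. vehicle miles traveled, rounded down
revenue_per_mile = 0.10      # dollars charged per mile (hypothetical)

annual_revenue = vmt_per_year * revenue_per_mile
print(f"${annual_revenue / 1e9:.0f}B per year")  # prints $300B per year
```

Using the actual 3.2 trillion-mile figure pushes the total even higher, which is why small per-mile margins translate into such a large addressable market.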
This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion, roughly equal to the combined market value of GM, Ford and Chrysler. Urmson predicts that one of these two clusters will see its market value double in the next four years. The race is to see who reaps that increased value.
See also: 10 Questions That Reveal AI’s Limits
6. Deployment will happen “relatively quickly.”
To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.
“You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.”
(Based on recent Waymo announcements, Phoenix seems a likely candidate.)
Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.
Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”