
A Changing Vision for Driverless Vehicles

As plans for fully autonomous vehicles continue to get pushed back, the near future is beginning to look like it will revolve around a different acronym: more ADAS, less AV.

Autonomous vehicles, or AVs, will provide many of the technology breakthroughs that allow for advances in ADAS, or advanced driver-assistance systems, which will use a host of new sensors and AI to reduce accidents. But the vision of driverless robotaxis carrying us everywhere and making deliveries looks like it will have to wait a bit, except in carefully circumscribed areas — and maybe even there for a while yet.

The shift to ADAS from full AVs should soften the near-term effects on auto insurers, which have feared a loss of business in a world where individuals aren’t responsible for driving. At the same time, the shift may increase the cost of repairing expensive electronics when accidents occur.

The new focus on ADAS is by no means a statement that the full AV revolution won’t happen. The progress by AVs has been nothing short of astounding since DARPA, a research arm of the Department of Defense, offered a $1 million prize in 2004 in a contest among autonomous vehicles on a 150-mile course in the Mojave Desert. Most of the 15 vehicles chosen to participate were basically golf carts with sensors and computers strapped on to them, and more than half didn’t even make it out of sight of the starting line. The farthest any vehicle went was 7.4 miles. Just 17 years later, we have fleets of sleek-looking vehicles traveling city streets using AI and sensors — albeit still with a safety driver behind the wheel in just about all of them.

Progress will continue, too. A Brookings Institution study found that $80 billion flowed into AV technology investments between 2014 and 2017. That’s just the investments announced publicly and, of course, doesn’t count the prior investments or the money that has flooded into the field since 2017.

The issue hasn’t been that the AV technology doesn’t work — in most situations, an AV will perform better than the vast majority of human drivers. It’s just that the world around AVs has turned out to be more complex than initial plans allowed for. In particular, we humans do lots of unpredictable things as pedestrians and as drivers — and AVs aren’t allowed to make mistakes.

While we wait for full autonomy, though, plenty of opportunities have opened up to make driving safer, a notion underscored by some recent multibillion-dollar price tags on acquisitions of ADAS companies.

Lidar sensors, governed by always-learning AI, can enhance automatic braking systems — and studies have found that cars are already more than 50% less likely to have a rear-end collision if equipped with such a system. Systems that keep cars centered in lanes will also improve as technology designed for full autonomy is deployed.

Increased communications capabilities designed for AVs will allow for better connections with roads and other infrastructure. When I rented a car last week while on vacation at the Jersey shore, I wasn’t sure what the speed limit was at one point, then realized that it was displayed on my dashboard based on some sort of radio signal from a speed limit sign I’d missed. Cars will also be able to better communicate with each other. If a car slams on its brakes, it will be able to alert the stream of cars behind it so they can instantaneously begin braking, too. Further out, AV technology will even let cars communicate with each other in ways that let them essentially see around corners — even if you can’t see that a car is speeding through a red light and might broadside you, many other cars on the road can, and they’ll be able to alert yours to brake and avoid the danger.
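The chained-braking idea can be sketched in a few lines of code. This is a toy illustration only: the message fields, class names and broadcast loop are my own assumptions, not any real V2V standard (actual deployments use radio protocols such as DSRC or C-V2X with authenticated messages).

```python
# Toy sketch of a vehicle-to-vehicle hard-braking alert.
# All names and fields here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class BrakeAlert:
    sender_id: str      # car that braked hard
    position_m: float   # position along the road, in meters
    decel_mps2: float   # deceleration magnitude

@dataclass
class Car:
    car_id: str
    position_m: float
    braking: bool = False

    def on_alert(self, alert: BrakeAlert) -> None:
        # React only if the braking car is ahead of us.
        if alert.position_m > self.position_m:
            self.braking = True

def broadcast(alert: BrakeAlert, cars: list[Car]) -> None:
    # In a real system this would be a radio broadcast; here it's a loop.
    for car in cars:
        if car.car_id != alert.sender_id:
            car.on_alert(alert)

# A stream of cars behind a lead car that slams on its brakes:
cars = [Car("lead", 100.0), Car("follower1", 60.0), Car("follower2", 30.0)]
broadcast(BrakeAlert("lead", 100.0, 8.0), cars)
```

The point of the sketch is the fan-out: one braking event propagates to every trailing car at radio speed, instead of rippling back one human reaction time at a time.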

Technology developed for autonomous cars may also find earlier uses in autonomous trucks. Many are looking at having them operate in fully driverless mode on freeways, where vehicle traffic is far more predictable than on city roads and where pedestrians aren’t an issue. Human drivers would be staged at freeway exits, to ferry trucks to and from their final destinations and within cities. Makers of self-driving trucks say they can cut freight costs in half by removing the need for drivers on the freeway portion of long-haul routes.

I remain as optimistic as ever about the outlook for AVs. Since Chunka Mui and I wrote a book on driverless cars in 2013, progress was faster than we expected for a time and now is somewhat slower. As often happens with fundamental innovations like AVs, the development isn’t happening in a straight line. We’re winding up with hybrid forms of the technology in both cars and trucks before we get to the full effects. But we’ll get there.



Don’t Look Now, but Here Come Autonomous Trucks

While the focus for years has been on autonomous cars and on what they’ll do for safety, for auto insurance, for our lifestyles and more, a disruption is taking shape in the nearer term: autonomous trucks.

The fear factor has obscured that vision. While it is odd enough to drive down a street in Phoenix and see a Waymo minivan next to you without a driver, it’s hard to imagine anyone setting loose on a highway an 18-wheeler carrying 50,000 pounds without anyone at the wheel.

But we’re close.

If you can get the image of a hulking semi out of your mind, highway driving makes perfect sense. The issues that have slowed deployment of autonomous cars all relate to the vagaries of us humans. The technical problems related to snow, rain, fog, etc. have all been solved. But is that driver going to shove his way into the intersection even though he doesn’t have the right of way, or will he wait for the AV? What about that pedestrian walking against traffic? That bicyclist who seems confused? Does the ball that rolled into the street mean a little kid is about to follow it from behind the double-parked van? But highway driving takes away a huge number of the human variables — no pedestrians, no cyclists, far less merging with other vehicles.

Autonomous trucks can basically just get on a freeway and go straight until they need to get off. They solve a real problem, too. By law, truck drivers can only drive 11 out of every 24 hours. That means trucks, with valuable cargo, sit 13 out of every 24 hours. It also means that trucking companies are always short of drivers. But autonomous trucks would be able to go 24/7, cutting many trips in half and making trucking much more efficient.
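The arithmetic behind "cutting many trips in half" is easy to check. Here is a back-of-the-envelope comparison; the route length and average speed are illustrative assumptions, and the model ignores fueling and loading stops.

```python
# Rough comparison of a long-haul leg: an hours-limited human driver
# vs. a truck that can run around the clock. Distance and speed are
# illustrative assumptions.

def trip_hours_human(distance_miles, speed_mph, driving_hours_per_day=11):
    """Elapsed hours when driving is capped at 11 of every 24 hours."""
    drive_hours = distance_miles / speed_mph
    full_days = int(drive_hours // driving_hours_per_day)
    remainder = drive_hours - full_days * driving_hours_per_day
    return full_days * 24 + remainder

def trip_hours_autonomous(distance_miles, speed_mph):
    """Elapsed hours when the truck can drive 24/7."""
    return distance_miles / speed_mph

# Example: a 2,200-mile leg at an average of 55 mph (40 driving hours).
human = trip_hours_human(2200, 55)          # 3 full days plus 7 hours = 79 hours
autonomous = trip_hours_autonomous(2200, 55)  # 40 hours
```

On these assumed numbers, the human-driven leg takes roughly twice as long as the autonomous one, which is where the "cutting trips in half" claim comes from.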

The change would have broad implications, including for insurers that cover truck fleets and their cargo and for those that cover workers — some driver jobs would disappear, while others would morph to handle changes in loading and unloading, refueling and more. For instance, some drivers might become specialists in the “first mile” or “last mile,” taking an autonomous rig through complex city traffic out to the freeway or picking a rig up on the freeway and navigating it to its final destination, much as captains who know a harbor have long done with ocean-going cargo ships.

Change won’t happen overnight. Trucks still have to overcome that first-mile/last-mile problem — a high-profile startup shut down last year after trying to have drivers use virtual reality to take control of trucks whenever necessary. Autonomous trucks will also be more complicated mechanically than cars for the foreseeable future. While autonomous cars will all be fully electric, the batteries necessary to run trucks are so heavy that they cut into mileage too greatly, so autonomous trucks will not only need to have enough battery power to run all the sensors and computers but will still require an internal combustion engine.

Still, workarounds are developing, and autonomous trucks are making great progress. For instance, Locomation offers what it calls autonomous relay convoys, which combine two human drivers and two autonomous vehicles. Each human drives a rig to the freeway, where one then takes charge of driving. The other’s rig switches to autonomous mode and follows the lead rig, while the human in the trailing vehicle rests. When the first driver’s 11 hours are up, the rested driver takes over. Whenever the trucks need to split up to go to their individual destinations, the respective drivers simply take control.

These sorts of convoys would barely make a dent in the potential of autonomous vehicles, but they do solve a real problem, and they provide a start for the long adoption curve ahead of us.

While the idea of having 50,000 pounds barreling down the freeway without a human at the wheel may still be intimidating, think of it this way: Truck drivers are up so high that you rarely notice them, unless they’ve done something aggressive and you’re looking for the driver because he’s ticked you off. And AVs are by nature so cautious that they’ll rarely give you a reason to look for a driver at all. So you may not even notice the transition to automated trucks.



Who Is Liable When a Driverless Car Crashes?

Now that truly autonomous vehicles (AVs) are starting to appear on roads, the insurance industry will be called on to perform its usual role as an enabler of innovation: Insurers will quantify the risks and likely cover much of them.

But how should insurers think about the liability for AVs? Will legislatures specify who is responsible for which problems? Will regulators? Will the courts? What principles will guide the decision makers? Where will liability fall?

Using history as a guide, it’s possible to make reasonable guesses at some of the answers.

An interesting analysis in Fortune argues that the courts will set the rules, applying long-standing principles to try to sort through the issues in the new environment.

The process will thus be messy, and some of the arguments made in court will initially be idiosyncratic. The article notes that, in the 1930s and 1940s, people who were hit by hired taxis sometimes sued the passengers rather than the driver or the driver’s employer. That approach never got traction in the courts and seems silly today, but you can be sure that some similarly odd-sounding theories will be tried in AV cases before being discarded.

The article argues that clear principles will gradually emerge. One is obvious: that the manufacturer will be responsible for a clear error, the software equivalent of having a tire fall off a car. But two other standards are more subtle:

–A court will ask whether the AV performed better than a competent, average driver. That question may not apply just to the circumstances of the accident and the specific system or component that may have been involved in causing a collision but may also be a general question about the performance of the AV versus a human driver. The U.S. National Highway Traffic Safety Administration made that sort of general assessment of safety when it cleared Tesla’s Autopilot system of responsibility for a fatal crash in 2016. The temptation, of course, will be to compare an AV with a perfect driver — aren’t computers supposed to be free of error? Instead, the NHTSA is taking the position that anything that raises the average competence is a societal good. And a comparison to an average driver would be good news for the manufacturers of AVs and for those that insure them.

–The court will also ask whether an AV performed better than an AV did previously in a similar situation. A key promise of AVs is that they are always learning, and not just from an individual car’s experience but from what has happened to every car in the fleet. So, courts will hold manufacturers responsible for not making the same mistake twice.

The potential revenue for insurers from AVs is enormous. A recent report from Accenture and the Stevens Institute of Technology estimates that, even as AVs slash premium for personal auto coverage, product liability will be one of three new revenue streams that will generate $81 billion in premium between now and 2025. (The other two opportunities are in the new cyber risks that come along with AVs and in the potential liabilities associated with the infrastructure that will support AVs.)

The law will take shape slowly. It always does. There will be surprises along the way. There always are. But the size of the product liability opportunity, plus the beginnings of answers on legal principles, suggests that insurers should start working now to be prepared as the opportunity unfolds.

Stay safe.


P.S. Here are the six articles I’d like to highlight from the past week:

OnStar: Next Step for OEM Partnerships

Insurers hope to create a new way to collect driving data that’s easier for the driver than installing a device or downloading an app.

COVID-19 Is No Black Swan

There were clear warnings about COVID from credible institutions. The real issue is how we are going to deal with “grey rhinos.”

ESG: Doing Well by Doing Good

Insurance is at the forefront of the environmental, social and governance movement, which may usher in a Second Age of Enlightenment.

P&C Claims: 4 Themes for the Future

The extraordinary events of 2020 have accelerated four themes: automating operations; AI for insight; augmenting experts; and new ecosystems.

Advice to Early-Stage Startups on Pricing

Your pricing is a marketing tool that announces how you want potential clients to think of your offering.

How AI Transforms Risk Engineering

“AI could contribute more to the global economy by 2030 than the current output of China and India combined.”

15 Hurdles to Scaling for Driverless (Part 3)

This is the third part of a three-part series. You can find part 1 here and part 2 here.

Successful industrialization of driverless cars will depend on getting over many significant hurdles. Failure only requires getting tripped up by a few of them. In part two of this series, I outlined seven key hurdles to industrial-size scaling of driverless cars. Overcoming hurdles to scaling is not enough, however.

In this concluding article, I explore the challenges to broader market acceptance. I outline eight additional hurdles related to trust, market viability and managing secondary effects. All must be overcome for driverless cars to truly revolutionize transportation.

Trust. It is not enough for developers and manufacturers to believe their AVs are good enough for widespread use; they must convince others, too. To do so, they must overcome three huge hurdles:

8. Independent verification and validation. To date, developers have kept their development processes rather opaque. They’ve shared little detail about their requirements, specifications, design or testing. An independent, systematic process is needed to verify and validate developers’ claims of their AVs’ efficacy. Many are likely to demand this, including policy makers, regulators, insurers, investors, the public at large and, of course, customers. The best developers should embrace this—it would limit liability and distinguish them from laggards and lower-quality copycats.

9. Standardization and regulation. Industry standards and government regulation cover almost every aspect of cars today. Industrialization of driverless cars will require significant doses of both, too. Standards, especially those enforced by government regulation, ensure reliability, compatibility, interoperability and economies of scale. They also increase public safety and reduce provider liability.

10. Public acceptance. Most new products take hold by attracting early adopters. The lessons and resources from that initial success help developers “cross the chasm” to mainstream success. The industrialization of AVs will depend on much earlier and broader public acceptance. AVs affect not only the early-adopting customers inside them, but also every non-customer on and near the roads those AVs travel. Without widespread acceptance—including by those who would not choose to ride in the AVs—industrialization is not likely to be allowed.

See also: Where Are Driverless Cars Taking Industry?  

Market Viability. The next three hurdles deal with whether AV-enabled business models work in the short term and the long term, against both the competition and other opponents.

11. Business viability. Analyses of AV transportation-as-a-service (TaaS) business models are generally optimistic about the possibility of providing service for much less than the cost of human-driven services or personal car ownership. Current cost-per-mile estimates are nowhere near long-term targets, however. Most players are also underestimating the cost to scale. It remains to be seen whether rosy market plans will survive contact with the marketplace.

12. Stakeholder resistance. As the old saying goes, one person’s savings is another’s lost revenue. The industrialization of driverless cars will require overcoming the resistance of a large host of potential losers, including regulators, car dealers, insurers, personal injury lawyers, oil companies, truck drivers and transit unions. This will not be easy, as the potential losers include some of the most influential policy shapers at federal, state and local levels.

13. Private ownership. AV TaaS services are only a waypoint on the path to transformation of the private ownership market. If AVs are to revolutionize transportation, they will have to appeal to consumers who have long preferred to own their own cars. Privately owned cars account for the vast majority of all cars and all miles driven.

Secondary Effects. Technology always bites back. The industrialization of AVs could induce huge negative secondary effects. Most will unfold slowly, but two consequences are already concerning and must be addressed as part of the industrialization process.

14. Congestion. Faster, cheaper and better transportation will deliver greater economic opportunity and quality of life—especially for those who might otherwise not have access to it, like the poor, handicapped and elderly. But, it might also cause a surge in congestion by driving up the number of vehicles and vehicle miles traveled. This happened with ride-hail services, including Uber and Lyft. According to a recent study by the San Francisco County Transportation Authority, for example, congestion in the densest parts of San Francisco increased by as much as 73% between 2010 and 2016. The ride-hail services collectively accounted for more than half of the increase in daily vehicle hours of delay.

15. Job loss. Some argue that the history of technology, including transportation technology, shows that new services will create more jobs, not fewer. Few argue, however, that the new jobs go to those who lost the old ones. There’s no getting around the fact that every AV Uber means one less human Uber driver—even if other jobs are created for engineers, maintainers, dispatchers, customer service reps, etc. The same holds true for AV shuttles, buses, trucks and so on. Early AV TaaS providers will operate under an intense spotlight on this issue. Providers will have to anticipate and ameliorate potential public and regulator backlash on job loss.

* * *

There’s an old saying in Silicon Valley that one should never mistake a clear view for a short distance. The revolutionary potential of AVs is clear. Yet, we are still far from the widespread adoption needed to realize their benefits.

Don’t mistake a long distance for an unattainable goal, though. As a close observer, I am enthusiastic (and pleasantly surprised) by the progress that has been made on AV technology. Leading developers like Waymo, GM Cruise, nuTonomy and their diaspora have raced to build AVs and progressed faster than many, just a few years ago, thought possible.

See also: Driverless Cars and the ’90-90 Rule’  

Industrialization is a marathon, not a sprint. It depends on overcoming many hurdles, including the 15 I’ve laid out. The challenges of doing so are great—likely greater than many current players (and their investors) perceive and are positioned to address. New strategies are needed. A shakeout is likely.

That’s how innovation and market disruption work. That is why most contenders fail and why outsized rewards go to those who succeed. Whoever thought that a phone maker or a search engine company could be worth a trillion dollars? Is it outlandish to believe, as I still do, that driverless cars would be worth multiple trillions?

Driverless Cars and the ’90-90 Rule’

In programming circles, there is an aphorism known as the “90-90 rule.” It states that the first 90% of code accounts for the first 90% of the expected development time—and the remaining 10% of code takes another 90% of time. The rule is a tongue-in-cheek acknowledgement that technology projects always take longer than you expect, even when you know that they are going to take longer than you expect.

Sacha Arnoud, director of engineering at Waymo, recently used a variant of the 90-90 rule to characterize Waymo’s self-driving car program. Waymo’s experience, he said, was that the first 90% of the technology took only 10% of the time. To finish the last 10%, however, is requiring 10x the initial effort.

Arnoud’s remarks were given at a guest lecture at Lex Fridman’s MIT class on “Deep Learning for Self-Driving Cars.” He offered technical insights on the history of the Waymo program, how it is applying artificial intelligence and deep learning and how it is moving from demo to industrial-strength product.

The Waymo engineer’s lecture goes beyond most Waymo management presentations and press events. He provides vivid details on the complexity of the effort to date and insight on challenges to come—both for Waymo and for those trying to catch up to its pioneering efforts.

Here are 5 takeaways, though I recommend watching the entire presentation.

1. Industrialization requires 10x the effort.

Arnoud emphasized the large amount of work needed to go from a demo that works in a lab to an industrialized product that is safe to put on the road: “You need to 10x the capabilities of your technology. You need to 10x your team size, including finding effective ways for more engineers and more researchers to collaborate. You need to 10x the capabilities of your sensors. You need to 10x the overall quality of the system, including your testing practices.”

2. Deep learning enabled algorithmic breakthroughs.

Arnoud noted that deep learning techniques were much less advanced in 2010 when Google started its work on self-driving cars. But, in the years since, deep learning has advanced to enable algorithmic breakthroughs in several critical areas for autonomous driving, including mapping, perception and scene understanding.

Arnoud gave numerous examples, such as using deep learning to analyze street imagery to extract street names, house numbers, traffic lights and traffic signs. The ability to precompute such data and store them as maps in the car saves precious onboard computing power for real time tasks.
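The precompute-then-look-up pattern Arnoud describes can be sketched simply. The data and function names below are illustrative assumptions, not Waymo's actual pipeline: the expensive perception pass runs offline, and the car does a cheap map lookup at runtime.

```python
# Illustrative sketch of precomputing map features offline so the car
# only does a cheap lookup at runtime. Names and data are assumptions.

def extract_features_offline(imagery_tile):
    # Stand-in for a heavyweight deep-learning pass over street imagery,
    # run once in a data center rather than on the car.
    return {"speed_limit": 25, "traffic_light_at": (120.4, 88.2)}

# Offline: build the map once from raw imagery tiles.
raw_tiles = {"tile_0042": "raw imagery bytes"}
prebuilt_map = {
    tile_id: extract_features_offline(tile)
    for tile_id, tile in raw_tiles.items()
}

def lookup_map(tile_id):
    # Onboard, at runtime: a dictionary lookup instead of a model
    # inference, freeing compute for real-time tasks like detecting
    # pedestrians.
    return prebuilt_map.get(tile_id)

features = lookup_map("tile_0042")
```

The design choice is the same one Arnoud highlights: anything static about the world can be computed once and cached, reserving scarce onboard compute for what changes second to second.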

See also: When Will the Driverless Car Arrive?  

Deep learning is driving breakthroughs in real-time tasks as well, such as analyzing sensor data to identify traffic signals, other vehicles, obstacles, pedestrians, and so on. Deep learning capabilities also help in anticipating possible behavior of other drivers, cyclists and pedestrians, and driving accordingly.

3. Synergy with other Google units is key to Waymo’s progress.

Arnoud acknowledged the importance of Google’s “whole machine learning ecosystem” to Waymo’s progress. This includes the seminal software advances by the Google Brain team and ongoing collaboration with other Google teams working on deep learning at scale, such as in vision, speech, natural language processing and maps. The Google ecosystem also provides specialized infrastructure and tools for machine learning. This includes accelerators, data centers, labelled datasets and research that support Google’s TensorFlow programming paradigm.

4. Waymo’s testing program might be its secret sauce.

Arnoud emphasized that however great Waymo’s algorithms, sensors and overall package might be, driverless cars are still complex, embedded, real-time robotic systems that must work safely with imperfect data in an unpredictable world. He highlighted Waymo’s three-pronged testing program of real-world driving, simulation and structured testing as key to iterating on and productizing the technology.

Much is made of the millions of public-road miles that Waymo’s cars have driven autonomously. Arnoud described this as the equivalent of about 300 years of human driving experience and 160 times around the globe. Real-world driving is critical, he said, but what is more important is the ability to simulate.

Simulation is critical because it allows Waymo to test each new iteration of software against all previously driven miles. Even more important is the ability to test against “fuzzed” versions of those millions of miles, such as seeing how the software would handle cars going at slightly different speeds, an extra car, pedestrians crossing in front of the car and so on. Arnoud described Waymo’s simulation-based testing capability as the equivalent of 25,000 virtual cars driving 2.5 billion real and modified miles in 2017.
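The fuzzing idea is worth making concrete. Below is a minimal sketch, with an assumed scenario format of my own invention (Waymo's actual simulation formats and perturbations are not public): a single logged scenario is replayed many times with jittered speeds and occasional extra actors.

```python
# Illustrative sketch of "fuzzing" one logged driving scenario into many
# simulated variants. The scenario schema is an assumption for
# illustration, not Waymo's format.

import random

def fuzz_scenario(scenario, rng):
    """Return a perturbed copy of a logged scenario."""
    fuzzed = {
        "ego_speed_mps": scenario["ego_speed_mps"],
        "actors": [dict(a) for a in scenario["actors"]],
    }
    # Jitter every actor's speed by up to +/-10%.
    for actor in fuzzed["actors"]:
        actor["speed_mps"] *= 1 + rng.uniform(-0.1, 0.1)
    # Sometimes inject an extra crossing pedestrian.
    if rng.random() < 0.5:
        fuzzed["actors"].append(
            {"kind": "pedestrian", "speed_mps": 1.4, "crossing": True}
        )
    return fuzzed

logged = {
    "ego_speed_mps": 12.0,
    "actors": [{"kind": "car", "speed_mps": 10.0, "crossing": False}],
}
rng = random.Random(0)  # seeded for reproducible test runs
# One logged scenario becomes a thousand test cases:
variants = [fuzz_scenario(logged, rng) for _ in range(1000)]
```

This is how a fleet's logged miles multiply: each real mile seeds many perturbed miles, so the software is exercised against situations close to, but not identical to, anything it has actually encountered.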

The third component of Waymo’s testing program is its structured testing program. Arnoud said that there is a “long tail” of driving situations that happen very rarely. Rather than trying to encounter every possibility in real-world driving, Waymo set up a 90-acre mock city at the decommissioned Castle Air Force base where it can test its cars against such edge cases. These tests are then fed into the simulation engine and fuzzed to create variations for more testing.

5. Waymo’s next steps are big (and hard) ones.

Arnoud closed with a discussion of the engineering challenges in front of Waymo. He described two big next steps.

One next step is expanding the “operational design domains” (ODD) of the cars. This includes expanding into “dense urban cores,” such as San Francisco (where Waymo recently announced it is expanding its testing program). The other is handling additional weather conditions, such as hard rain, snow and fog. (Waymo CEO John Krafcik recently told an audience that he was “jumping up and down” when it snowed 12 inches near Detroit, because it would enable Waymo’s testing in snow.)

See also: 7 Steps for Inventing the Future

The other area of focus was what Arnoud called “semantic understanding.” As an example, he pointed to the chaotic Place de l’Étoile traffic circle around the Arc de Triomphe in Paris. The circle is a meeting point of 12 roads and notoriously difficult to navigate. Arnoud says he has driven it many times without incident, however, and that such situations require a lot more than perception and vehicle operating skills. They require deep understanding of local rules and expectations. They also require constant communication and coordination with other drivers, including signals, gestures and so on. This kind of deep reasoning is key to numerous edge cases and improving the general abilities of driverless cars.

* * *

While Waymo has clearly made tremendous progress towards the driverless future, Arnoud closed his presentation by emphasizing the engineering infrastructure and the complexities of scaling that have to be addressed in order to turn driverless cars into safe production systems.

How far along is Waymo in the last 90% of that industrialization process? Arnoud never said. But, to put a point on the complexities, he showed a closing video of a Waymo car stopped at an intersection as a gaggle of kids bounced on pogo sticks across the street on all sides of the car. Some things are worth waiting for, he seemed to imply.