
When Will the Driverless Car Arrive?

When Chris Urmson talks about driverless cars, everyone should listen. This has been true throughout his career, but it is especially true now.

Few have had better vantage points on the state of the art and the practical business and engineering challenges of building driverless cars. Urmson has been at the forefront for more than a decade, first as a leading researcher at CMU, then as longtime director of Google’s self-driving car (SDC) program and now as CEO of a driverless car dream team at Aurora Innovation.

Urmson’s recent “Perspectives on Self-Driving Cars” lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And, it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode.

The entire presentation is worth watching. Here are six takeaways:

1. There is a lot more chaos on the road than most recognize.

Much of the carnage due to vehicle accidents is easy to measure. In 2015, in the U.S. alone, 35,092 people were killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.

Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (though they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” The result: a fender bender.

While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.

2. Human intent is the fundamental challenge for driverless cars.

The choices made by driverless cars are critically dependent on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and the human drivers of other cars. For Urmson, the difficulty of doing this is “the heart of the problem” going forward.
To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)

Google Car Crashes With Bus; Santa Clara Transportation Authority

In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (because sandbags were blocking its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought, “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.

Uber’s Arizona Rollover

Uber Driverless Car Crashes In Tempe, AZ

The Uber SDC was in the leftmost of three lanes. The traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.

A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. Its driver probably could not see across the blocked lanes to the Uber car’s lane and, given the stopped traffic, expected that anything coming down that lane would be moving slowly. The car pulled into the Uber car’s lane to make the turn, and the result was the Uber car parked on its side.

See also: Who Is Leading in Driverless Cars?  

Tesla’s Deadly Florida Crash

Tesla Car After Fatal Crash in Florida

The driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.

Tesla, to its credit, has made modifications to improve the car’s understanding about whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against car inadequacies.

3. Incremental driver assistance systems will not evolve into driverless cars.

Urmson characterized “one of the big open debates” in the driverless car world as between Tesla’s (and other automakers’) vs. Google’s approach. The former’s approach is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter is “No, no, these are two distinct problems. We need to apply different technologies.”

Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and design the car knowing it will not have anyone available to take control. The incremental approach, he argues, will lead developers toward a set of technologies that limits their ability to bridge over to fully driverless capabilities.

4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.

The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.

Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”

Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.”  Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.

5. The “mad rush” is justified.

Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.”  A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.

Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:

3 Trillion VMT * $0.10 per mile = $300B per year

In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue—just in the U.S.
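To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python using the figures from the talk; the alternative per-mile prices and adoption shares in the loop are purely hypothetical, included only to show how sensitive the estimate is to those assumptions.

```python
# Back-of-the-envelope TaaS revenue estimate, using the figures cited above:
# ~3.2 trillion vehicle miles traveled (VMT) per year in the U.S. (2016),
# monetized at roughly $0.10 per mile.
US_ANNUAL_VMT = 3.2e12   # miles per year
PRICE_PER_MILE = 0.10    # dollars per mile (Urmson's assumption)

baseline = US_ANNUAL_VMT * PRICE_PER_MILE
print(f"Baseline: ${baseline / 1e9:,.0f}B per year")  # ~$320B

# Hypothetical sensitivity check: vary the share of miles actually captured
# and the price charged per mile (illustrative scenarios, not from the talk).
for share in (0.05, 0.20, 0.50):
    for price in (0.05, 0.10, 0.25):
        revenue = US_ANNUAL_VMT * share * price
        print(f"share={share:.0%}, price=${price:.2f}/mi -> ${revenue / 1e9:,.0f}B/yr")
```

Even at modest capture shares, the opportunity runs to tens of billions of dollars per year, which is the point of the equation.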

This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.

See also: 10 Questions That Reveal AI’s Limits  

6. Deployment will happen “relatively quickly.”

To the inevitable question of “when,” Urmson is very optimistic.  He predicts that self-driving car services will be available in certain communities within the next five years.

You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But, you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.

(Based on recent Waymo announcements, Phoenix seems a likely candidate.)

Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.

Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”

7 Steps for Inventing the Future

Alan Kay is widely known for the credo, “The best way to predict the future is to invent it.” For him, the phrase is not just a witty quip; it is a guiding principle that has yielded a long list of accomplishments and continues to shape his work.

Kay was a ringleader of the exceptional group of ARPA-inspired scientists and engineers that created an entire genre of personal computing and pervasive worldwide networking. Four decades later, most of the information-technology industry and much of global commerce depend on this community’s inventions. Technology companies and many others in downstream industries have collectively realized trillions of dollars in revenues and tens of trillions in market value because of them.

Alan Kay made several fundamental contributions, including personal computers, object-oriented programming and graphical user interfaces. He was also a leading member of the Xerox PARC community that actualized those concepts and integrated them with other seminal developments, including Ethernet, laser printing, modern word processing, and client-server and peer-to-peer networking. For these contributions, both the National Academy of Engineering and the Association for Computing Machinery have awarded him their highest honors.

I’ve worked with Alan to help bring his insights into the business realm for more than three decades. I also serve on the board of Viewpoints Research Institute, the nonprofit research organization that he founded and directs. Drawing on these vantage points and numerous conversations, I’ll try to capture his approach to invention. He calls it a method for “escaping the present to invent the future,” and describes it in seven steps:

  1. Smell out a need
  2. Apply favorable exponentials
  3. Project the need 30 years out, imagining what might be possible in the context of the exponential curves
  4. Create a 30-year vision
  5. Pull the 30-year vision back into a more concrete 10- to 15-year vision
  6. Compute in the future
  7. Crawl your way there

Here’s a summary of each step:

1. Smell out a need

“Everybody loves change, except for the change part,” Kay observes. Because the present is so vivid and people have heavy incentives to optimize it, we tend to fixate on future scenarios that deliver incremental solutions to existing problems. To reach beyond the incremental, the first step to inventing the future is deep “problem finding,” rather than short-term problem solving. Smell out a need that is trapped by incremental thinking.

In Alan’s case, the need that he sensed in the late ’60s was the potential for computers to redefine the context of how children learn. Prompted by conversations with Seymour Papert at MIT and inspired by the work of Ivan Sutherland, J.C.R. Licklider, Doug Engelbart and others in the early ARPA community, Kay realized that every child should have a computer that helps him or her learn. Here’s how he described the insight:

It was like a magnet on the horizon. I had a lot of ideas but no really cosmic ones until that point.

This led Kay to wonder how computers could form a new kind of reading and writing medium that enabled important and powerful ideas to be discussed, played with and learned. But, the hottest computers at the time were IBM 360 mainframes costing millions. The use of computers in educating children was almost nonexistent. And, there were no such things as personal computers.

2. Apply favorable exponentials

To break the tyranny of current assumptions, identify exponential improvements in technological capabilities that could radically alter the range of possible approaches.

In 1965, Gordon Moore made his observation that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. Moore’s prediction, which would become known as Moore’s Law, was the “favorable exponential” that Kay applied.

Today, the fruits of Moore’s Law such as mobile devices, social media, cloud computing, big data, artificial intelligence and the Internet of Things continue to offer exponential advances favorable for invention. As I’ve previously written, these are make-or-break technologies for all information-intensive companies. But, don’t limit yourself to those.

Kay is especially optimistic about the favorable exponential at the intersection of computer-facilitated design, simulation and fabrication. This is the process of developing concepts and ideas using computer design tools and then testing and evolving them using computer-based simulation tools. Only after extensive testing and validation are physical components ever built, and, when they are, it can be done through computer-mediated fabrication, including 3D printing.

This approach applies to a wide range of domains, including mechanical, electrical and biological systems. It is becoming the standard method for developing everything, including car parts and whole cars, computer algorithms and chips, and even beating nature at its own game. Scientists and engineers realize tremendous benefits in terms of the number of designs that can be considered and the speed and rigor with which they can do so. These allow, Kay told me, “unbelievable leverage on the universe.”

See also: To Shape the Future, Write Its History  

3. Project the need 30 years out and imagine what might be possible in the context of the exponential curves

Thirty years is so far in the future that you don’t have to worry about how to get out there. Focus instead on what is important to have. There’s no possibility of being forced to demonstrate or prove how to get there incrementally.

Asking “How is this incremental to the present?” is the “biggest idea killer of all time,” Kay says. The answer to the “incremental” question, he says, is “Forget it. The present is the least interesting time to live in.”

Instead, by projecting 30 years into the future, the question becomes, “Wouldn’t it be ridiculous if we didn’t have this?”

Projecting out what would be “ridiculous not to have” in 30 years led to many visionary concepts that earned Kay wide recognition as “the father of the personal computer.” He was sure, for example, that children would have ready access to laptops and tablets by the late 1990s — even though personal computers did not yet exist. As he saw it, there was a technological reason for it, there were user reasons for it and there were educational reasons for it. All those factors contributed to his misty vision, and he didn’t have to prove it because 30 years was so far in the future.

How might the world look relative to the needs that you smell out? What will you have ready access to in a world with a million times greater computing power, cheap 3D fabrication, boundless energy and so on? Remember, projecting to 2050 is intended as a mind-stretching exercise, not a precise forecasting one. This is where romance lives, albeit romance underpinned by deep science rather than pure fantasy.

4. Create a 30-year vision

A vision is different from a mission or a goal. If the previous step was about romance, a 30-year vision is more like a dream. It is a vague picture of a desirable future state of affairs in that 30-year future. This is the step where Kay’s recognition that computers would be widely available by the late 1990s turned into a vision of what form those computers might take.

That vision included the Dynabook, a powerful and portable electronic device the size of a three-ring notebook with a touch-sensitive liquid crystal screen and a keyboard for entering information. Here’s one of Kay’s early sketches of the Dynabook from that time.

DynaBook Concept Drawing

The next illustration is Kay’s sketch of the Dynabook in use. He describes the scenario as two 12-year-olds learning about orbital dynamics from a version of “Space Wars” that they wrote themselves. They are using two personal Dynabooks connected over a wireless network.

Children Using Dynabooks

Kay’s peers in the ARPA community had already envisioned some of the key building blocks for the Dynabook, such as LCD panels and an Internet-like, worldwide, self-healing network. (For a fascinating history of the early ARPA community, see Mitchell Waldrop’s brilliant book, “The Dream Machine.”)

For Kay, these earlier works crystallized into the Dynabook once he thought about them in the context of children’s education. As he described it,

The Dynabook was born when it had that cosmic purpose.

Laptops, notebook computers and tablets have roots in the early concepts of the Dynabook.

5. Pull the 30-year vision back into a 10- to 15-year lesser vision

Kay points out that one of the powerful aspects of computing is that, if you want to live 10 to 15 years in the future, you can do it. You just have to pay 10 to 20 times as much. That’s because tomorrow’s everyday computers can be simulated using today’s supercomputers. Instead of suffering the limitations of today’s commodity computers (which will be long obsolete before you get to the future you are inventing), inventors should use customized supercomputers to prototype, test and evolve aspects of their 30-year vision. Pulling back into the 10- to 15-year window brings inventors back from the “pie in the sky” to something more concrete.

Jumping into that “more concrete” future is exactly what Alan Kay did in 1971 when he joined the Xerox Palo Alto Research Center (PARC) effort to build “the office of the future.”

It started with Butler Lampson and Chuck Thacker, two of PARC’s leading engineers, asking Kay, “How would you like us to build your little machine?” The resulting computer was an “interim Dynabook,” as Kay thought of it, but better known as the Xerox Alto.

Xerox Alto

The Alto was the hardware equivalent of the Apple Macintosh of 1988, but running in 1973. Instead of costing a couple of thousand dollars each, the Alto cost about $70,000 (in today’s dollars). PARC built 2,000 of them — thereby providing Kay and his team with the environment to develop the software for a 15-year, lesser-but-running version of his 30-year vision.

6. Compute in the future

Now, having created the computing environment of the future, you can invent the software. This approach is critical because the hardest thing about software is getting from requirements and specification to properly running code.

Much of the time spent in developing software is spent optimizing code for the limitations of the hardware environment—i.e., making it run fast enough and robust enough. Providing a more powerful, unconstrained futuristic computing environment frees developers to focus on invention rather than optimization. (This was the impetus for another Kay principle, popularized by Steve Jobs, that “People who are really serious about software should make their own hardware.”)

The Alto essentially allowed PARC researchers to simulate the laptop of the future. Armed with it, Kay was a visionary force at PARC.

Kay led the Learning Research Group at PARC, and, though PARC’s mission was focused on the office environment, Kay rightly decided that the best path toward that mission was to focus on children in educational settings. He and his team studied how children could use personal computers in different subject areas. They studied how to help children learn to use computers and how children could use computers to learn. And, they studied how the computers needed to be redesigned to facilitate such learning.

Children With Xerox Alto

The power of the Alto gave Kay and his team, which included Adele Goldberg, Dan Ingalls, Ted Kaehler and Larry Tesler, the ability to do thousands of experiments with children in the process of understanding these questions and working toward better software to address them.

We could have a couple of pitchers of beer at lunch, come back, and play all afternoon trying out different user interface ideas. Often, we didn’t even save the code.

For another example of the “compute in the future” approach, take Google’s driverless car. Rather than using off-the-shelf or incrementally better car components, Google researchers used state-of-the-art LIDAR, cameras, sensors and processors in their experimental vehicles. Google also built prototype vehicles from scratch, in addition to retrofitting current car models. The research vehicles and test environments cost many times as much as standard production cars and facilities. But, they were not meant for production. Google’s researchers know that Moore’s Law and other favorable exponentials will soon make their research platforms practical.

Its “computing in the future” platforms allow Google to invent and test driving algorithms on car platforms of the future today. Google greatly accelerated the state of the art of driverless cars and ignited a global race to perfect the technology. Google recently spun off a separate company, Waymo, to commercialize the fruits of this research.

Waymo’s scientists and engineers are learning from a fleet of test vehicles driving 10,000 to 15,000 miles a week on public roads and interacting with real infrastructure, weather and traffic (including other drivers). The developers are also taking advantage of Google’s powerful cloud-based data and computing environment to do extensive simulation-based testing. Waymo reports that it is running its driving algorithms through more than three million miles of simulated driving each day (using data collected by its experimental fleet).

See also: How to Master the ABCs of Innovation  

7. Crawl your way there

Invention requires both inspiration and perspiration. Inspired by this alternative perspective of thinking about their work, researchers can much more effectively channel their perspiration. As Kay is known for saying, “Point of view is worth 80 IQ points.”

PARC’s success demonstrates that even if one pursues a 15-year vision — or, more accurately, because one pursues such a long-term vision — many interim benefits might well come of the effort. And, while the idea of giving researchers 2,000 supercomputers and building custom software environments might seem extravagant and expensive, it is actually quite cheap when you consider how much you can learn and invent.

Over five glorious years in the early 1970s, the work at PARC drove the evolution of much of future computing. The software environment advanced to become more user-friendly and supportive of communications and different kinds of media. This led to many capabilities that are de rigueur today, including graphical interfaces, high-quality bit-mapped displays, and what-you-see-is-what-you-get (WYSIWYG) word processing and page layout applications. The hardware system builders learned more about what it would take to support future applications and also evolved accordingly. This led to hardware designs that better supported the display of information, network communications and connecting to peripherals, rather than being optimized for number crunching. Major advancements included Ethernet, laser printing, peer-to-peer and client-server computing and internetworking.

Kay estimates that the total budget for the parts of Xerox PARC that contributed to these inventions was about $50 million in today’s dollars. Compare that number to the hundreds of billions of dollars that Xerox directly earned from the laser printer.

Xerox 9700 Printers

Although the exact number is hard to calculate, the work at PARC also unlocked trillions reaped by other technology-related businesses.

One of the most vivid illustrations of the central role that Xerox played was an exchange, years later, between Steve Jobs and Bill Gates. In response to Jobs’ accusation that Microsoft was stealing ideas from the Mac, Gates told him:

Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.

Kay cautions that his method is not a cookbook for invention. It is more like a power tool that needs to be wielded by skilled hands.

It is also a method that has been greatly enabled by Kay and his colleagues’ inventions. Beyond the technology industry that they helped spawn, their inventions also underpin discovery and innovation in every field of science and technology, including chemistry, biology, engineering, health and agriculture. Information technology is not only a great invention; it has reinvented invention. It powers the favorable exponential curves upon which other inventors can escape the present and invent the future.

See also: How We’re Wired to Make Bad Decisions

For his part, Kay continues to lead research at the frontiers of computing, with a continued emphasis on human advancement. In addition to his Viewpoints Research Institute, he recently helped to formulate the Human Advancement Research Community (HARC) at YC Research, the nonprofit research arm of Y Combinator. HARC’s mission is “to ensure human wisdom exceeds human power, by inventing technology that allows all humans to see further and understand more deeply.”

That is a future worth inventing.

To Shape the Future, Write Its History

“History will be kind to me, for I intend to write it myself.” — Winston Churchill

When it comes to large-scale innovation, my experience is that history will indeed be kinder if aspiring innovators take the time to write it themselves—but before it actually unfolds, not after.

Every ambitious strategy has multiple dimensions and depends on complex interactions between a host of internal and external factors. Success requires achieving clarity and getting everyone on the same page for the challenging transition to new business and operational models. The best mechanism for doing that is one I have used often, to powerful effect. I call it a “future history.”

Future histories fulfill our human need for narratives. As much as we like to think of ourselves as modern beings, we still have a lot in common with our earliest ancestors gathered around a fire outside a cave. We need stories to crystallize and internalize abstract concepts and plans. We need shared stories to unite us, and guide us toward a collective future.

Future histories provide that story for large organizations.

See also: What Is the Right Innovation Process?  

The CEO of a major financial services company occasionally still reads to internal audiences parts of the future histories that I helped him and his management team write in early 2011. He says they helped him get his team focused on the right opportunities. As of this writing, his company’s stock has almost doubled, even though his competitors have had problems.

To create future histories, I have executive teams imagine that they are five years in the future and ask them to write two memos of perhaps 750 to 1,000 words each.

For the first memo, I ask them to imagine that the strategy has failed because of some circumstance or because of resistance from some parts of the organization, investors, customers or other key stakeholders. The memo should explain the failure. The exercise lets people focus on the most critical assumptions and raise issues without being seen as naysayers. There is usually no lack of potential problems to consider, including technology developments, employee resistance, customer activities, competitors’ actions, governmental actions, substitute products and so on. Articulating the rationale for failure in a clearly worded memo crystallizes thinking about the most likely issues.

To heighten the effect, I sometimes do some formatting and structure the memo like an article from the Wall Street Journal or New York Times. Adopting a journalist’s voice helps to focus the narrative on the most salient points. And everybody hates the idea of being embarrassed in such publications, so readers of the memo pay attention to the potential problems while there’s still time to address them.

The second memo is the success story. What key elements and events helped the organization shake its complacency? What key strategic or technological shifts helped to capture disruptive opportunities? How did the organization’s unity help it to out-innovate existing players and start-ups? This part of the exercise encourages war-gaming and helps the executive team understand the milestones on the path to success.

Taken together, the future histories provide a new way of thinking about the long-term aspirations of the organization and the challenges facing it. By producing a chronicle of what could be the major successes and the most dreaded failures, the organization gains clarity about the levers it needs to pull to succeed and the pitfalls it needs to avoid.

Most importantly, by working together to write the future histories, the executive team develops a shared narrative of those potential futures. It forges alignment around the group’s aspirations, critical assumptions and interdependencies. The process of drafting and finalizing the future histories also prompts the team to articulate key questions and open issues. It drives consensus about key next steps and the overall change management road map. In a few weeks’ time, future histories can transform the contemplated strategy into the entire team’s strategy.

See also: How to Create a Culture of Innovation  

Future histories also facilitate the communication of that shared strategy to the rest of the organization. Oftentimes, senior executives extend the process to more layers of management to flesh out the success and failure scenarios in greater detail and build wider alignment.

Future histories take abstract visions and strategies and make them real, in ways that get people excited. They help people understand how they can contribute—how they must contribute—even if they aren’t directly involved in the innovation initiative. People can understand the timing and see how efforts will build.

People can also focus on the enemies that, as a group, they must fend off. These enemies may no longer be saber-toothed tigers, but they are still very real and dangerous to corporations. “Future histories” unite teams as they face the inevitable challenges.

Who Is Leading in Driverless Cars?

Imagine if you could pick between Uber drivers based on their driving experience. Would you hire an experienced driver who has logged hundreds of thousands of road miles or one who has driven just a few hundred miles? I’ll bet you’d go with the experienced driver.

Now apply the same question to driverless cars. How would you pick? The same logic applies: Go with experience.

By the miles-driven heuristic, recent reports released by the California Department of Motor Vehicles show that Waymo (the new Alphabet spinout previously known as Google’s Self-Driving Car program) is running laps around its competitors. As with human drivers, experience matters for driverless capabilities. That’s because the deep learning AI techniques used to train driverless cars depend on data—especially data that illuminates rare and dangerous “edge cases.” The more training data, the more confidence you can have in the results.

See also: How to Picture the Future of Driverless  

In 2016, Waymo logged more than 635,000 miles while testing its autonomous vehicles on California’s public roads compared to just over 20,000 for all its competitors combined.

As the W. Edwards Deming principle that is popular in Silicon Valley goes, “In God we trust, all others bring data.” The data shows that Waymo is not only 615,000 miles ahead of its competitors but that those competitors are still neophytes when it comes to proving their technology on real roads and interacting with unpredictable elements such as infrastructure, traffic and human drivers.

Now, there are lots of ways to cut the data and therefore a lot of provisos to the simple test-miles-driven heuristic.

Waymo also leads the others in terms of “disengagements,” which occur when human test drivers have to retake control from the driverless software. Waymo’s test drivers had to disengage 124 times, or about once every 5,000 miles.

Other companies were all over the map in terms of their disengagements. BMW had one disengagement during 638 total miles of testing. Tesla had 182 disengagements in 550 miles. Mercedes-Benz had 336 disengagements over 673 miles. Fewer miles might mean fewer edge cases were encountered, or it might mean that those companies tested particularly difficult scenarios. But, low total miles driven casts doubt on the readiness of any system for operating on public roads. Until other contenders ramp up their total miles by a factor of 1,000 or more, their disengagement statistics are not statistically relevant.
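To put those figures on a common footing, here is a minimal sketch in Python that converts the miles and disengagement counts reported above into miles per disengagement; the numbers are the ones cited in this article, and the script itself is only illustrative.

```python
# Miles driven per disengagement, computed from the 2016 California DMV
# figures cited in this article (higher is better).
reported = {
    "Waymo":         {"miles": 635_000, "disengagements": 124},
    "BMW":           {"miles": 638,     "disengagements": 1},
    "Tesla":         {"miles": 550,     "disengagements": 182},
    "Mercedes-Benz": {"miles": 673,     "disengagements": 336},
}

for company, d in sorted(reported.items(),
                         key=lambda kv: kv[1]["miles"] / kv[1]["disengagements"],
                         reverse=True):
    rate = d["miles"] / d["disengagements"]
    print(f"{company:14s} {d['miles']:>8,} mi / {d['disengagements']:>4} "
          f"disengagements = {rate:,.0f} mi per disengagement")
```

The caveat above still applies: with so few total test miles, every per-disengagement figure other than Waymo’s is statistically meaningless.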

Tesla fans could rightly point to the more than two hundred million miles that Tesla owners have logged under Tesla’s Autopilot feature. Those miles are not considered here. (Autopilot is not defined as autonomous under California law, so Tesla is not required to report disengagements to the California DMV.) But, no doubt, all those miles mean that Tesla’s Autopilot software is probably very well trained for highway driving.

What do those highway miles tell us about Tesla’s ability to handle city streets, which are more complex for driverless cars? Not much, but the 550 miles that Tesla did spend on public road autonomous testing speaks volumes about its dearth of experiential learning on city streets. (Ed Niedermeyer, an industry analyst, recently argued that most of Tesla’s 550 miles were probably logged while filming one marketing video.)

See also: Novel Solution for Driverless Risk  

It should also be noted that the reported data applies only to California; it does not account for testing in other active driverless hubs—such as Waymo’s test cars in Austin, TX, Uber’s driverless pilots in Pittsburgh or nuTonomy’s testing in Singapore (just to name a few). It is safe to guess, however, that a significant percentage of all autonomous testing has been logged in California.

Notably missing from the reports to the California DMV are all other Big Auto makers and suppliers—and other players cited or rumored as driverless contenders, like Apple and Baidu. They might well be learning to drive on private test tracks or outside of California. But, until they bring data about their performance after significant miles on public roads, don’t trust the press releases or rumors about their capabilities.

Waymo’s deep experience in California does not guarantee its victory. Can it stay ahead as others accelerate? That remains to be seen, but it is clear from the California DMV reports that Waymo is way ahead on the driverless learning curve.

10 Questions That Reveal AI’s Limits

AI developers are making amazing advances. Witness the excitement around AI’s progress in search, cancer diagnosis, genomic medicine, autonomous vehicles, Go, smart homes, machine translation, and even lip reading.

Progress in such complex problems raises hopes for the development of general-purpose AI that can be deployed in a wide range of intelligent, open-ended interactions with people, such as computer interfaces, customer service, planning and advice.


It is easy to imagine an enhanced Apple Siri, Amazon Alexa or IBM Watson that engages in conversations with people to answer questions, fulfill commands and even anticipate needs. In fact, unless you watch marketing videos with a very critical eye (like the latest one for Alexa), you might even believe that AI has already reached this point.

Unfortunately, AI is far from this level of intelligence. AI lacks the capability to understand, much less answer, many kinds of easy questions that we might pose to human assistants, agents, advisors and friends.

Imagine asking this question of some AI-enhanced tool in the foreseeable future:

I am thinking about driving to New York from my home in Vermont next week. What do you think?

Most such tools will easily offer a wealth of data, like possible routes, including distances, travel times, attractions, rest stops, and restaurants. Some might incorporate historical traffic patterns for different times of day and even weather forecasts to recommend particular routes.

See also: Could Alexa Testify Against You?  

But, as the noted AI researcher Roger Schank smartly lays out in a recent article, there are many aspects of this question that AI tools will not address adequately any time soon—but that any person could easily do so now.

Understanding such limitations is key to understanding the near-term potential of AI and what it really means to be “intelligent.”

Schank points out that a person who knows you would know much about what you are really asking. For example, is your old car up to the task? Are you up to making the drive? Would you enjoy it? How might Broadway show schedules affect your decision about whether or when to go?

“Real conversation involves people who make assessments of each other and know what to say to whom based on their previous relationship and what they know about each other,” Schank writes. “Sorry, but no ‘AI’ is anywhere near being able to have such a conversation because modern AI is not building complex models of what we know about each other.”

In addition to the above question, Schank offers nine other questions that illustrate what people can easily answer but AI cannot:

  1. What would be the first question you would ask Bob Dylan if you were to meet him?
  2. Your friend told you, after you invited him for dinner, that he had just ordered pizza. What will he eat? Will he use a knife and fork? Why won’t he change his plans?
  3. Who do you love more, your parents, your spouse, or your dog?
  4. My friend’s son wants to drop out of high school and learn car repair. I told her to send him over. What advice do you think I gave him?
  5. I just saw an ad for IBM’s Watson. It says it can help me make smarter decisions. Can it?
  6. Suppose you wanted to write a novel and you met Stephen King. What would you ask him?
  7. Is there anything else I need to know?
  8. I can’t figure out how to grow my business. Got any ideas?
  9. Does what I am writing make sense?

Answering these kinds of questions, Schank points out, requires robust models of the world. How do mechanical, social and economic systems work? How do people relate to one another? What are our expectations about what is reasonable and what is not?

Answering Question 2, for example, requires an understanding of how people function in daily life. It requires knowing that people intend to eat food that they order and that pizza is typically eaten with one’s hands.

Answering Question 5 requires analyzing lots of data, which AI can do and which can indeed help in making better decisions. But, actually making better decisions also requires prioritizing goals and anticipating the consequences of complex actions.

Answering open-ended questions like Question 7 requires knowing the context of the question and to whom you are talking.

Answering advice-seeking questions like Question 8 requires the use of prior experiences to predict future scenarios. Quite often, such advice is illustrated with personal stories.

See also: Insights on Insurance and AI  

Many AI researchers (like Schank) have explored such capabilities but none have mastered them. That does not mean that they never will. It does mean that applications that depend on such capabilities will be much more brittle and far less intelligent than is required.

One way of thinking about AI is that it consists of the leading edges of computer science. Mind-bending computational capabilities are being developed in numerous application domains and deserve your attention. Generalizing those capabilities to human level intelligence, and therefore assuming their widespread applicability, is premature.

Having a clear-eyed view of what AI can and cannot do is key to making good decisions about this disruptive technology—and leaving the irrational exuberance to others.