
How to Innovate With Microservices (Part 1)

Whether you are helping to build a modern digital enterprise platform for mid-sized to large insurance companies or are part of a startup that distinguishes itself through innovative technology, you are likely hearing about microservices.

Microservices architecture has become increasingly popular and is often associated with benefits such as scale, speed of change, ease of integration, fault tolerance and the ability to adapt to changing business demands and models. Commitment from digital giants such as Amazon, Netflix, PayPal, eBay, Twitter and Uber, which built and scaled their platforms on microservices architectures, has galvanized adoption across many industries.

Source: Google Trends

A crucial question is, “How will microservices help insurers design open platforms for building sustainable competitive advantage?”

This four-part blog series will share our views based on our experience in building a modern digital platform using microservices. This first blog will provide a general primer about microservices. The second will share our view on the applicability and strategic potential of microservices for insurance. The third will illustrate best practices and applied principles of designing a microservices-based platform. The final blog will share how our innovative Majesco Digital1st platform will help insurers simplify and accelerate the development of microservices apps.

Let’s start with the basic question, “What are microservices?” You can find the answer through a simple Google search, but let’s explain it in simple terms. Think of a microservice as a micro application that enables a specific, granular business function, such as payment, policy issuance, policy documents or first notice of loss (FNOL). The micro application can be deployed independently and can communicate with other micro applications serving other business functions through a well-defined interface. This approach is in stark contrast to “monolith applications,” such as policy management, billing and claims systems, which work as an aggregation of multiple business functions tightly woven together and must be deployed as a single, large unit.
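
To make the contrast concrete, here is a minimal sketch of what a single-responsibility microservice might look like: an FNOL service exposing one well-defined HTTP interface. It assumes the Flask library; the route, fields and port are hypothetical illustrations, not taken from any particular insurance platform.

```python
# Minimal sketch of a single-responsibility FNOL microservice.
# Assumes Flask is installed; the route, fields and port are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

# The service owns its own data; an in-memory store stands in for a
# private database here.
fnol_reports = {}

@app.route("/fnol", methods=["POST"])
def create_fnol():
    claim = request.get_json()
    report_id = len(fnol_reports) + 1
    fnol_reports[report_id] = claim
    # Other business functions (payment, documents, etc.) live in their
    # own services and are reached only through their own interfaces.
    return jsonify({"reportId": report_id, "status": "received"}), 201

if __name__ == "__main__":
    app.run(port=5001)  # deployed, versioned and scaled independently

```

Because the service manages its own process, data and interface, it can be changed and redeployed without touching a policy, billing or claims system.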

See also: It’s Time to Accelerate Digital Change  

An architectural pattern called self-contained systems (SCS) is often discussed alongside microservices but does not provide their full benefits. The SCS pattern recommends grouping cohesive services together as a self-contained, individually deployable unit. Because the individual services within that unit are no longer self-contained and individually deployable, they cannot be considered microservices. While this approach is better than a monolithic application, it instead builds multiple small monoliths!

So why does anyone advocate the microservices approach? Simply put, it addresses the issues of monolith architectures that inhibit digital business models. Even after functional decomposition into several deployment artifacts, a monolith’s functions remain part of a single code base that must be managed as a single deployment unit.

In contrast, a microservices architecture has the following advantages when done well:

  1. Velocity and Agility – Maintenance and evolution of monolith applications is expensive and slow because changes cause inadvertent side effects in other functions and services. Dealing with those side effects requires additional work, including impact analysis and elaborate, expensive testing, and forces changes into large, infrequent releases to optimize testing effort. In contrast, a microservice is a low-impact, single-responsibility business function that performs its own tasks, manages its own data and communicates with other microservices through a well-defined interface (see the sketch after this list). It allows you to make and deploy changes reliably, incrementally and far more quickly than a monolith architecture allows.
  2. Scale – Microservices can be monitored individually, making it possible to predict seasonal or unique business demands on each business function. Because each microservice runs in its own process, it can easily be scaled with elastic containers, which efficiently scale up and down. In comparison, a monolith architecture runs multiple business functions under a single process, making it harder to direct resources to specific business functions.
  3. Decentralized Governance and Teams – The separated code base of microservices allows different parts of an organization to build business functions as opposed to a centralized large team. Each team can manage different microservices with full DevOps (development and operations) responsibility and accountability. This gives insurers the freedom to choose the technology best-suited for the business function.
  4. Self-Contained and Sustainable – With monolithic applications, introducing a new business capability that requires upgrading external dependencies (OS, shared libraries, etc.) means the entire application must be retested. In contrast, microservices are self-contained, from the OS down to the actual implementation code. Each can be upgraded separately, based on business and operational needs, without affecting unrelated application functions. This keeps the application stack relevant and avoids the risk of running applications on an obsolete technology stack.
  5. Hypothesis-Driven Development – The advantages outlined above lead to a completely different way of contemplating software development. The focus and conversation shifts from managing projects and defect backlogs to emphasizing new opportunities, experimentation and observing the application usage. Experimental software changes can be built and deployed quicker in small increments into production. When errors happen, they can be fixed in minutes and hours, rather than days or months. For major problems, the incremental functionality upgrade can quickly and easily be rolled back without loss of major functionality or downtime.
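
As referenced in the first advantage above, here is a minimal sketch of one service calling another strictly through its well-defined interface. It assumes the `requests` library and the hypothetical FNOL service sketched earlier; the service URL and payload fields are illustrative.

```python
# Minimal sketch of service-to-service communication over a well-defined
# HTTP interface. Assumes the `requests` library; the URL and payload
# fields are hypothetical.
import requests

FNOL_URL = "http://fnol-service:5001/fnol"  # hypothetical service address

def report_loss(policy_number: str, description: str) -> int:
    """Submit a first notice of loss and return the new report id."""
    response = requests.post(
        FNOL_URL,
        json={"policyNumber": policy_number, "description": description},
        timeout=5,  # fail fast so one slow service cannot stall its callers
    )
    response.raise_for_status()
    return response.json()["reportId"]

if __name__ == "__main__":
    print(report_loss("POL-1234", "Hail damage to roof"))
```

The caller knows nothing about the FNOL service’s language, database or deployment schedule; the interface is the only contract between the two teams.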

As with all innovation, there is a flip side to the coin. Unfortunately, not all organizations are ready to adopt a microservices architecture immediately. In particular, if a company cannot build a well-designed monolith, building a microservices platform will be much harder. Microservices architecture is inherently complex to develop and to operate, but the rewards are worth the hurdles, because microservices give the transformed organization far greater efficiency and capabilities focused on the future.

Fundamentally, microservices require organizational change, not just adoption of a technology pattern. Organizations must rethink end-to-end DevOps by thinking in terms of small business functions, distributed teams, decentralized governance and continuous delivery. In addition, the organization must embrace multiple technologies suited for a business platform rather than a single technology platform, which is a significant change for organizations schooled in building applications using traditional software development processes.

Even success stories like Amazon and Netflix did not start with a microservices architecture; rather, they evolved over time as they matured. If you are a startup building an MVP (minimum viable product), it may not be advisable to delay market launch for the large up-front effort of establishing microservices. However, startups should recognize that at some point they will have to invest in migrating to microservices to support scalability and changing business models.

Operating a platform made of hundreds or thousands of microservices, while enabling scalability and growing business demands, does create tremendous complexity for deployment, auto-scaling, monitoring, logging and many other DevOps concerns. Diagrams of the microservices deployments at Amazon and Netflix (images by AppCentrica) show the complexity of managing a reliable business operation with millions of continuing deployments within an ecosystem of microservices — often written using different languages and databases. Companies like Amazon and Netflix deal with this complexity through a high degree of automation and significant investment in shared, automated infrastructure that builds resiliency.

Despite the complexity in managing microservices, separation of responsibilities across microservices offers organizations significant benefits in today’s platform economy. We outline these in our thought leadership report, Cloud Business Platform: The Path to Digital Insurance 2.0. The constant pivoting of business priorities requires a continuous and high degree of system changes that enable new strategies. Microservices can bring great value to agility, velocity, availability, scalability and accountability across both technical and business organizational dimensions.

See also: A New Way to Develop Products  

We believe that every organization should exercise patient urgency, which author and futurist Chunka Mui describes as “the combination of foresight to prepare for a big idea, willingness to wait for the right market conditions and agility to act straight away when conditions ripen.”

We look forward to covering our views on the role of microservices in insurance in Part 2. Please share your views on this exciting topic in the comments section. We would enjoy hearing your perspective.

This article was written by Manish Shah and Sachin Dhamane.

When Will the Driverless Car Arrive?

When Chris Urmson talks about driverless cars, everyone should listen. This has been true throughout his career, but it is especially true now.

Few have had better vantage points on the state of the art and the practical business and engineering challenges of building driverless cars. Urmson has been at the forefront for more than a decade, first as a leading researcher at CMU, then as longtime director of Google’s self-driving car (SDC) program and now as CEO of a driverless car dream team at Aurora Innovation.

Urmson’s recent “Perspectives on Self-Driving Cars” lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And, it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode.

The entire presentation is worth watching. Here are six takeaways:

1. There is a lot more chaos on the road than most recognize.

Much of the carnage due to vehicle accidents is easy to measure. In 2015, in just the U.S., 35,092 people were killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.

Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” Fender bender results.

While we talk a lot about fatalities and police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.

2. Human intent is the fundamental challenge for driverless cars.

The choices made by driverless cars depend critically on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and human drivers of other cars. For Urmson, the difficulty in doing this is “the heart of the problem” going forward.

To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)

Google Car Crashes With Bus; Santa Clara Transportation Authority

In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (due to sandbags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation, thought “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.

Uber’s Arizona Rollover

Uber Driverless Car Crashes In Tempe, AZ

The Uber SDC was in the leftmost of three lanes. Traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.

A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. The driver probably could not see across the blocked lanes to the Uber car’s lane and, given the stopped traffic, expected that whatever might be driving down that lane would be moving slower. The car pulled into the Uber car’s lane to make the turn, and the result was a sideways parked car.

See also: Who Is Leading in Driverless Cars?  

Tesla’s Deadly Florida Crash

Tesla Car After Fatal Crash in Florida

The driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.

Tesla, to its credit, has made modifications to improve the car’s understanding about whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against car inadequacies.

3. Incremental driver assistance systems will not evolve into driverless cars.

Urmson characterized “one of the big open debates” in the driverless car world as Tesla’s (and other automakers’) approach vs. Google’s. The former says, “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter says, “No, no, these are two distinct problems. We need to apply different technologies.”

Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you must turn your back on human intervention and trust that the car will have no one to take control. The incremental approach, he argues, steers developers toward a set of technologies that will limit their ability to bridge over to fully driverless capabilities.

4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.

The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.

Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”

Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.”  Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.

5. The “mad rush” is justified.

Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.”  A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.

Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:

3 Trillion VMT * $0.10 per mile = $300B per year

In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue—just in the U.S.
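
As a quick sanity check, here is the arithmetic behind the equation, using the figures quoted above (the equation rounds 3.2 trillion down to 3 trillion):

```python
# Back-of-the-envelope check of Urmson's revenue equation, using the
# figures quoted in the article.
vehicle_miles_traveled = 3.2e12  # U.S. vehicle miles traveled in 2016
price_per_mile = 0.10            # hypothetical charge of 10 cents per mile

annual_revenue = vehicle_miles_traveled * price_per_mile
print(f"${annual_revenue / 1e9:.0f}B per year")  # $320B, roughly $300B
```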

This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.

See also: 10 Questions That Reveal AI’s Limits  

6. Deployment will happen “relatively quickly.”

To the inevitable question of “when,” Urmson is very optimistic.  He predicts that self-driving car services will be available in certain communities within the next five years.

You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But, you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.

(Based on recent Waymo announcements, Phoenix seems a likely candidate.)

Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.

Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”

7 Steps for Inventing the Future

Alan Kay is widely known for the credo, “The best way to predict the future is to invent it.” For him, the phrase is not just a witty quip; it is a guiding principle that has yielded a long list of accomplishments and continues to shape his work.

Kay was a ringleader of the exceptional group of ARPA-inspired scientists and engineers that created an entire genre of personal computing and pervasive world-wide networking. Four decades later, most of the information-technology industry and much of global commerce depends on this community’s inventions. Technology companies and many others in downstream industries have collectively realized trillions of dollars in revenues and tens of trillions in market value because of them.

Alan Kay made several fundamental contributions, including personal computers, object-oriented programming and graphical user interfaces. He was also a leading member of the Xerox PARC community that actualized those concepts and integrated them with other seminal developments, including the Ethernet, laser printing, modern word processing, client-server computing and peer-to-peer networking. For these contributions, both the National Academy of Engineering and the Association for Computing Machinery have awarded him their highest honors.

I’ve worked with Alan to help bring his insights into the business realm for more than three decades. I also serve on the board of Viewpoints Research Institute, the nonprofit research organization that he founded and directs. Drawing on these vantage points and numerous conversations, I’ll try to capture his approach to invention. He calls it a method for “escaping the present to invent the future,” and describes it in seven steps:

  1. Smell out a need
  2. Apply favorable exponentials
  3. Project the need 30 years out, imagining what might be possible in the context of the exponential curves
  4. Create a 30-year vision
  5. Pull the 30-year vision back into a more concrete 10- to 15-year vision
  6. Compute in the future
  7. Crawl your way there

Here’s a summary of each step:

1. Smell out a need

“Everybody loves change, except for the change part,” Kay observes. Because the present is so vivid and people have heavy incentives to optimize it, we tend to fixate on future scenarios that deliver incremental solutions to existing problems. To reach beyond the incremental, the first step to inventing the future is deep “problem finding,” rather than short-term problem solving. Smell out a need that is trapped by incremental thinking.

In Alan’s case, the need that he sensed in the late ’60s was the potential for computers to redefine the context of how children learn. Prompted by conversations with Seymour Papert at MIT and inspired by the work of Ivan Sutherland, J.C.R. Licklider, Doug Engelbart and others in the early ARPA community, Kay realized that every child should have a computer that helps him or her learn. Here’s how he described the insight:

It was like a magnet on the horizon. I had a lot of ideas but no really cosmic ones until that point.

This led Kay to wonder how computers could form a new kind of reading and writing medium that enabled important and powerful ideas to be discussed, played with and learned. But, the hottest computers at the time were IBM 360 mainframes costing millions. The use of computers in educating children was almost nonexistent. And, there were no such things as personal computers.

2. Apply favorable exponentials

To break the tyranny of current assumptions, identify exponential improvements in technological capabilities that could radically alter the range of possible approaches.

In 1965, Gordon Moore made his observation that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. Moore’s prediction, which would become known as Moore’s Law, was the “favorable exponential” that Kay applied.

Today, the fruits of Moore’s Law such as mobile devices, social media, cloud computing, big data, artificial intelligence and the Internet of Things continue to offer exponential advances favorable for invention. As I’ve previously written, these are make-or-break technologies for all information-intensive companies. But, don’t limit yourself to those.

Kay is especially optimistic about the favorable exponential at the intersection of computer-facilitated design, simulation and fabrication. This is the process of developing concepts and ideas using computer design tools and then testing and evolving them using computer-based simulation tools. Only after extensive testing and validation are physical components ever built, and, when they are, it can be done through computer-mediated fabrication, including 3D printing.

This approach applies to a wide range of domains, including mechanical, electrical and biological systems. It is becoming the standard method for developing everything, including car parts and whole cars, computer algorithms and chips, and even beating nature at its own game. Scientists and engineers realize tremendous benefits in terms of the number of designs that can be considered and the speed and rigor with which they can do so. These allow, Kay told me, “unbelievable leverage on the universe.”

See also: To Shape the Future, Write Its History  

3. Project the need 30 years out and imagine what might be possible in the context of the exponential curves

30 years is so far in the future that you don’t have to worry about how to get out there. Focus instead on what is important to have. There’s no possibility of being forced to demonstrate or prove how to get there incrementally.

Asking “How is this incremental to the present?” is the “biggest idea killer of all time,” Kay says. The answer to the “incremental” question, he says, is “Forget it. The present is the least interesting time to live in.”

Instead, by projecting 30 years into the future, the question becomes, “Wouldn’t it be ridiculous if we didn’t have this?”

Projecting out what would be “ridiculous not to have” in 30 years led to many visionary concepts that earned Kay wide recognition as “the father of the personal computer.” He was sure, for example, that children would have ready access to laptops and tablets by the late 1990s — even though personal computers did not yet exist. As he saw it, there was a technological reason for it, there were user reasons for it and there were educational reasons for it. All those factors contributed to his misty vision, and he didn’t have to prove it because 30 years was so far in the future.

How might the world look relative to the needs that you smell out? What will you have ready access to in a world with a million times greater computing power, cheap 3D fabrication, boundless energy and so on? Remember, projecting to 2050 is intended as a mind-stretching exercise, not a precise forecasting one. This is where romance lives, albeit romance underpinned by deep science rather than pure fantasy.
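
To see why “a million times greater computing power” is a plausible 30-year projection, here is a rough illustration of Moore’s Law compounding; the 18-month doubling period is an assumption (estimates range from 18 to 24 months):

```python
# Rough illustration of 30 years of Moore's Law compounding.
# The 18-month doubling period is an assumption, not a precise figure.
years = 30
doubling_period_years = 1.5

doublings = years / doubling_period_years  # 20 doublings in 30 years
growth = 2 ** doublings
print(f"{growth:,.0f}x")  # 1,048,576x, i.e. roughly a million-fold
```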

4. Create a 30-year vision

A vision is different from a mission or a goal. If the previous step was about romance, a 30-year vision is more like a dream. It is a vague picture of a desirable future state of affairs in that 30-year future. This is the step where Kay’s recognition that computers would be widely available by the late 1990s turned into a vision of what form those computers might take.

That vision included the Dynabook, a powerful and portable electronic device the size of a three-ring notebook with a touch-sensitive liquid crystal screen and a keyboard for entering information. Here’s one of Kay’s early sketches of the Dynabook from that time.

DynaBook Concept Drawing

The next illustration is Kay’s sketch of the Dynabook in use. He describes the scenario as two 12-year-olds learning about orbital dynamics from a version of “Space Wars” that they wrote themselves. They are using two personal Dynabooks connected over a wireless network.

Children Using Dynabooks

Kay’s peers in the ARPA community had already envisioned some of the key building blocks for the Dynabook, such as LCD panels and an Internet-like, worldwide, self-healing network. (For a fascinating history of the early ARPA community, see Mitchell Waldrop’s brilliant book, “The Dream Machine.”)

For Kay, these earlier works crystallized into the Dynabook once he thought about them in the context of children’s education. As he described it,

The Dynabook was born when it had that cosmic purpose.

Laptops, notebook computers and tablets have roots in the early concepts of the Dynabook.

5. Pull the 30-year vision back into a 10- to 15-year lesser vision

Kay points out that one of the powerful aspects of computing is that, if you want to live 10 to 15 years in the future, you can do it. You just have to pay 10 to 20 times as much. That’s because tomorrow’s everyday computers can be simulated using today’s supercomputers. Instead of suffering the limitations of today’s commodity computers (which will be long obsolete before you get to the future you are inventing), inventors should use customized supercomputers to prototype, test and evolve aspects of their 30-year vision. Pulling back into the 10- to 15-year window brings inventors back from the “pie in the sky” to something more concrete.

Jumping into that “more concrete” future is exactly what Alan Kay did in 1971 when he joined the Xerox Palo Alto Research Center (PARC) effort to build “the office of the future.”

It started with Butler Lampson and Chuck Thacker, two of PARC’s leading engineers, asking Kay, “How would you like us to build your little machine?” The resulting computer was an “interim Dynabook,” as Kay thought of it, but better known as the Xerox Alto.

Xerox Alto

The Alto was the hardware equivalent of the Apple Macintosh of 1988, but running in 1973. Instead of costing a couple of thousand dollars each, the Alto cost about $70,000 (in today’s dollars). PARC built 2,000 of them — thereby providing Kay and his team with the environment to develop the software for a 15-year, lesser-but-running version of his 30-year vision.

6. Compute in the future

Now, having created the computing environment of the future, you can invent the software. This approach is critical because the hardest thing about software is getting from requirements and specification to properly running code.

Much of the time spent in developing software is spent optimizing code for the limitations of the hardware environment—i.e., making it run fast enough and robust enough. Providing a more powerful, unconstrained futuristic computing environment frees developers to focus on invention rather than optimization. (This was the impetus for another Kay principle, popularized by Steve Jobs, that “People who are really serious about software should make their own hardware.”)

The Alto essentially allowed PARC researchers to simulate the laptop of the future. Armed with it, Kay was a visionary force at PARC.

Kay led the Learning Research Group at PARC, and, though PARC’s mission was focused on the office environment, Kay rightly decided that the best path toward that mission was to focus on children in educational settings. He and his team studied how children could use personal computers in different subject areas. They studied how to help children learn to use computers and how children could use computers to learn. And, they studied how the computers needed to be redesigned to facilitate such learning.

Children With Xerox Alto

The power of the Alto gave Kay and his team, which included Adele Goldberg, Dan Ingalls, Ted Kaehler and Larry Tesler, the ability to do thousands of experiments with children in the process of understanding these questions and working toward better software to address them.

We could have a couple of pitchers of beer at lunch, come back, and play all afternoon trying out different user interface ideas. Often, we didn’t even save the code.

For another example of the “compute in the future” approach, take Google’s driverless car. Rather than using off-the-shelf or incrementally better car components, Google researchers used state-of-the-art LIDAR, cameras, sensors and processors in their experimental vehicles. Google also built prototype vehicles from scratch, in addition to retrofitting current car models. The research vehicles and test environments cost many times as much as standard production cars and facilities. But they were not meant for production. Google’s researchers know that Moore’s Law and other favorable exponentials will soon make their research platforms practical.

Its “computing in the future” platforms allow Google to invent and test driving algorithms on car platforms of the future today. Google greatly accelerated the state of the art of driverless cars and ignited a global race to perfect the technology. Google recently spun off a separate company, Waymo, to commercialize the fruits of this research.

Waymo’s scientists and engineers are learning from a fleet of test vehicles driving 10,000 to 15,000 miles a week on public roads and interacting with real infrastructure, weather and traffic (including other drivers). The developers are also taking advantage of Google’s powerful cloud-based data and computing environment to do extensive simulation-based testing. Waymo reports that it is running its driving algorithms through more than three million miles of simulated driving each day (using data collected by its experimental fleet).
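
A rough comparison using those figures shows how heavily the learning leans on simulation; the weekly road mileage below is the midpoint of the reported range:

```python
# Rough ratio of Waymo's simulated to on-road mileage, using the figures
# quoted in the article; the weekly road mileage is a midpoint estimate.
simulated_miles_per_day = 3_000_000
road_miles_per_week = 12_500  # midpoint of the 10,000-15,000 range

ratio = simulated_miles_per_day * 7 / road_miles_per_week
print(f"Simulation covers ~{ratio:,.0f}x the fleet's weekly road mileage")
# ~1,680x: every real-world mile is multiplied many times over in simulation.
```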

See also: How to Master the ABCs of Innovation  

7. Crawl your way there

Invention requires both inspiration and perspiration. Inspired by this alternative perspective of thinking about their work, researchers can much more effectively channel their perspiration. As Kay is known for saying, “Point of view is worth 80 IQ points.”

PARC’s success demonstrates that even if one pursues a 15-year vision — or, more accurately, because one pursues such a long-term vision — many interim benefits might well come of the effort. And, while the idea of giving researchers 2,000 supercomputers and building custom software environments might seem extravagant and expensive, it is actually quite cheap when you consider how much you can learn and invent.

Over five glorious years in the early 1970s, the work at PARC drove the evolution of much of future computing. The software environment advanced to become more user-friendly and supportive of communications and different kinds of media. This led to many capabilities that are de rigueur today, including graphical interfaces, high-quality bit-mapped displays, what-you-see-is-what-you-get (WYSIWYG) word processing and page layout applications. The hardware system builders learned more about what it would take to support future applications and also evolved accordingly. This led to hardware designs that better supported the display of information, network communications and connecting to peripherals, rather than being optimized for number crunching. Major advancements included Ethernet, laser printing, peer-to-peer and client-server computing and internetworking.

Kay estimates that the total budget for the parts of Xerox PARC that contributed to these inventions was about $50 million in today’s dollars. Compare that number to the hundreds of billions of dollars that Xerox directly earned from the laser printer.

Xerox 9700 Printers

Although the exact number is hard to calculate, the work at PARC also unlocked trillions reaped by other technology-related businesses.

One of the most vivid illustrations of the central role that Xerox played was a years-later interchange between Steve Jobs and Bill Gates. In response to Jobs’ accusation that Microsoft was stealing ideas from the Mac, Gates tells him:

Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.

Kay cautions that his method is not a cookbook for invention. It is more like a power tool that needs to be wielded by skilled hands.

It is also a method that has been greatly enabled by Kay and his colleagues’ inventions. Beyond the technology industry that they helped spawn, their inventions also underpin discovery and innovation in every field of science and technology, including chemistry, biology, engineering, health and agriculture. Information technology is not only a great invention; it has reinvented invention. It powers the favorable exponential curves upon which other inventors can escape the present and invent the future.

See also: How We’re Wired to Make Bad Decisions

For his part, Kay continues to lead research at the frontiers of computing, with a continued emphasis on human advancement. In addition to his Viewpoints Research Institute, he recently helped to form the Human Advancement Research Community (HARC) at YC Research, the nonprofit research arm of Y Combinator. HARC’s mission is “to ensure human wisdom exceeds human power, by inventing technology that allows all humans to see further and understand more deeply.”

That is a future worth inventing.

To Shape the Future, Write Its History

“History will be kind to me, for I intend to write it myself.” — Winston Churchill

When it comes to large-scale innovation, my experience is that history will indeed be kinder if aspiring innovators take the time to write it themselves—but before it actually unfolds, not after.

Every ambitious strategy has multiple dimensions and depends on complex interactions between a host of internal and external factors. Success requires achieving clarity and getting everyone on the same page for the challenging transition to new business and operational models. The best mechanism for doing that is one I have used often, to powerful effect. I call it a “future history.”

Future histories fulfill our human need for narratives. As much as we like to think of ourselves as modern beings, we still have a lot in common with our earliest ancestors gathered around a fire outside a cave. We need stories to crystallize and internalize abstract concepts and plans. We need shared stories to unite us, and guide us toward a collective future.

Future histories provide that story for large organizations.

See also: What Is the Right Innovation Process?  

The CEO of a major financial services company occasionally still reads to internal audiences parts of the future histories that I helped him and his management team write in early 2011. He says they helped him get his team focused on the right opportunities. As of this writing, his company’s stock has almost doubled, even though his competitors have had problems.

To create future histories, I have executive teams imagine that they are five years in the future and ask them to write two memos of perhaps 750 to 1,000 words each.

For the first memo, I ask them to imagine that the strategy has failed because of some circumstance or because of resistance from parts of the organization, investors, customers or other key stakeholders. The memo should explain the failure. The exercise lets people focus on the most critical assumptions and raise issues without being seen as naysayers. There is usually no lack of potential problems to consider, including technology developments, employee resistance, customer activities, competitors’ actions, governmental actions, substitute products and so on. Articulating the rationale for failure in a clearly worded memo crystallizes thinking about the most likely issues.

To heighten the effect, I sometimes do some formatting and structure the memo like an article from the Wall Street Journal or New York Times. Adopting a journalist’s voice helps to focus the narrative on the most salient points. And everybody hates the idea of being embarrassed in such publications, so readers of the memo pay attention to the potential problems while there’s still time to address them.

The second memo is the success story. What key elements and events helped the organization shake its complacency? What key strategic or technological shifts helped to capture disruptive opportunities? How did the organization’s unity help it to out-innovate existing players and start-ups? This part of the exercise encourages war-gaming and helps the executive team understand the milestones on the path to success.

Taken together, the future histories provide a new way of thinking about the long-term aspirations of the organization and the challenges facing it. By producing a chronicle of what could be the major success and most dreaded failures, the organization gains clarity about the levers it needs to pull to succeed and the pitfalls it needs to avoid.

Most importantly, by working together to write the future histories, the executive team develops a shared narrative of those potential futures. It forges alignment around the group’s aspirations, critical assumptions and interdependencies. The process of drafting and finalizing the future histories also prompts the team to articulate key questions and open issues. It drives consensus about key next steps and the overall change management road map. In a few weeks’ time, future histories can transform the contemplated strategy into the entire team’s strategy.

See also: How to Create a Culture of Innovation  

Future histories also facilitate the communication of that shared strategy to the rest of the organization. Oftentimes, senior executives extend the process to more layers of management to flesh out the success and failure scenarios in greater detail and build wider alignment.

Future histories take abstract visions and strategies and make them real, in ways that get people excited. They help people understand how they can contribute—how they must contribute—even if they aren’t directly involved in the innovation initiative. People can understand the timing and see how efforts will build.

People can also focus on the enemies that, as a group, they must fend off. These enemies may no longer be saber-toothed tigers, but they are still very real and dangerous to corporations. “Future histories” unite teams as they face the inevitable challenges.

Who Is Leading in Driverless Cars?

Imagine if you could pick between Uber drivers based on their driving experience. Would you hire an experienced driver who has logged hundreds of thousands of road miles or one who has driven just a few hundred miles? I’ll bet you’d go with the experienced driver.

Now apply the same question to driverless cars. How would you pick? The same logic applies: Go with experience.

By the miles-driven heuristic, recent reports released by the California Department of Motor Vehicles show that Waymo (the new Alphabet spinout previously known as Google’s Self-Driving Car program) is running laps around its competitors. As with human drivers, experience matters for driverless capabilities. That’s because the deep learning AI techniques used to train driverless cars depend on data—especially data that illuminates rare and dangerous “edge cases.” The more training data, the more confidence you can have in the results.

See also: How to Picture the Future of Driverless  

In 2016, Waymo logged more than 635,000 miles while testing its autonomous vehicles on California’s public roads compared to just over 20,000 for all its competitors combined.

As the W. Edwards Deming principle that is popular in Silicon Valley goes, “In God we trust, all others bring data.” The data shows that Waymo is not only 615,000 miles ahead of its competitors but that those competitors are still neophytes when it comes to proving their technology on real roads and interacting with unpredictable elements such as infrastructure, traffic and human drivers.

Now, there are lots of ways to cut the data and therefore a lot of provisos to the simple test-miles-driven heuristic.

Waymo also leads the others with far fewer “disengagements,” which occur when human test drivers have to retake control from the driverless software. Waymo’s test drivers had to disengage 124 times, or about once every 5,000 miles.

Other companies were all over the map in terms of their disengagements. BMW had one disengagement during 638 total miles of testing. Tesla had 182 disengagements in 550 miles. Mercedes-Benz had 336 disengagements over 673 miles. Fewer miles might mean fewer edge cases were encountered, or it might mean that those companies tested particularly difficult scenarios. But low total miles driven casts doubt on the readiness of any system for operating on public roads. Until other contenders ramp up their total miles by a factor of 1,000 or more, their disengagement statistics are not statistically meaningful.
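
For reference, here is the simple miles-per-disengagement arithmetic behind these comparisons, using the DMV figures quoted above:

```python
# Miles per disengagement from the 2016 California DMV reports, using the
# figures quoted in the article.
reports = {
    "Waymo": (635_000, 124),  # (autonomous test miles, disengagements)
    "BMW": (638, 1),
    "Tesla": (550, 182),
    "Mercedes-Benz": (673, 336),
}

for company, (miles, disengagements) in reports.items():
    print(f"{company}: {miles / disengagements:,.0f} miles per disengagement")
# Waymo: ~5,121 miles per disengagement; the others' totals are too small
# to be statistically meaningful.
```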

Tesla fans could rightly point to the more than two hundred million miles that Tesla owners have logged under Tesla’s Autopilot feature. Those miles are not considered here. (Autopilot is not defined as autonomous under California law, so Tesla is not required to report disengagements to the California DMV.) But, no doubt, all those miles mean that Tesla’s Autopilot software is probably very well trained for highway driving.

What do those highway miles tell us about Tesla’s ability to handle city streets, which are more complex for driverless cars? Not much, but the 550 miles that Tesla did spend on public road autonomous testing speaks volumes about its dearth of experiential learning on city streets. (Ed Niedermeyer, an industry analyst, recently argued that most of Tesla’s 550 miles were probably logged while filming one marketing video.)

See also: Novel Solution for Driverless Risk  

It should also be noted that the reported data applies only to California; it does not account for testing in other active driverless hubs—such as Waymo’s test cars in Austin, TX, Uber’s driverless pilots in Pittsburgh or nuTonomy’s testing in Singapore (just to name a few). It is safe to guess, however, that a significant percentage of all autonomous testing has been logged in California.

Notably missing from the reports to the California DMV are all other Big Auto makers and suppliers—and other players cited or rumored as driverless contenders, like Apple and Baidu. They might well be learning to drive on private test tracks or outside of California. But, until they bring data about their performance after significant miles on public roads, don’t trust the press releases or rumors about their capabilities.

Waymo’s deep experience in California does not guarantee its victory. Can it stay ahead as others accelerate? That remains to be seen, but it is clear from the California DMV reports that Waymo is way ahead on the driverless learning curve.