Over the weekend, two articles made a compelling case that we need to better vet academic studies before they become set in the public consciousness on controversial topics like possible systemic racism and the coronavirus. Both recommended a solution that has been a focus of my career — devil’s advocates — and that we all should use as we formulate personal and corporate strategies in these turbulent times.
Let’s spend a minute on why they’re so important and how you can use them — rather easily, in fact.
The article related to the coronavirus argues that a serious attempt at research wasn’t vetted quickly enough and, when published in April, had obvious shortcomings that allowed many to believe that the virus wasn’t as dangerous as it has turned out to be. The one concerning a paper on possible systemic racism by police went through peer review, but the authors say the process isn’t designed to catch fraud and is vulnerable to rigging. In the case of the paper they discuss, a reader caught a major error shortly after publication, and the paper was withdrawn — but not before many used it to dismiss the notion of racism in policing.
Both articles obviously touch on hot buttons, and the specifics of the arguments about the research they discuss could distract from the point I want to make, so I won’t go into more detail. You can read the articles and reach your own conclusions. I’ll just note that both say problems would have been avoided if the virus and racism research had been put in front of devil’s advocates — people whose task is solely to identify potentially bad assumptions, in time to do something about them.
That need for devil’s advocates is a theme I’ve been sounding with corporate America for a dozen years and is especially important now. The New York Times and the Wall Street Journal ran articles recently saying that corporations are starting to believe both that the economic crisis caused by the pandemic will last longer than they had hoped and that the new normal will look quite different. So, a strategic rethink is happening all at once in a whole lot of C-suites, which creates opportunities both for progress and for mischief caused by bad assumptions — that devil’s advocates could head off.
My belief in the power of devil’s advocates dates back to a book, “Billion Dollar Lessons,” that Chunka Mui and I published in 2008 on the lessons to be learned from corporate failures. Out of the 750 major writeoffs that we spent two years investigating in detail, with the help of 20 researchers, we found that 46% stemmed from strategies that should have been identified ahead of time as brain-dead. Think of Avon deciding that its main asset wasn’t its door-to-door sales force but its “culture of caring,” which led the company to buy a medical equipment manufacturer and an operator of retirement homes, then quickly sell them at a loss because the cosmetics company had no idea what to do with them. Or think of Blue Circle, one of the world’s biggest cement companies, deciding that it was really a home products company that should make and sell lawn mowers, among many other things, then filing for bankruptcy protection and being acquired.
We posited in the book that loads of people internally must have seen the problems coming but couldn’t stop the strategies because of internal dynamics — for instance, the CEO is often the one championing a new strategy, so the tendency is to want to confirm the idea, not to challenge it. Our subsequent research and consulting, as devil’s advocates, have confirmed our thesis. (We’re not alone, either. Much has been written in recent years about the value of a devil’s advocate, sometimes in the form of a red team/blue team exercise.)
The key issue is: How do you identify problems in a way that’s acceptable within the complex culture of a C-suite? How do you help the company win without making some powerful individual lose — or see the devil’s advocate process quashed if it looks like the CEO will be the loser?
The main answer is to turn the devil’s advocate process into a bloodless exercise. You don’t give the devil’s advocate the power to rule on whether a strategy is right or even to hazard an opinion. The decision needs to stay with the CEO. You simply have the devil’s advocate interview senior executives to probe for vulnerabilities, then use the concerns to identify the assumptions that have to be true for a strategy to succeed. Because the CEO has authorized the process, he or she can face the evidence and kill the strategy without losing face. If the decision is to proceed, the CEO will have a better idea about the pitfalls that may lie ahead.
Choosing a devil’s advocate can be tricky. You can hire an outsider, who will bring objectivity but may take time to get up to speed. You can ask for a volunteer among senior insiders, but few want to be known as the naysayer, at least on more than a one-time basis. It seems to work best to designate an insider, so the whole team knows that the person is simply playing a role. (Irving Janis, in his pioneering 1982 book “Groupthink,” described how President Kennedy designated his brother Bobby to be the devil’s advocate after the administration had botched the Bay of Pigs invasion; Bobby then routinely challenged claims by military leaders during the Cuban missile crisis and may well have saved the world from nuclear war. Quite the endorsement for a designated devil’s advocate….)
As insurers reformulate strategies to prepare for what could be an extended economic crisis and for a rather different world on the other side of it, they should build a devil’s advocate into the process. Companies are making a lot of assumptions, many of which they don’t even know they’re making or made long enough ago that the assumptions have been forgotten. Some of those assumptions are wrong — and many senior executives either know or suspect which ones should be challenged and rethought. (If I had to bet, the biggest mistake that companies in general will make in this go-’round is to underestimate what competitors are doing. The tendency is to see competitors as static, but they’re working just as hard and perhaps as creatively in their strategy rooms as you are in yours.)
By the way, a devil’s advocate approach can help you get better feedback on personal issues, just by having you rephrase questions. Don’t ask a friend or family member if some plan of yours is a good idea. They’ll know you want affirmation and give it to you. Instead, present a plan neutrally, say you’re looking for holes in the idea and ask your friend or relative to help you identify the potential problems. Then, on your own, you can weigh those concerns against the benefits that you’ve already seen.
Knowing about pitfalls won’t always matter. I consistently underestimate how long it will take me to write something, even though I allow for the fact that I always underestimate. But at least a devil’s advocate process will open your eyes to many of the problems that lie in wait out there.
So, challenge those assumptions.
And stay safe.
P.S. Here are the six articles I’d like to highlight from the past week:
Whether you are part of building a modern digital enterprise platform for mid-sized to large insurance companies or part of a startup that distinguishes itself through innovative technologies, you are likely to be hearing about microservices.
Microservices architecture has become increasingly popular and is often associated with benefits such as scale, speed of change, ease of integration, fault tolerance and the ability to adapt to changing business demands and models. Commitment from digital giants such as Amazon, Netflix, PayPal, eBay, Twitter and Uber, which built and scaled their platforms on microservices architecture, has galvanized adoption across many industries.
Source: Google Trends
A crucial question is, “How will microservices help insurers design open platforms for building sustainable competitive advantage?”
This four-part blog series will share our views based on our experience in building a modern digital platform using microservices. This first blog will provide a general primer about microservices. The second will share our view on the applicability and strategic potential of microservices for insurance. The third will illustrate best practices and applied principles of designing a microservices-based platform. The final blog will share how our innovative Majesco Digital1st platform will help insurers simplify and accelerate the development of microservices apps.
Let’s start with the basic question, “What are microservices?” You can find the answer through a simple Google search, but let’s explain it in simple terms. Think of a microservice as a micro application that enables a specific, granular business function, such as payments, policy issuance, policy documents or first notice of loss (FNOL). The micro application can be deployed independently and can communicate with other micro applications serving other business functions through a well-defined interface. This approach is in stark contrast to “monolith applications,” such as policy management, billing and claims systems, which work as an aggregation of multiple business functions tightly woven together and must be deployed as a single large unit.
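The contrast can be sketched in code. The Python sketch below models two insurance microservices in a single process purely for illustration; in production, each would run as its own deployable unit behind HTTP, gRPC or messaging. The service names, message shapes and the flat reserve amount are hypothetical, not part of any product’s actual API.

```python
class FnolService:
    """Hypothetical first-notice-of-loss (FNOL) microservice.

    It owns its own data store and exposes one well-defined,
    JSON-style interface; other services never touch its internals.
    """

    def __init__(self):
        self._claims = {}   # private data store, owned only by this service
        self._next_id = 1

    def handle(self, request: dict) -> dict:
        """The service's public contract: a JSON-like request/response."""
        if request.get("action") == "report_loss":
            claim_id = self._next_id
            self._next_id += 1
            self._claims[claim_id] = {"policy": request["policy"],
                                      "description": request["description"]}
            return {"status": "accepted", "claim_id": claim_id}
        return {"status": "error", "reason": "unknown action"}


class PaymentService:
    """A second microservice that talks to FNOL only via its interface."""

    def __init__(self, fnol: FnolService):
        self._fnol = fnol

    def open_claim_and_reserve(self, policy: str, description: str) -> dict:
        reply = self._fnol.handle({"action": "report_loss",
                                   "policy": policy,
                                   "description": description})
        if reply["status"] == "accepted":
            # Reserve a flat, purely illustrative amount for the new claim.
            return {"claim_id": reply["claim_id"], "reserve": 1000.0}
        return {"error": reply["reason"]}
```

Because each service hides its data behind its interface, either one can be rewritten, redeployed or rescaled without touching the other, which is exactly what a monolith cannot do.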
An architectural pattern called self-contained systems (SCS) is often discussed alongside microservices but does not provide their full benefits. The SCS pattern recommends bundling cohesive services together into one self-contained, individually deployable unit. Because the services inside that unit are no longer independently deployable, they cannot be considered microservices. While this approach is better than a monolithic application, it amounts to building multiple small monoliths!
So why does anyone advocate the microservices approach? Simply put, it addresses the issues of monolith architectures that inhibit digital models. Even after functional decomposition into several deployment artifacts, a monolith’s components remain part of a single code base that must be managed as a single deployment unit.
In contrast, a microservices architecture has the following advantages when done well:
Velocity and Agility – Maintaining and evolving monolith applications is expensive and slow because changes cause inadvertent side effects in other functions and services. Dealing with those side effects requires additional work, including impact analysis and elaborate, expensive testing, and forces changes into large, infrequent releases to optimize testing effort. A microservice, in contrast, is a low-impact, single-responsibility business function that performs its own tasks, manages its own data and communicates with other microservices through a well-defined interface. That lets you make and deploy changes reliably, incrementally and quickly.
Scale – Microservices allow easy monitoring that can predict seasonal or unique demands on a business function. Because each microservice runs in its own process, it can easily be scaled up and down with elastic containers. In comparison, a monolith architecture runs multiple business functions in a single process, making it harder to direct resources to the specific business functions that need them.
Decentralized Governance and Teams – The separate code bases of microservices allow different parts of an organization to build business functions, rather than one large, centralized team. Each team can manage its microservices with full DevOps (development and operations) responsibility and accountability. This gives insurers the freedom to choose the technology best suited to each business function.
Self-Contained and Sustainable – With monolithic applications, introducing a new business capability that requires upgrading external dependencies (OS, shared libraries, etc.) means testing the entire application. In contrast, microservices are self-contained from the OS down to the actual implementation code, so each can be upgraded separately, based on business and operational needs, without affecting unrelated application functions. This keeps the application stack relevant and avoids the risk of running on an obsolete technology stack.
Hypothesis-Driven Development – The advantages outlined above lead to a completely different way of thinking about software development. The focus shifts from managing projects and defect backlogs to pursuing new opportunities, experimenting and observing how the application is actually used. Experimental changes can be built and deployed into production quickly, in small increments. When errors happen, they can be fixed in minutes or hours rather than days or months, and a problematic incremental upgrade can be rolled back quickly without major loss of functionality or downtime.
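The scale advantage in the list above can be made concrete with a little arithmetic. In this sketch, the per-function request rates and per-replica capacity are invented for illustration: when one function spikes, a monolith must replicate the entire application, while microservices scale only the hot function.

```python
import math

def replicas_needed(requests_per_sec: float, capacity_per_replica: float) -> int:
    """Smallest replica count that keeps each instance within capacity."""
    return max(1, math.ceil(requests_per_sec / capacity_per_replica))

# Illustrative seasonal load: quoting spikes while payments stay flat.
load = {"quote": 900.0, "payment": 40.0, "fnol": 120.0}  # requests/sec
capacity = 100.0  # assumed requests/sec one replica can serve

# Microservices: each function is scaled independently.
micro = {svc: replicas_needed(rps, capacity) for svc, rps in load.items()}
# -> quote needs 9 small replicas; payment and fnol stay at 1 and 2.

# Monolith: every replica carries all functions, so the whole
# application must be sized to the aggregate load.
mono = replicas_needed(sum(load.values()), capacity)
# -> 11 full copies of the entire application, most of it idle.
```

The replica counts are similar, but the monolith's 11 copies each duplicate every business function, while the microservices deploy small processes sized to each function's actual demand.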
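Hypothesis-driven development, the last item above, often takes the shape of a canary rollout: ship a small change to a slice of traffic, observe it in production and promote or roll back based on the data. A minimal decision rule might look like the following; the error-rate tolerance is an illustrative assumption, not an industry standard.

```python
def canary_deploy(baseline_error_rate: float,
                  canary_error_rate: float,
                  tolerance: float = 0.01) -> str:
    """Promote an incremental release only if the canary's observed
    error rate stays within `tolerance` of the current baseline."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"
```

For example, a canary running at a 2.5% error rate against a 2% baseline would be promoted, while one at 10% would be rolled back within minutes, which is the fix-in-hours-not-months loop described above.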
As with all innovation, there is a flip side to the coin. Not all organizations are ready to adopt a microservices architecture immediately. In particular, if a company cannot build a well-designed monolith, building a microservices platform will be much harder. Microservices architecture is inherently complex to develop and operate, but the rewards are worth the hurdles: microservices give the organization far greater efficiency and capabilities focused on the future.
Fundamentally, microservices require organizational change, not just adoption of a technology pattern. Organizations must rethink end-to-end DevOps by thinking in terms of small business functions, distributed teams, decentralized governance and continuous delivery. In addition, the organization must embrace multiple technologies suited for a business platform rather than a single technology platform, which is a significant change for organizations schooled in building applications using traditional software development processes.
Even success stories like Amazon and Netflix did not start with a microservices architecture; rather, they evolved over time as they matured. If you are a startup building an MVP (minimum viable product), it may not be advisable to delay market launch for the large up-front effort of establishing microservices. However, startups should recognize that at some point they will have to invest in migrating to microservices to support scalability and changing business models.
Operating a platform made of hundreds or thousands of microservices, while meeting scalability and growing business demands, does create tremendous complexity for deployment, auto-scaling, monitoring, logging and many other DevOps concerns. Diagrams of microservices deployments at Amazon and Netflix (images by AppCentrica) show the complexity of managing a reliable business operation with millions of continuing deployments within an ecosystem of microservices, often written in different languages and backed by different databases. Companies like Amazon and Netflix deal with this complexity through a high degree of automation and significant investment in shared, automated infrastructure that builds in resiliency.
Despite the complexity in managing microservices, separation of responsibilities across microservices offers organizations significant benefits in today’s platform economy. We outline these in our thought leadership report, Cloud Business Platform: The Path to Digital Insurance 2.0. The constant pivoting of business priorities requires a continuous and high degree of system changes that enable new strategies. Microservices can bring great value to agility, velocity, availability, scalability and accountability across both technical and business organizational dimensions.
We believe that every organization should exercise patient urgency, which author and futurist Chunka Mui describes as “the combination of foresight to prepare for a big idea, willingness to wait for the right market conditions and agility to act straight away when conditions ripen.”
We look forward to covering our views on the role of microservices in insurance in Part 2. Please share your views on this exciting topic in the comments section. We would enjoy hearing your perspective.
This article was written by Manish Shah and Sachin Dhamane.
Urmson’s recent “Perspectives on Self-Driving Cars” lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And, it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode.
1. There is a lot more chaos on the road than most recognize.
Much of the carnage from vehicle accidents is easy to measure. In 2015, in the U.S. alone, 35,092 people were killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.
Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” Fender bender results.
While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.
2. Human intent is the fundamental challenge for driverless cars.
The choices made by driverless cars are critically dependent on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and human drivers of other cars. For Urmson, the difficulty in doing this is “the heart of the problem” going forward.
To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)
Google Car Crashes With Bus; Santa Clara Transportation Authority
In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (due to sand bags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.
In the Uber accident, the SDC was in the leftmost of three lanes. Traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.
A human driver wanted to turn left across the three lanes and pulled out in front of the cars in the two stopped lanes. The driver probably could not see across the blocked lanes into the Uber car’s lane and, given the stopped traffic, expected that anything coming down that lane would be moving slowly. The turning car pulled into the Uber car’s lane to make the turn, and the result was a sideways parked car.
In the fatal Tesla crash, the driver had been using Tesla’s Autopilot for a long time and trusted it—despite Tesla saying, “Don’t trust it.” Tesla’s user manuals told drivers to keep their hands on the wheel, their eyes on the road, and so on. The vehicle expected that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.
Tesla, to its credit, has made modifications to improve the car’s understanding about whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against car inadequacies.
3. Incremental driver assistance systems will not evolve into driverless cars.
Urmson characterized “one of the big open debates” in the driverless car world as between Tesla’s (and other automakers’) vs. Google’s approach. The former’s approach is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter is “No, no, these are two distinct problems. We need to apply different technologies.”
Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and trust that the car will not have anyone to take control. The incremental approach, he argues, guides developers toward a selection of technologies that will limit their ability to bridge over to fully driverless capabilities.
4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.
The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.
Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”
Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.
5. The “mad rush” is justified.
Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.
Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:
3 Trillion VMT * $0.10 per mile = $300B per year
In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue—just in the U.S.
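Urmson’s round numbers are easy to check. Using the 3.2 trillion-mile figure, the arithmetic gives roughly $320 billion a year, which the headline equation rounds to $300 billion at 3 trillion miles:

```python
vmt_2016 = 3.2e12       # approximate U.S. vehicle miles traveled in 2016
price_per_mile = 0.10   # hypothetical charge, in dollars per mile

annual_revenue = vmt_2016 * price_per_mile
print(f"${annual_revenue / 1e9:.0f}B per year")  # prints "$320B per year"
```

Even with generous rounding, the order of magnitude is what matters: a dime per mile across U.S. driving alone is a market in the hundreds of billions of dollars annually.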
This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.
To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.
You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But, you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.
Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.
Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”
Alan Kay is widely known for the credo, “The best way to predict the future is to invent it.” For him, the phrase is not just a witty quip; it is a guiding principle that has yielded a long list of accomplishments and continues to shape his work.
Kay was a ringleader of the exceptional group of ARPA-inspired scientists and engineers that created an entire genre of personal computing and pervasive world-wide networking. Four decades later, most of the information-technology industry and much of global commerce depends on this community’s inventions. Technology companies and many others in downstream industries have collectively realized trillions of dollars in revenues and tens of trillions in market value because of them.
Alan Kay made several fundamental contributions, including personal computers, object-oriented programming and graphical user interfaces. He was also a leading member of the Xerox PARC community that actualized those concepts and integrated them with other seminal developments, including the Ethernet, laser printing, modern word processing, client-servers and peer-peer networking. For these contributions, both the National Academy of Engineering and the Association of Computing Machinery have awarded him their highest honors.
I’ve worked with Alan for more than three decades to help bring his insights into the business realm. I also serve on the board of Viewpoints Research Institute, the nonprofit research organization that he founded and directs. Drawing on these vantage points and numerous conversations, I’ll try to capture his approach to invention. He calls it a method for “escaping the present to invent the future” and describes it in seven steps:
Smell out a need
Apply favorable exponentials
Project the need 30 years out, imagining what might be possible in the context of the exponential curves
Create a 30-year vision
Pull the 30-year vision back into a more concrete 10- to 15-year vision
Compute in the future
Crawl your way there
Here’s a summary of each step:
1. Smell out a need
“Everybody loves change, except for the change part,” Kay observes. Because the present is so vivid and people have heavy incentives to optimize it, we tend to fixate on future scenarios that deliver incremental solutions to existing problems. To reach beyond the incremental, the first step to inventing the future is deep “problem finding,” rather than short-term problem solving. Smell out a need that is trapped by incremental thinking.
In Alan’s case, the need that he sensed in the late ’60s was the potential for computers to redefine the context of how children learn. Prompted by conversations with Seymour Papert at MIT and inspired by the work of Ivan Sutherland, J.C.R. Licklider, Doug Engelbart and others in the early ARPA community, Kay realized that every child should have a computer that helps him or her learn. Here’s how he described the insight:
It was like a magnet on the horizon. I had a lot of ideas but no really cosmic ones until that point.
This led Kay to wonder how computers could form a new kind of reading and writing medium that enabled important and powerful ideas to be discussed, played with and learned. But, the hottest computers at the time were IBM 360 mainframes costing millions. The use of computers in educating children was almost nonexistent. And, there were no such things as personal computers.
2. Apply favorable exponentials
To break the tyranny of current assumptions, identify exponential improvements in technological capabilities that could radically alter the range of possible approaches.
In 1965, Gordon Moore made his observation that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. Moore’s prediction, which would become known as Moore’s Law, was the “favorable exponential” that Kay applied.
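As a rough, hedged calculation (the doubling period varies by formulation of Moore’s Law; 18 months is a commonly cited figure), compounding that exponential over a 30-year horizon yields roughly a million-fold improvement, which is why Kay could assume capabilities wildly beyond the mainframes of his day:

```python
def capability_multiple(years: float, doubling_period_years: float = 1.5) -> float:
    """Projected improvement after `years` of steady exponential doubling."""
    return 2 ** (years / doubling_period_years)

# 30 years at an assumed 18-month doubling period: 2**20 doublings' worth.
factor = capability_multiple(30)
print(f"{factor:,.0f}x")  # prints "1,048,576x"
```

The exact multiplier is less important than the habit of mind: any need you smell out today should be re-imagined against a resource base a million times larger.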
Today, the fruits of Moore’s Law, such as mobile devices, social media, cloud computing, big data, artificial intelligence and the Internet of Things, continue to offer exponential advances favorable for invention. As I’ve previously written, these are make-or-break technologies for all information-intensive companies. But don’t limit yourself to those.
Kay is especially optimistic about the favorable exponential at the intersection of computer-facilitated design, simulation and fabrication. This is the process of developing concepts and ideas using computer design tools and then testing and evolving them using computer-based simulation tools. Only after extensive testing and validation are physical components ever built, and, when they are, it can be done through computer-mediated fabrication, including 3D printing.
This approach applies to a wide range of domains, including mechanical, electrical and biological systems. It is becoming the standard method for developing everything, including car parts and whole cars, computer algorithms and chips, and even beating nature at its own game. Scientists and engineers realize tremendous benefits in terms of the number of designs that can be considered and the speed and rigor with which they can do so. These allow, Kay told me, “unbelievable leverage on the universe.”
3. Project the need 30 years out and imagine what might be possible in the context of the exponential curves
Thirty years is so far in the future that you don’t have to worry about how to get there. Focus instead on what is important to have. There’s no possibility of being forced to demonstrate or prove how to get there incrementally.
Asking “How is this incremental to the present?” is the “biggest idea killer of all time,” Kay says. The answer to the “incremental” question, he says, is “Forget it. The present is the least interesting time to live in.”
Instead, by projecting 30 years into the future, the question becomes, “Wouldn’t it be ridiculous if we didn’t have this?”
Projecting out what would be “ridiculous not to have” in 30 years led to many visionary concepts that earned Kay wide recognition as “the father of the personal computer.” He was sure, for example, that children would have ready access to laptops and tablets by the late 1990s — even though personal computers did not yet exist. As he saw it, there were technological reasons for it, user reasons for it and educational reasons for it. All those factors contributed to his misty vision, and he didn’t have to prove it, because 30 years was so far in the future.
How might the world look relative to the needs that you smell out? What will you have ready access to in a world with a million times greater computing power, cheap 3D fabrication, boundless energy and so on? Remember, projecting to 2050 is intended as a mind-stretching exercise, not a precise forecasting one. This is where romance lives, albeit romance underpinned by deep science rather than pure fantasy.
4. Create a 30-year vision
A vision is different from a mission or a goal. If the previous step was about romance, a 30-year vision is more like a dream. It is a vague picture of a desirable future state of affairs in that 30-year future. This is the step where Kay’s recognition that computers would be widely available by the late 1990s turned into a vision of what form those computers might take.
That vision included the Dynabook, a powerful and portable electronic device the size of a three-ring notebook with a touch-sensitive liquid crystal screen and a keyboard for entering information. Here’s one of Kay’s early sketches of the Dynabook from that time.
[Image: DynaBook concept drawing]
The next illustration is Kay’s sketch of the Dynabook in use. He describes the scenario as two 12-year-olds learning about orbital dynamics from a version of “Spacewar” that they wrote themselves. They are using two personal Dynabooks connected over a wireless network.
[Image: Children using Dynabooks]
Kay’s peers in the ARPA community had already envisioned some of the key building blocks for the Dynabook, such as LCD panels and an Internet-like, worldwide, self-healing network. (For a fascinating history of the early ARPA community, see Mitchell Waldrop’s brilliant book, “The Dream Machine.”)
For Kay, these earlier works crystallized into the Dynabook once he thought about them in the context of children’s education. As he described it,
The Dynabook was born when it had that cosmic purpose.
Laptops, notebook computers and tablets have roots in the early concepts of the Dynabook.
5. Pull the 30-year vision back into a 10- to 15-year lesser vision
Kay points out that one of the powerful aspects of computing is that, if you want to live 10 to 15 years in the future, you can do it. You just have to pay 10 to 20 times as much. That’s because tomorrow’s everyday computers can be simulated using today’s supercomputers. Instead of suffering the limitations of today’s commodity computers (which will be long obsolete before you get to the future you are inventing), inventors should use customized supercomputers to prototype, test and evolve aspects of their 30-year vision. Pulling back into the 10- to 15-year window brings inventors back from the “pie in the sky” to something more concrete.
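Kay’s 10-to-20x rule of thumb can be inverted: given a cost multiple, how many years of commodity progress does it buy? A minimal sketch, assuming price-performance doubles on a fixed cadence; the 3-year period below is a hypothetical constant chosen because it roughly reproduces Kay’s arithmetic, not a figure from the source:

```python
import math

def years_ahead(cost_multiple, doubling_period_years=3.0):
    """Years of commodity progress a cost multiple buys, assuming
    price-performance doubles every `doubling_period_years` (an
    illustrative assumption, not a measured constant)."""
    return math.log2(cost_multiple) * doubling_period_years

# Paying 10x to 20x today's price buys roughly a decade of progress:
print(round(years_ahead(10)))   # 10
print(round(years_ahead(20)))   # 13
```

Under these assumptions, a machine at 10 to 20 times commodity cost behaves like the everyday machine of 10 to 13 years hence, which is the arbitrage the interim Dynabook exploited.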
The pull-back started with Butler Lampson and Chuck Thacker, two of PARC’s leading engineers, asking Kay, “How would you like us to build your little machine?” The resulting computer was an “interim Dynabook,” as Kay thought of it, but it is better known as the Xerox Alto.
The Alto was the hardware equivalent of the Apple Macintosh of 1988, but running in 1973. Instead of costing a couple of thousand dollars each, the Alto cost about $70,000 (in today’s dollars). PARC built 2,000 of them — thereby providing Kay and his team with the environment to develop the software for a 15-year, lesser-but-running version of his 30-year vision.
6. Compute in the future
Now, having created the computing environment of the future, you can invent the software. This approach is critical because the hardest thing about software is getting from requirements and specification to properly running code.
Much of the time spent developing software goes to optimizing code for the limitations of the hardware environment—i.e., making it run fast enough and reliably enough. Providing a more powerful, unconstrained, futuristic computing environment frees developers to focus on invention rather than optimization. (This was the impetus for another Kay principle, popularized by Steve Jobs, that “people who are really serious about software should make their own hardware.”)
The Alto essentially allowed PARC researchers to simulate the laptop of the future. Armed with it, Kay was a visionary force at PARC.
Kay led the Learning Research Group at PARC and, though PARC’s mission focused on the office environment, he rightly decided that the best path toward that mission was to focus on children in educational settings. He and his team studied how children could use personal computers in different subject areas. They studied how to help children learn to use computers and how children could use computers to learn. And they studied how the computers needed to be redesigned to facilitate such learning.
[Image: Children with a Xerox Alto]
The power of the Alto gave Kay and his team, which included Adele Goldberg, Dan Ingalls, Ted Kaehler and Larry Tesler, the ability to do thousands of experiments with children in the process of understanding these questions and working toward better software to address them.
We could have a couple of pitchers of beer at lunch, come back, and play all afternoon trying out different user interface ideas. Often, we didn’t even save the code.
For another example of the “compute in the future” approach, consider Google’s driverless car. Rather than using off-the-shelf or incrementally better car components, Google researchers used state-of-the-art lidar, cameras, sensors and processors in their experimental vehicles. Google also built prototype vehicles from scratch, in addition to retrofitting current car models. The research vehicles and test environments cost many times as much as standard production cars and facilities. But they were not meant for production. Google’s researchers know that Moore’s Law and other favorable exponentials will soon make their research platforms practical.
Its “compute in the future” platforms allow Google to invent and test driving algorithms on the car platforms of the future, today. Google greatly accelerated the state of the art of driverless cars and ignited a global race to perfect the technology. Google recently spun off a separate company, Waymo, to commercialize the fruits of this research.
Waymo’s scientists and engineers are learning from a fleet of test vehicles driving 10,000 to 15,000 miles a week on public roads and interacting with real infrastructure, weather and traffic (including other drivers). The developers are also taking advantage of Google’s powerful cloud-based data and computing environment to do extensive simulation-based testing. Waymo reports that it is running its driving algorithms through more than three million miles of simulated driving each day (using data collected by its experimental fleet).
Invention requires both inspiration and perspiration. Inspired by this alternative perspective of thinking about their work, researchers can much more effectively channel their perspiration. As Kay is known for saying, “Point of view is worth 80 IQ points.”
PARC’s success demonstrates that even if one pursues a 15-year vision — or, more accurately, because one pursues such a long-term vision — many interim benefits might well come of the effort. And, while the idea of giving researchers 2,000 supercomputers and building custom software environments might seem extravagant and expensive, it is actually quite cheap when you consider how much you can learn and invent.
Over five glorious years in the early 1970s, the work at PARC drove the evolution of much of future computing. The software environment advanced to become more user-friendly and supportive of communications and different kinds of media. This led to many capabilities that are de rigueur today, including graphical interfaces, high-quality bit-mapped displays, and what-you-see-is-what-you-get (WYSIWYG) word processing and page-layout applications. The hardware system builders learned more about what it would take to support future applications and evolved accordingly. This led to hardware designs that better supported the display of information, network communications and connecting to peripherals, rather than being optimized for number crunching. Major advancements included Ethernet, laser printing, peer-to-peer and client-server computing, and internetworking.
Kay estimates that the total budget for the parts of Xerox PARC that contributed to these inventions was about $50 million in today’s dollars. Compare that number to the many billions of dollars that Xerox directly earned from the laser printer.
[Image: Xerox 9700 printers]
Although the exact number is hard to calculate, the work at PARC also unlocked trillions of dollars in value reaped by other technology-related businesses.
One of the most vivid illustrations of the central role that Xerox played was an exchange, years later, between Steve Jobs and Bill Gates. In response to Jobs’ accusation that Microsoft was stealing ideas from the Mac, Gates told him:
Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.
Kay cautions that his method is not a cookbook for invention. It is more like a power tool that needs to be wielded by skilled hands.
It is also a method that has been greatly enabled by Kay and his colleagues’ inventions. Beyond the technology industry that they helped spawn, their inventions also underpin discovery and innovation in every field of science and technology, including chemistry, biology, engineering, health and agriculture. Information technology is not only a great invention; it has reinvented invention. It powers the favorable exponential curves on which other inventors can escape the present and invent the future.
For his part, Kay continues to lead research at the frontiers of computing, with a continued emphasis on human advancement. In addition to his Viewpoints Research Institute, he recently helped to formulate the Human Advancement Research Community (HARC) at YC Research, the non-profit research arm of Y Combinator. HARC’s mission is “to ensure human wisdom exceeds human power, by inventing technology that allows all humans to see further and understand more deeply.”
“History will be kind to me, for I intend to write it myself.” — Winston Churchill
When it comes to large-scale innovation, my experience is that history will indeed be kinder if aspiring innovators take the time to write it themselves—but before it actually unfolds, not after.
Every ambitious strategy has multiple dimensions and depends on complex interactions between a host of internal and external factors. Success requires achieving clarity and getting everyone on the same page for the challenging transition to new business and operational models. The best mechanism for doing that is one I have used often, to powerful effect. I call it a “future history.”
Future histories fulfill our human need for narratives. As much as we like to think of ourselves as modern beings, we still have a lot in common with our earliest ancestors gathered around a fire outside a cave. We need stories to crystallize and internalize abstract concepts and plans. We need shared stories to unite us, and guide us toward a collective future.
Future histories provide that story for large organizations.
The CEO of a major financial services company occasionally still reads to internal audiences from the future histories that I helped him and his management team write in early 2011. He says they helped him get his team focused on the right opportunities. As of this writing, his company’s stock has almost doubled, even as his competitors have struggled.
To create future histories, I have executive teams imagine that they are five years in the future and ask them to write two memos of perhaps 750 to 1,000 words each.
For the first memo, I ask them to imagine that the strategy has failed because of some circumstance or because of resistance from some part of the organization, investors, customers or other key stakeholders. The memo should explain the failure. The exercise lets people focus on the most critical assumptions and raise issues without being seen as naysayers. There is usually no shortage of potential problems to consider, including technology developments, employee resistance, customer reactions, competitors’ actions, governmental actions, substitute products and so on. Articulating the rationale for failure in a clearly worded memo crystallizes thinking about the most likely issues.
To heighten the effect, I sometimes do some formatting and structure the memo like an article from the Wall Street Journal or New York Times. Adopting a journalist’s voice helps to focus the narrative on the most salient points. And everybody hates the idea of being embarrassed in such publications, so readers of the memo pay attention to the potential problems while there’s still time to address them.
The second memo is the success story. What key elements and events helped the organization shake its complacency? What key strategic or technological shifts helped to capture disruptive opportunities? How did the organization’s unity help it to out-innovate existing players and start-ups? This part of the exercise encourages war-gaming and helps the executive team understand the milestones on the path to success.
Taken together, the future histories provide a new way of thinking about the long-term aspirations of the organization and the challenges facing it. By producing a chronicle of what could be the major success and most dreaded failures, the organization gains clarity about the levers it needs to pull to succeed and the pitfalls it needs to avoid.
Most importantly, by working together to write the future histories, the executive team develops a shared narrative of those potential futures. It forges alignment around the group’s aspirations, critical assumptions and interdependencies. The process of drafting and finalizing the future histories also prompts the team to articulate key questions and open issues. It drives consensus about key next steps and the overall change management road map. In a few weeks’ time, future histories can transform the contemplated strategy into the entire team’s strategy.
Future histories also facilitate the communication of that shared strategy to the rest of the organization. Oftentimes, senior executives extend the process to more layers of management to flesh out the success and failure scenarios in greater detail and build wider alignment.
Future histories take abstract visions and strategies and make them real, in ways that get people excited. They help people understand how they can contribute—how they must contribute—even if they aren’t directly involved in the innovation initiative. People can understand the timing and see how efforts will build.
People can also focus on the enemies that, as a group, they must fend off. These enemies may no longer be saber-toothed tigers, but they are still very real and dangerous to corporations. “Future histories” unite teams as they face the inevitable challenges.