Spies and “bugs” have made frequent appearances in movies, books and television. In the James Bond movie series, we see an array of devices designed for 007 by “Q.” In the 1997 movie Tomorrow Never Dies, Bond’s BMW and mobile phone provide an early glimpse of the potential of the Internet of Things (IoT). He remotely starts and drives the vehicle to escape the villains, operating a number of built-in devices from the phone while the car senses and responds to hazards. Q was always on the leading edge of new technology for Bond.
Fast forward 20 years, and we now have sensors and capabilities in so many things … in our appliances, automobiles, mobile phones and a host of common wearables. You may not think of these as “bugs,” but they are. They are mini- and micro-technology components employed to see, listen, learn, assess and respond. The difference is that today’s sensors are far better than yesterday’s at reading and recording data — and they may be used for the common good.
To prove that they are still considered “bugs,” however, you only need to look at a bill introduced recently by U.S. Sens. Mark Warner (VA) and Cory Gardner (CO). The Internet of Things Cybersecurity Improvement Act aims to protect the federal government from cyber intrusion through the Internet of Things. Their bill raises a great point — sensors need built-in security measures that allow the good features to be used without introducing new risks.
In the insurance industry, we understand the implications of sensors and their ability to lower risk. “Bugs” and sensors are now our best friends. In our Future Trends 2017: The Shift Gains Momentum report, we examined how IoT experimentation and implementation is reaching into every area of insurance. Here is a short list of innovative ideas introduced by early adopters of IoT in insurance:
Progressive, via the Snapshot usage-based-insurance telematics offering, monitored how customers drove using an OBD plug-in device from Zubie.
Liberty Mutual partnered with Google to use NEST connected smoke alarms in the home to help customers reduce fire risk and carbon monoxide poisoning while also reducing their homeowners insurance premium.
Beam Dental began pricing dental insurance based on smart toothbrush usage data.
John Hancock used wearable devices to track the well-being of customers, lowering life insurance premiums and offering an incentive program through Vitality to shop for an array of things.
Oscar, a health insurance startup, used wearable fitness trackers and a mobile app to help track and encourage members to be fit, find doctors, access health history, access the doctor on call and connect to Apple Health.
In addition to the last two examples above, companies are using wearable devices and the data they generate to better assess individuals for healthcare, life insurance, workers compensation and investment rewards based on their activity and lifestyle. Innovative insurers are using wearables to provide improved underwriting discounts, rewards, claims monitoring and new services using real-time data. The new services can include advice on healthy living, real-time healthcare and prevention, real-time monitoring, assistance with treatment or recovery plans, and determining return-to-work timeframes for injuries or other health-related incidents. These all contribute to enhanced customer experiences, longer customer lives and improved insurer investment options.
There’s No Limit to Sensor Growth
This rapid experimentation and use of IoT isn’t limited to wearables, telematics and smoke detectors. Sensors of all kinds are being built into healthcare environments, construction sites, commercial buildings, roads and bridges, homes and cars.
By 2025, the Internet of Things will be worth trillions annually.
The connected-home market is growing by 30% per year in the U.S. alone, where 22% of households now have at least one connected device.
The wearable device market is expected to more than double over the next five years.
Sensors Should Reduce Claims
With the proliferation of companies innovating and taking new offerings to market using IoT, we are seeing the beginning of a huge boom in insurers using IoT to drive an engaging customer experience through personalized insurance offerings, reduced costs and new value-added services. The Boston Consulting Group estimated that U.S. insurers could reduce annual claims by 40% to 60% with real-time IoT. The key is that insurers will be able to move from paying claims to mitigating or eliminating risk by engaging with customers via IoT devices while also enhancing the customer experience.
What’s Next for the IoT? Better bugs?
Though so much remains uncertain and untested, we should expect to see a rapid evolution of technologies to sort out which sensors are most valuable in which locations and just how IoT can bring cost-effective monitoring to market.
For example, P&C insurers were quick to pick up on OBD technology, installing plug-in devices in customers’ vehicles. In many cases, mobile phone monitoring soon became a more cost-effective solution: most smartphones have GPS capability and an accelerometer. And now automotive manufacturers are embedding sensors and telematics in vehicles to enhance safety and position themselves for autonomous driving – just like Bond.
As some wearable technologies drop out of the running, life and health insurers will soon be taking advantage of advancements in smart watch design. The first wave of wearables looked like digital tech devices with touchscreens and LED displays. The next wave is the introduction of smart tech into “normal”-looking watches from standard manufacturers such as Movado, Tag Heuer, Fossil and Tommy Hilfiger, with Android Wear technology feeding the data. These are much more like what Q would have designed, and they will undoubtedly be worn by many who wouldn’t normally use an Apple Watch or a Fitbit.
A similar technology wave is beginning to hit homes. Currently, sensors are in use in some thermostats, appliances, lighting systems, security systems, computers and gaming devices. But one drawback to having so many sensors is that most companies haven’t networked them all to a single IoT data framework. This hinders the ability to aggregate data across sensors, limiting the potential value, and every new data point requires a new type of sensor. As with OBD devices, attaching a sensor to everything may even become unnecessary in favor of one centrally located device with multiple sensors.
PhD students at Carnegie Mellon University have been developing a plug-in sensor package they call a “Synthetic Sensor.” Plug it into an outlet, and that room is immediately a smart room. Instead of a smart sensor on every item in the room, multiple sensors in the device track many items, people and safety concerns at once. The device can detect if anything seems to be “wrong” when appliances are in use by analyzing machine vibrations. And, of course, it can track usage patterns. The sensor can even track things insurers may not need to know, like how many paper towels are still left on a roll.
So, would P&C insurers like to be connected to the water heater thermometer, or have access to a device that can hear pops and leaks? Would L&A insurers like to know the lifestyle and behaviors of their customers to encourage healthy living? Much of this will be sorted out in the coming years.
What doesn’t need to be sorted out is that insurers will want access to device data – and they will pay for it. They will need to be running systems that will readily hold the data, analyze it and use it effectively. Cloud storage of device data and even cloud analytics will play a tremendous role in giving value to IoT data streams.
IoT advancements are exciting! They hold promise for insurers, and they certainly will make many of our environments safer and smarter.
Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.
Climate Corp., the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it lowers local yield numbers. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined and less expensive automated claims process.
Monsanto paid nearly $1 billion to buy Climate Corp. in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.
Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?
An Unavoidable Opportunity
Many business leaders are keenly aware of the potential value of artificial intelligence but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54% of the respondents said they were making substantial investments in AI today. But only 20% said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).
Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.
In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.
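Russell and Norvig’s definition can be made concrete with a toy percept–action loop. This is an illustrative sketch only; the agent and function names are hypothetical, not drawn from any real framework.

```python
# Minimal sketch of an "intelligent agent" in the Russell & Norvig
# sense: it receives percepts from an environment and returns actions
# that affect that environment. The thermostat example is hypothetical.

def thermostat_agent(percept):
    """Map a percept (current room temperature in Celsius) to an action."""
    if percept < 19:
        return "heat_on"
    if percept > 23:
        return "heat_off"
    return "no_op"

def run(agent, percepts):
    """Feed a stream of percepts to the agent and collect its actions."""
    return [agent(p) for p in percepts]

actions = run(thermostat_agent, [17, 20, 25])
print(actions)  # ['heat_on', 'no_op', 'heat_off']
```

Even this trivial loop shows the key point in the definition: the agent’s behavior is driven by signals from outside, which the programmer does not directly control.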
The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.
This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.
The best-known AI triumphs — in which software systems beat expert human players in Jeopardy!, chess, Go and poker — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure or a mistranslation.
There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection and insemination. No one has programmed the search engine to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
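The frequency-based strategy described above can be sketched in a few lines: count which words most often follow a given word in a corpus, then suggest the top few. This is a toy illustration of the idea, not Google’s actual implementation.

```python
# Sketch of frequency-based next-word suggestion: learn, from example
# sentences, which words most often follow a given word. Illustrative
# only; real search-engine completion is far more sophisticated.
from collections import Counter, defaultdict

def build_model(corpus_sentences):
    """Map each word to a Counter of the words that follow it."""
    follows = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for word, nxt in zip(words, words[1:]):
            follows[word][nxt] += 1
    return follows

def suggest(model, word, k=3):
    """Return the k most frequent followers of `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = [
    "artificial intelligence is advancing",
    "artificial intelligence in business",
    "artificial selection in biology",
]
model = build_model(corpus)
print(suggest(model, "artificial"))  # ['intelligence', 'selection']
```

No rule ever says that intelligence follows artificial; the suggestion emerges entirely from the data, which is the hallmark of machine learning.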
The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human-machine conversation, language translation and vehicle navigation (see Exhibit A).
Though it is the machine architecture closest to a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it learns only by processing enormous amounts of data; and its decision processes are not transparent.
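The layered structure described above can be illustrated with a bare-bones forward pass: each layer transforms the previous layer’s output, building higher-level features from lower-level ones. The weights here are random and untrained, so this shows only the architecture, not a working recognizer.

```python
# Toy multilayered network: each layer transforms its input and passes
# the result up the hierarchy, analogous to edges -> parts -> faces in
# the image example. Forward pass only; weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weight matrix for a fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1

def relu(x):
    """Standard nonlinearity: pass positives, zero out negatives."""
    return np.maximum(0, x)

def forward(x, layers):
    for w in layers:
        x = relu(x @ w)  # each layer builds on the previous one's features
    return x

x = rng.standard_normal(16)                    # raw input (e.g., pixel values)
net = [layer(16, 8), layer(8, 4), layer(4, 2)]  # three stacked layers
print(forward(x, net).shape)  # (2,)
```

Training such a network means adjusting those weight matrices from data, which is where the enormous computing and data requirements come in.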
News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.
AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.
Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick-style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:
Assisted intelligence, now widely available, improves what people and organizations are already doing.
Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.
Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.
Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another but require different types of investment, different staffing considerations and different business models.
Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social” and “Promotion” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
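A heavily simplified version of this kind of inbox triage can be sketched with keyword evidence per category. Gmail’s real sorting is a model trained on millions of users’ mail; the categories are real, but the keyword lists and logic below are hypothetical.

```python
# Hedged sketch of inbox triage in the spirit of Gmail's tabs: score
# keyword evidence for each category and fall back to "Primary".
# Keyword lists are invented for illustration.

CATEGORY_KEYWORDS = {
    "Promotions": {"sale", "discount", "offer", "unsubscribe"},
    "Social": {"friend", "mentioned", "followed", "invitation"},
}

def triage(subject):
    """Assign an email subject line to a tab by keyword overlap."""
    words = set(subject.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Primary"

print(triage("Huge discount on winter gear"))    # Promotions
print(triage("Ana mentioned you in a comment"))  # Social
print(triage("Q3 budget review"))                # Primary
```

The assisted-intelligence point survives even in this toy: the user’s workflow is unchanged; the software simply pre-sorts the same email.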
Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.
The Oscar W. Larson Co. used assisted intelligence to improve its field service operations. This is a 70-plus-year-old family-owned general contractor, which, among other services to the oil and gas industry, provides maintenance and repair for point-of-sales systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20%, a rate that should continue to improve as the software learns to recognize more patterns.
Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles and the variations in those patterns for different city topologies, marketing approaches and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?
Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.
For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
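The audience-wide mechanism described above can be sketched with simple co-occurrence counting: recommend titles that other users frequently watched alongside the titles this user has seen. This is a toy stand-in, not Netflix’s actual algorithm, and the titles are only examples.

```python
# Sketch of audience-wide recommendation: count how often pairs of
# titles co-occur in users' viewing histories, then recommend unseen
# titles that co-occur most with what this user has watched.
from collections import Counter
from itertools import combinations

def co_occurrence(histories):
    """Count, for every ordered pair of titles, shared viewers."""
    pairs = Counter()
    for watched in histories:
        for a, b in combinations(sorted(set(watched)), 2):
            pairs[(a, b)] += 1
            pairs[(b, a)] += 1
    return pairs

def recommend(pairs, seen, k=2):
    """Rank unseen titles by co-occurrence with the user's history."""
    scores = Counter()
    for item in seen:
        for (a, b), n in pairs.items():
            if a == item and b not in seen:
                scores[b] += n
    return [title for title, _ in scores.most_common(k)]

histories = [
    ["Stranger Things", "Dark", "Black Mirror"],
    ["Dark", "Black Mirror"],
    ["Stranger Things", "The Crown"],
]
pairs = co_occurrence(histories)
print(recommend(pairs, {"Dark"}))  # ['Black Mirror', 'Stranger Things']
```

Each new viewing choice updates the counts, which is the virtuous circle in miniature: more behavior data yields better suggestions, which elicit more behavior data.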
Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity and gain their loyalty.
Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the U.S. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data and (as noted above) farming.
To develop applications like these, you’ll need to marshal your own imagination to look for products, services or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material or line of products? Could you use this information to redesign your products, avoid recalls or spark innovation in some way?
The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?
You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.
The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions and counter biases, or they will lose their value.
Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75% of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations and perform other tasks inherently unsafe for people.
The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone) and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.
Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.
Are you primarily interested in upgrading your existing processes, reducing costs and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.
Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.
Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but, if you can justify building your own, you may become one of the leaders in your market.
The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).
Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”
AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47% of U.S. jobs at risk; a 2016 Forrester Research report estimated the figure at only 6% by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance.
At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.
It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corp., Oscar W. Larson, Netflix and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.
Urmson’s recent “Perspectives on Self-Driving Cars” lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And, it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode.
1. There is a lot more chaos on the road than most recognize.
Much of the carnage due to vehicle accidents is easy to measure. In 2015, in the U.S. alone, 35,092 people were killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.
Over more than two million test miles during his Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” Fender bender results.
While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.
2. Human intent is the fundamental challenge for driverless cars.
The choices made by driverless cars are critically dependent on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and human drivers of other cars. For Urmson, the difficulty in doing this is “the heart of the problem” going forward.
To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)
Google Car Crashes With Bus; Santa Clara Transportation Authority
In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (due to sand bags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.
In the Uber crash, the SDC was in the leftmost of three lanes. Traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.
A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. The driver probably could not see across the blocked lanes to the Uber car’s lane and, given the stopped traffic, expected that whatever might be driving down that lane would be moving slower. It pulled into the Uber car’s lane to make the turn, and the result was a sideways parked car.
In the fatal Tesla crash, the driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.
Tesla, to its credit, has made modifications to improve the car’s understanding about whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against car inadequacies.
3. Incremental driver assistance systems will not evolve into driverless cars.
Urmson characterized “one of the big open debates” in the driverless car world as Tesla’s (and other automakers’) approach vs. Google’s. The former is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car.” The latter is “No, no, these are two distinct problems. We need to apply different technologies.”
Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space at which you have to turn your back on human intervention and trust that the car will not have anyone available to take control. The incremental approach, he argues, leads developers to select technologies that will limit their ability to bridge over to fully driverless capabilities.
4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.
The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.
Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”
Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.
5. The “mad rush” is justified.
Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.
Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. Is it justified? He thinks so, and points to one simple equation to support his position:
3 trillion VMT × $0.10 per mile = $300 billion per year
In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue—just in the U.S.
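The arithmetic behind Urmson’s equation can be sketched in a few lines. This is only a back-of-the-envelope check using the rounded figures quoted in the talk, not precise market data:

```python
# Back-of-the-envelope check of Urmson's TaaS revenue equation,
# using the rounded figures from the talk (~3 trillion miles, $0.10/mile).
US_ANNUAL_VMT = 3_000_000_000_000   # U.S. vehicle miles traveled per year
REVENUE_PER_MILE = 0.10             # hypothetical charge per mile, in dollars

annual_revenue = US_ANNUAL_VMT * REVENUE_PER_MILE
print(f"${annual_revenue / 1e9:.0f}B per year")  # prints "$300B per year"
```

Using the actual 2016 figure of 3.2 trillion miles, the same calculation gives $320 billion, so the round $300 billion is a conservative estimate.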
This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.
To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.
“You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.”
Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.
Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”
Uber isn’t alone. Silicon Valley is gaining a reputation for being obsessed with making money at any cost — witness Theranos, which made false claims and risked lives. The tech industry is becoming too much like the finance industry, whose greed caused the Great Recession a decade ago.
The irony is that both industries compete for top engineering talent from our colleges. And each corrupts these students in a different way. Finance uses their knowledge to engineer our financial system, while tech focuses it on making money rather than on lifting up humanity.
My greatest fear after joining Duke’s engineering school in 2004 was that my students would end up joining investment banks or management consultancies or, when they joined the tech industry, would act as Uber and Theranos executives have. We teach our students core technologies but do not give them the vision to better the world.
That is why we need people with good values and ethics leading the way. We need innovators who care about enriching humanity rather than just themselves. We need people who give back to the world and make it a better place. There are positive examples, of course, with successful executives like Bill Gates devoting large portions of their wealth to public health and other notable causes. These are the values we need to instill in our engineering students — before they absorb the corruption of our investment banks and big business.
This year, Carnegie Mellon’s engineering dean, James Garrett, presented me with the opportunity to teach students how they might use technology to solve humanity’s grand challenges and build billion-dollar businesses by helping 1 billion people. I jumped at the chance.
I wanted to try an experiment: teaching students the potential of technology to solve big problems like clean water, energy, education, disease and hunger. The idea is not to build silly apps, as Silicon Valley does, but to design real solutions to global problems.
A decade ago, it would have seemed wishful thinking to say that students could effect change on such a scale. It was only governments and big research labs that could solve grand challenges — and they required big grants and budgets. But that is no longer the case. The cost of building world-changing innovations has fallen so low that motivated graduates can do it.
These young dreamers can build technologies that solve these problems. Unconstrained by the idea of what is impossible, they can help take us into a world in which we worry more about sharing prosperity than about fighting over what little we have.
Witness the threshold we have already crossed with Moore’s law. Our smartphones are many times faster than the supercomputers of yesteryear and, by 2023, will exceed the human brain in both processing and storing information. We are seeing exponential advances in technologies such as sensors, artificial intelligence, robotics and genomics. And their convergence is making amazing things possible.
Cheap sensors and networks, for example, are enabling the development of a web of connected devices, called the Internet of Things. Besides increasing the energy efficiency of our homes and tracking our bodily functions, this web of sensors enables the automation of manufacturing, the creation of smart grids and cities, and a revolution in agriculture. The combination of sensors, artificial intelligence and computers enables robots to do the work of humans: to assemble electronics, drive cars and look after the elderly. And digital tutors can take students into virtual-reality worlds and teach them engineering, mathematics, language and world history.
The same technologies are enabling entrepreneurs to transform healthcare. We can use artificial intelligence to help us learn how the environment, including the food we eat and the medicines we take, affects the complex interplay between our genes and our bodies. The human genome has been mapped digitally, and artificial intelligence may even enable us to engineer cures for certain diseases.
But these technologies all have a dark side and can be used in destructive ways. As easily as we can edit genes, we can create killer viruses, alter the human germ line and inadvertently destroy ecosystems dependent upon an insect we casually exterminate. As easily as nursing the elderly, robots can become killing machines. Our future can be either a “Star Trek” utopia or a “Mad Max” wreck; it all depends on the choices we make and how we educate our students.
I have no idea whether my attempts at Carnegie Mellon will succeed in equipping these young engineers with the values to pursue something more worthwhile than personal gain at global expense, but it is certainly worth a try. Our students are our future, and that motivates me to enable them to fulfill grand visions. We need to launch similar experiments in schools across the U.S. and the world.
The third annual HITLAB Innovators Summit and World Cup Competition will be held at Columbia University in New York from Nov. 29 to Dec. 1. This outstanding summit brings together the best and the brightest from the emerging healthcare technology industry, academia, medicine and public health, along with healthcare business leaders. This year’s summit, titled “Opportunities and Obstacles in Digital Health Diffusion,” will include a panel of experts who will also serve as judges when the summit culminates in the HITLAB World Cup global health innovation competition.
Five finalists will be named, and they will present their vision for an emerging technology innovation to help address global public health issues. An overall winner will be named at the close of the summit.
As we wait to see what this year presents, let’s look at how last year’s five finalists are doing. In a word, they are thriving.
Last year’s HITLAB World Cup winner, Ceeable, has developed a digital vision care mobile app designed to help prevent blindness and other eye diseases. Since last year’s competition, Ceeable has had an incredible year, including winning multiple national awards. New patents for this automated detection and analysis of visual field test results for optic nerve and retinal disease have just been issued in the past few weeks to Caltech. Ceeable now has an exclusive license to this technology from Caltech.
“These patents are a powerful application of machine learning and offer an ability to aid in the automated detection of eye disease on a digital platform,” says Dr. Wolfgang Fink, chief technology officer and inventor of the Ceeable technology. Ceeable was among the top three finalists at this year’s American Medical Association Healthier Nation Innovation Challenge as one of the “Best New Ideas for Creating a Healthier Nation” and has been profiled in Ophthalmology Times.
There are more than 300 million people worldwide who suffer from retinal disease. This technology platform — known as the Ceeable Visual Field Analyzer (CVFA) — has the potential to reach more people in need than ever before. All you need is a laptop or tablet and a connection to the internet.
This technology is now in use in some of the leading medical centers in the U.S. Ceeable is actively establishing sales and marketing channels for a commercial launch this quarter.
Rubitection, based in Pittsburgh, placed second in last year’s competition and has won many healthcare technology awards. Sanna Gaspard, PhD, its founder and CEO, has developed technology to modernize early bedsore detection and management, reducing risk and improving patient care through a reliable, low-cost handheld diagnostic tool.
Bedsores, also known as pressure ulcers or pressure sores, have been a patient safety issue dating back at least to Florence Nightingale in the 19th century. In the U.S. alone, bedsores affect approximately 2.5 million to 3 million adults each year, with related complications and infections leading to 60,000 deaths and $11 billion in costs. One alarming study found that 60% of elderly patients with a diagnosis of bedsores die within one year of discharge from the hospital.
At that rate, an estimated 160 people a day in the U.S. die from complications caused by bedsore infections, making pressure ulcers one of the most pervasive dangers facing elderly patients today. Many medical researchers believe the problem is actually getting worse because of the aging population and a nursing shortage, along with our persistently fragmented healthcare system. Many nursing professionals believe that bedsores developed after patient admission are a sign of negligent nursing care — or, as Florence Nightingale said in 1859, “If the patient has a bedsore, it’s generally not the fault of the disease, but of nursing.” Modern nursing professionals call the development of bedsores after admission to a hospital or nursing home “inexcusable.”
Rubitection is supported by Carnegie Mellon University through the Project Olympus incubator program. The current goal of Sanna Gaspard and Rubitection is to help raise awareness and to continue to build relationships with nursing homes, hospitals and insurance companies looking for solutions to prevent bedsores from occurring in the first place and early detection to prevent infections and complications that can have devastating results.
Ristcall is another 2015 HITLAB finalist supported by Carnegie Mellon University through the Project Olympus incubator program. Srinath Vaddepally of Ristcall has created what I refer to as a “mobile smartwatch nursing station.” Ristcall has now upgraded and tested both the hardware and software of this very promising wireless wearable technology, which is designed to help nurses more effectively handle the multiple tasks of providing quality patient care and to better prioritize their time. Vaddepally came up with the idea when, as a hospital patient, he fell and could not reach his call button to get help.
Slips and falls in hospitals and nursing homes are a major patient safety and liability issue. It is estimated that 700,000 to 1 million falls occur every year among patients, visitors, nurses and facility support staff. These facilities face both liability exposure and reduced payment from Medicare as a result.
The Ristcall smartwatch allows nurses to respond and prioritize patient care in real time. As I have said before, nurses rock! They are the heart, soul and backbone of our healthcare system. And I think nurses are going to love this technology. The Ristcall technology is now being used by patients and nurses in both a nursing home and an acute-care hospital in Pittsburgh.
Noninvasix, another 2015 HITLAB World Cup finalist, is pursuing simply remarkable technology with the potential to reduce brain injuries in premature newborns by 90%. Graham Randall, PhD, the CEO of Noninvasix, and his medical research team have made a major pivot this year to focus this technology solely on monitoring oxygen levels in premature babies in neonatal intensive care units. Noninvasix is now developing a final version of this technology that will undergo an FDA 510(k) clearance review within three years.
Noninvasix commissioned a third-party value analysis, which estimated that this technology could save health insurers between $2.4 million and $6.2 million in annual costs to care for children with cerebral palsy resulting from insufficient oxygen in the brain. More importantly, Randall says, the key to preventing birth injuries such as cerebral palsy is monitoring a premature baby’s brain oxygen levels in real time, allowing prompt intervention that can dramatically reduce the risk and number of brain injuries caused by lack of oxygen.
Gary Hankins, MD, the vice chair of the American College of Obstetrics and Gynecology Task Force on Neonatal Encephalopathy and Cerebral Palsy, stated: “This technology has the potential to eliminate 90% of the cases of hypoxic ischemic encephalopathy and subsequent permanent injuries such as cerebral palsy.”
This new technology will, I hope, replace current technologies such as fetal heart monitors that obviously monitor heart rates but do not accurately measure the levels of oxygen in the brain and produce results that are indeterminate or unknown 80% of the time. The lack of oxygen, or hypoxia, is thought to be responsible for nearly 25% of neonatal mortality in the world.
All of this extraordinary work from the 2015 finalists is exactly the type of technological innovation the HITLAB World Cup is all about.
Wellopp has had a remarkable year. Wellopp is focused on the major problem of hospital readmissions and ineffective discharge planning. It is estimated that $26 billion is spent annually in the U.S. on hospital readmissions. Reducing readmission rates is a major initiative of HHS under the Affordable Care Act, as well as of the Joint Commission on Accreditation of Hospitals.
Wellopp has designed interactive software for hospital patients, health plan payers and hospital discharge planners. Joe Gough, the CEO and founder, mentioned last year that most hospital discharge plans are thrown in the wastebasket. This digital discharge technology requires the patient to take ownership and help design his or her own shared post-discharge recovery goals through a patient dashboard that provides a daily care path in a real-time, three-way interactive process. In addition, this patient-centric program includes the Wellopp rewards program, where patients get points toward a tangible prize (such as a smartphone) depending on their risk level and adherence to medication and other recommended post-discharge recovery regimens. This three-way, interactive digital approach, which sends patient care messages regarding achieving and rewarding goals, has already achieved incredible results.
Wellopp is working with the largest health system in Michigan and has reduced readmissions by 48% for pneumonia patients covered under the health plan. Next, in the first quarter of 2017, Wellopp will be working with a large regional health insurance plan in Ohio and will be conducting a pilot and joint venture with an Ohio Accountable Care Organization (ACO).
(Note: Gough rebranded the original company, “Homeward Healthcare,” after a major launch this year for this consumer-directed brand.)
I have spent the past 35 years attending and speaking at conferences around the country and have enjoyed virtually every one of them — but there is nothing like the HITLAB summit. Most conferences discuss current events and vendors/sponsors showcase their current capabilities. At HITLAB, you will have the opportunity to see where healthcare is going to be 10-20 years from now and how emerging technologies can help address global public health issues like never before.
Lauren Alviti McGlade, the director of the HITLAB summit, stated, “We are searching for original ideas to improve healthcare access, delivery and outcomes through technology.” HITLAB will be accepting applications for the World Cup competition through Nov. 11. For more information, contact firstname.lastname@example.org.
My goal is to continue to try to help promote this amazing collaboration surrounding the HITLAB Summit, the sponsors, medical researchers, emerging technologies and the startup companies presenting. Some technologies may be 10 to 20 years down the road, but others, like last year’s finalists, are available now or in the very near future. Why wait?