Who do property and casualty insurance customers turn to when they need help?
In the past, answers have included insurance agents, customer helplines and company websites. Today, however, customers are increasingly likely to consult Alexa, Siri or Cortana.
As voice assistants gain popularity in homes, in cars and on smartphones, they’re also gaining traction as a marketing tool. Here, we look at the ways in which insurance companies are using voice assistants as part of their marketing and sales strategy, as well as what to expect in the near future.
How Voice Assistants Are Changing Marketing
Voice assistants commonly come in one of two forms: wireless speakers that can be placed in the home or office, or built-in tools on smartphones, which iPhones and various Android devices have offered for a few years now.
In some ways, voice assistants work similarly to visual or text-based tools like smartphone apps and Google search bars. The user asks a question or enters a command, and the device responds to it. Voice assistants like Alexa even offer apps, or “skills,” that work similarly to smartphone apps — except they rely on audio rather than visuals to share information, TechCrunch’s Sarah Perez writes.
The audio-based approach changes the ways in which both search results and apps work on voice assistant devices. A text-based Google search, for instance, returns a list of links from which the user can choose. A voice-based search, however, tends to return the single response the AI thinks best fits the user’s query.
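This difference between a ranked list and a single spoken answer can be illustrated with a toy sketch. The scoring function here is a deliberately naive stand-in for a real search engine's ranking, not how Google or Alexa actually ranks results:

```python
def score(query, page):
    # Naive relevance: count how often each query term appears in the page text.
    terms = query.lower().split()
    return sum(page.lower().count(t) for t in terms)

def text_search(query, index):
    """A text-based search returns a ranked list for the user to choose from."""
    ranked = sorted(index, key=lambda page: score(query, page), reverse=True)
    return ranked[:10]

def voice_search(query, index):
    """A voice-based search speaks only the single best-matching answer."""
    results = text_search(query, index)
    return results[0] if results else None
```

The marketing implication falls out of the last function: if your content isn't the top result, a voice assistant never mentions it at all.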
Some experts praise this option for its speed and flexibility. “Since voice flattens menus, it will make daily tasks far easier to complete,” Jelli CEO Mike Dougherty says. Yet it also puts additional pressure on marketing teams to ensure that their content gets chosen by the various search engines that inform each voice-based device, says Richard Yao, senior associate of strategy and content at IPG Media Lab.
Voice assistants haven’t just changed how search results are presented. They have also changed how users launch searches in the first place, says More Visibility’s Jill Goldstein. While text-based searches tend to focus on two or three keywords, voice-based searches use full, natural-language sentences. These often start with question words like “what,” “how” or “when.”
These questions give marketers insight into where shoppers are in their buying journey and how best to meet their needs — but only if marketing teams are collecting and using this information, says Tyler Riddell, vice president of marketing for eSUB Construction Software.
Marketing teams aren't just learning to adapt to the differences between audio and visual; they're also learning to work with a search tool that adapts itself.
Because voice assistants use artificial intelligence and machine learning, they can adapt to changes in search terms, says Gartner analyst Ranjit Atwal. The onboard AI is designed to learn over time, gaining a better sense of how users frame their queries and the sort of information they may be looking for.
‘Alexa, Find Me Auto Insurance’: The Rising Demand for Voice Search
Based on recent sales trends, 55% of U.S. households are expected to have a smart home speaker with voice assistance enabled by the end of 2019, Dara Treseder at Adweek reports. Voice assistants are also a mainstay of many smartphones, from Apple’s Siri to Google’s voice search option, triggered by saying, “OK, Google.”
Insurance customers increasingly prefer to include digital channels in their search for property and casualty insurance. With voice assistants on millions of smartphones and a wide range of other devices, customers are turning to these tools as well.
Nearly half (46%) of insurance customers already use voice search tools at least once per day, according to Shane Closser at Property Casualty 360. One in four want their voice assistants to be able to give them more information on insurance agents and products. One in three want to use voice assistants to book appointments with a particular insurance agent.
Service-based companies that offer “highly complex and highly personal” services are uniquely suited to thrive in the voice search era, says Adweek’s Julia Stead. While Stead focuses on travel, finance and healthcare, her analysis applies to P&C insurers, as well, because these companies also offer services that have long been accessed via voice (phone), are tailored to the needs of each customer and often require access at odd locations or hours.
And while the conversation about tech innovation often focuses on younger users, voice assistants are increasingly popular with older insurance customers.
Lauryn Chamberlain at GeoMarketing.com says that 37% of consumers age 50 and older say they use a voice assistant, often because simply speaking to a smart speaker or phone is easier than tapping, swiping or reducing a question to its key search terms. In other words, older users can think of their voice assistants as a helpful background entity rather than as a device.
In short, voice assistants are cutting across demographics. They’re entering more homes and workspaces. And insurance customers want to use them to secure coverage.
How P&C Insurers Are Incorporating Voice Into Their Marketing
Several insurance companies are already experimenting with voice assistant tools as part of their own marketing process, according to Danni Santana at Digital Insurance. For instance, Nationwide, Liberty Mutual (and subsidiary SafeCo) and Farmers have all launched Amazon Echo Skills.
Progressive, meanwhile, joined Google Home in March 2017, becoming the first insurance carrier to do so, according to Rachel Brown at Mobile Marketer.
Other insurance companies have experimented with different approaches. Amica Mutual Insurance, for example, launched an Alexa skill that doesn’t connect users to their individual accounts. Rather, it offers information in more than a dozen categories to help users better understand billing, discounts, storm preparation and more.
With the development of Alexa skills and similar tools, brands are thinking about how a voice assistant’s sound affects their brand development, says Jennifer Harvey, VP of branding and communications at Bynder. The choice of voice tone, pitch and speed can all send a powerful message about an insurer’s brand and culture, whether it’s reassuring, serious, cheerful or anything in between.
One of the big opportunities for insurance companies and voice assistants is access. Currently, voice assistants can take on many simple tasks but can’t always handle a transaction as complex as ensuring a customer receives the right home or auto coverage for their needs. Yet developments in AI and voice recognition indicate this may change. “Alexa is already capable of placing a complicated pizza order,” says Inbal Lavi, CEO of Webpals Group, “underscoring that voice assistants will act as more than middlemen.”
For now, however, even the digital middleman approach can benefit potential and current P&C insurance customers and the companies that serve them. “We want to enable easy access for our customers,” says Alexander Bernert, head of brand management at Zurich Insurance. “Consumers do not necessarily think of taking out disability insurance between 9 am and 5 pm, but maybe even shortly before midnight.”
It can be tough to reach an insurance agent shortly before midnight. But a voice assistant can find one, provide information and even schedule an appointment — making it easier for potential customers to turn into actual purchasers.
In a world where insurance customers already do research and contact insurers via multiple channels, voice assistants are a natural frontier for insurance marketing.
Are the Amazon Echo and Google Home insurtech? They sure are!
The two top-selling smart speakers have become so competitive that, in Canada during the past holiday season, each company undercut the other by offering the smaller versions, the Echo Dot and Google Home Mini, for as little as $39.99. For that price, why not buy one and try it out?
A recent report from Canalys states that the smart speaker is now the fastest-growing consumer technology. It is growing faster than augmented reality (AR), virtual reality (VR) and wearables, with smart speaker shipments expected to top 56 million units in 2018.
Since the launch in Canada of the Google Home in July 2017 and subsequently the Amazon Echo in December 2017, the following insurance services have been made available:
Aviva Canada — Aviva made a skill for the Amazon Echo to help consumers find answers to common insurance questions and to get an insurance quote. If a person is curious about accident benefits, for example, all they have to do is ask, “Alexa, what is accident benefits coverage?”
Manulife — Manulife’s skills for the Amazon Echo tell customers how much is left of their health benefits. Need new glasses but not sure how much coverage you have? Simply ask, “Alexa, ask Manulife Benefits how much do I have left for glasses?”
Kanetix.ca and InsuranceHotline.com — Both comparison websites made Google Assistant actions to support comparing car insurance quotes. Drivers just have to request a quote by saying, “Hey, Google, ask Kanetix.ca for a car insurance quote.”
RateSupermarket.ca — The site’s Google Assistant action supports finding the best mortgage rate in any province. All one has to do is say, “Hey, Google, ask RateSupermarket.ca for the best mortgage rate in Ontario.”
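Under the hood, an Alexa skill like Aviva’s is essentially a web service: it receives a JSON intent request and returns a JSON response containing the speech to play back. A minimal sketch of such a handler follows; the intent name `AccidentBenefitsIntent` and the answer text are hypothetical, though the response shape follows Alexa’s custom-skill JSON format:

```python
def handle_request(event):
    """Answer a simple FAQ intent with an Alexa-style JSON response.

    `AccidentBenefitsIntent` and the answer text are illustrative; a real
    skill would be registered and configured in the Alexa developer console.
    """
    intent = event.get("request", {}).get("intent", {}).get("name")

    if intent == "AccidentBenefitsIntent":
        speech = ("Accident benefits coverage helps pay for medical care "
                  "and lost income after a car accident, regardless of fault.")
    else:
        speech = "Sorry, I don't know the answer to that yet."

    # Minimal Alexa custom-skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The appeal for insurers is how little of this is insurance-specific: the hard parts (wake word, speech recognition, intent matching) are handled by the platform, leaving the carrier to supply answers.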
Not only is the smart speaker convenient for finding information, it is also spurring sales of smart devices and IoT (Internet of Things) technology for the home, such as smart plugs, smart appliances and smart entertainment…basically, smart everything. The Amazon Echo and Google Home can be used to turn on the lights, turn on the TV, change the channel and even find the best science fiction on Netflix.
Here is some recent data from ComScore and Statista showing the likelihood of IoT ownership for smart speaker households.
Google rarely has much presence at the Consumer Electronics Show, but this year the company is out in full force, going head-to-head with the Amazon Echo. Smart speaker adoption and integration with IoT devices is expected to be a megatrend at this year’s CES, which began yesterday in Las Vegas.
The adoption, sales and marketing of both Amazon’s and Google’s smart speaker assistants are clearly making these devices must-haves in the home. Insurance providers cannot ignore this opportunity to develop smarter, more convenient ways to serve their customers. If the smart speaker can really fire up IoT adoption in the home, insurance providers can’t ignore the data it can collect to create better products that improve the management of risk and claims for the household.
I’ve often thought the most valuable interactions happen with the people at the edge of our networks. The people we meet serendipitously, through our more distant contacts. It’s here, on the edge, where the sparks of creativity really fly.
Recently, I’ve been putting this theory to the test by taking the time to meet face-to-face with people I might more usually only connect with by email or LinkedIn.
It’s a refreshing experience. One of the many benefits is frequent exposure to new ideas and new ways of thinking from people who view the world through a different lens. There are other benefits, such as getting to see the new musical “Bat Out of Hell” in the company of an American lawyer. But I digress …
It’s not just people who benefit from networking at the edge. Computers do, too, and there’s an interesting analogy to be drawn here with the emerging importance of edge computing. This is where processing and data are placed at the edge of our networks to have the maximum effect.
Let me explain.
Over the last decade or so, we’ve seen processing and data increasingly centralized in the cloud. This has been driven partly by the cost-effectiveness and scalability of cloud computing and partly by the growth of big data.
Amazon Alexa is an excellent example of how this works. Voice commands are picked up by an Amazon Echo device, converted from speech to text and fired off to the cloud where natural language processing (NLP) and clever software are used to interpret and fulfill your requests. The results are served back to your Echo in less than half a second. Very little processing takes place on the Echo, and very little data is stored there; all the heavy lifting is done centrally in the cloud.
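That division of labor can be sketched as a toy pipeline: the device only captures audio, while transcription, interpretation and fulfillment all happen server-side. Every function here is invented for illustration; a real system runs full speech-recognition and NLP models in place of these stand-ins:

```python
def on_device_capture(audio):
    """The Echo itself does little more than capture audio after the wake word."""
    return {"audio": audio}

def speech_to_text(audio):
    # Stand-in transcription: a real system runs a speech-recognition model here.
    return audio.lower()

def interpret(text):
    """Stand-in NLP: map the transcribed text to a structured intent."""
    if text.startswith("play "):
        return {"action": "play", "subject": text[len("play "):].strip()}
    return {"action": "unknown", "subject": text}

def fulfill(intent):
    if intent["action"] == "play":
        return f"Now playing {intent['subject']}."
    return "Sorry, I didn't catch that."

def cloud_pipeline(payload):
    """All the heavy lifting happens centrally, then the result is served back."""
    text = speech_to_text(payload["audio"])
    return fulfill(interpret(text))
```

The point of the sketch is the asymmetry: the edge device holds almost no state and does almost no computation, which is exactly the assumption that breaks down in the scenarios described next.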
This model works well if the edge device (the Echo) is always connected to the cloud via the internet, the arrival rate of new data (your voice commands) is relatively low and the response time is not critical (we’re happy to wait half a second for Ed Sheeran to start his next song).
But it doesn’t work so well if the edge device is not always connected, if the volume of real-time data streaming into the device is huge or if an instant response is needed.
Imagine you’re being driven to the theater by your AI-controlled smart car equipped with hundreds of sensors gathering real-time data from every direction. If the sensors detect a child running out in front of the car, there’s no point firing that data off to the cloud for processing. It has to be processed and acted on instantly and locally by the car itself.
There are many, many more examples where the edge devices (cars, traffic lights, fitness bracelets, microwaves, safety critical sensors on assembly lines… in fact, very many of the billions of devices that will be connected to the internet of things over the coming years) will need the ability to process their own real-time data.
These edge computing devices will still connect to the cloud, but the location of the processing and the data will vary according to need — in the cloud for asynchronous machine learning insights and improvements; at the edge for real-time processing of real-time data streams to determine real-time actions.
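That split — real-time decisions at the edge, slower learning in the cloud — can be sketched as a simple routing rule. The event types and handlers below are illustrative, not drawn from any real vehicle system:

```python
# Events that must be acted on instantly, with no network round trip.
CRITICAL_EVENTS = {"obstacle_detected", "brake_failure"}

def route_sensor_event(event):
    """Decide where a sensor reading is processed.

    Safety-critical events are handled immediately on the device itself;
    everything else is queued for asynchronous cloud analysis, where
    machine-learning models can be retrained on the accumulated data.
    """
    if event["type"] in CRITICAL_EVENTS:
        return handle_locally(event)      # millisecond-scale, no network hop
    return queue_for_cloud(event)         # latency-tolerant analytics

def handle_locally(event):
    return {"where": "edge", "action": "emergency_stop"}

def queue_for_cloud(event):
    return {"where": "cloud", "action": "batch_upload"}
```

Real edge architectures are of course far richer than a set lookup, but the design choice is the same: classify work by latency tolerance first, and only then decide where it runs.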
Developing the hardware and software for these devices will require new ways of thinking. It’s not about big data; it’s about small, fast data. And I’m sure we’re going to see dramatic improvements in battery efficiency, data storage and processing capability of these intelligent edge computing devices.
The Internet of Things is actually going to become the Internet of Small Powerful Intelligent Things (although I doubt that acronym will catch on).
Most interesting of all, though, from a cultural and business perspective is the innovation this edge computing will enable, such as:
The insurance industry will be able to offer better deals and new types of policies driven by the intelligence embedded in the insured assets.
The health industry will be able to provide preventative care supported by intelligent wearables monitoring everything from activity to blood sugar levels.
The entertainment industry will be able to deliver interactive content without those annoying buffers and whirling circles.
And, who knows, maybe edge computing will also help us communicate more effectively with each other. Because spending time at the edge of our networks, as I have been discovering, is where the sparks of creativity really fly. Like the musical “Bat Out of Hell,” it’s one experience I can thoroughly recommend!
Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.
Climate Corp., the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it lowers local yield numbers. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined and less expensive automated claims process.
Monsanto paid nearly $1 billion to buy Climate Corp. in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.
Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?
An Unavoidable Opportunity
Many business leaders are keenly aware of the potential value of artificial intelligence but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54% of the respondents said they were making substantial investments in AI today. But only 20% said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).
Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.
In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.
The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.
This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.
The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure or a mistranslation.
There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection and insemination. No one has programmed the search engine to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
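The strategy described — suggesting the words most frequently typed after a given word — can be approximated with nothing more than bigram counts over a corpus. A minimal sketch, with a toy corpus standing in for Google’s query logs:

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count which words follow each word in the corpus, and how often."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(model, word, k=3):
    """Return up to k words most frequently seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Tiny toy corpus standing in for millions of logged queries.
corpus = ("artificial intelligence is here artificial intelligence works "
          "artificial selection matters artificial intelligence wins")
model = build_bigram_model(corpus)
```

No rule ever says “after artificial, suggest intelligence”; the ranking emerges from the data, which is the essential distinction from the heuristic approach above.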
The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human-machine conversation, language translation and vehicle navigation (see Exhibit A).
Though it is the machine architecture closest to a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.
News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.
AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.
Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick-style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:
Assisted intelligence, now widely available, improves what people and organizations are already doing.
Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.
Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.
Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another but require different types of investment, different staffing considerations and different business models.
Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social” and “Promotion” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.
The Oscar W. Larson Co. used assisted intelligence to improve its field service operations. This is a 70-plus-year-old family-owned general contractor, which, among other services to the oil and gas industry, provides maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20%, a rate that should continue to improve as the software learns to recognize more patterns.
Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles and the variations in those patterns for different city topologies, marketing approaches and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?
Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.
For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity and gain their loyalty.
Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the U.S. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data and (as noted above) farming.
To develop applications like these, you’ll need to marshal your own imagination to look for products, services or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material or line of products? Could you use this information to redesign your products, avoid recalls or spark innovation in some way?
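A stripped-down version of the "surface only the unusual correlations" idea can be sketched with grouped repair rates and a simple statistical outlier test. The records, regions, and threshold below are hypothetical stand-ins; a production system would work over far richer data and models.

```python
from statistics import mean, stdev

# Hypothetical repair records: (region, product_line, needed_repair).
records = [
    ("North", "A", True), ("North", "A", True), ("North", "A", True),
    ("North", "B", False), ("South", "A", False), ("South", "A", False),
    ("South", "B", False), ("East", "A", False), ("East", "B", False),
    ("East", "B", True), ("West", "A", False), ("West", "B", False),
]

def repair_rates(records, key_index):
    """Fraction of units needing repair, grouped by one attribute."""
    totals, repairs = {}, {}
    for rec in records:
        key = rec[key_index]
        totals[key] = totals.get(key, 0) + 1
        repairs[key] = repairs.get(key, 0) + (1 if rec[2] else 0)
    return {k: repairs[k] / totals[k] for k in totals}

def flag_outliers(rates, threshold=1.0):
    """Surface only groups whose rate sits well above the mean."""
    values = list(rates.values())
    mu, sigma = mean(values), stdev(values)
    return {k: r for k, r in rates.items()
            if sigma > 0 and r > mu + threshold * sigma}

by_region = repair_rates(records, 0)   # group by region (index 0)
print(flag_outliers(by_region))        # North's 75% repair rate stands out
```

The same two functions could be rerun with `key_index=1` to ask the product-line version of the question in the paragraph above.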
The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?
You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.
The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions and counter biases, or they will lose their value.
Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75% of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations and perform other tasks inherently unsafe for people.
The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone) and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.
Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.
Are you primarily interested in upgrading your existing processes, reducing costs and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.
Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.
Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but, if you can justify building your own, you may become one of the leaders in your market.
The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).
Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”
AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47% of the jobs in the U.S. at risk; a 2016 Forrester research report estimated it at 6%, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance.
At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.
It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corp., Oscar W. Larson, Netflix and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.
When it comes to adoption of technology, simple is most often better than complex. Steve Jobs and Apple went to great lengths to make their products simple. Without user adoption, products fail. Current technology trends continue the move toward simplicity with the advent of artificial intelligence and personal assistant tools like Amazon’s Echo and Google Home. Before you know it, these tools will enter the benefits world. The question is, who is going to be first and best? And if I am a benefits broker, how does this affect my business?
While many brokers are aware of the vendors that call on them or have booths at industry conferences, I believe the benefits technology race is going to heat up, with new competition entering the market. These new competitors see the market opportunity to automate large segments of our economy, including health insurance and healthcare. You may have heard of some of these companies, like Microsoft, Google, Salesforce.com and Apple. This would be in addition to current leaders such as ADP and Paychex. The stakes of the game will change, and the price of entry, from an investment standpoint, is in the hundreds of millions of dollars. Those with the capital will quickly outpace those with less capital.
Don’t be surprised when you start to see major mergers and acquisitions in the HR and benefits space. Could Microsoft buy Ultimate Software? Why not? Microsoft already purchased LinkedIn and recently hinted at getting deeper into the HR space.
When I look at products like the Amazon Echo and Google Home, I see products that have very quickly grabbed market share, with high rates of adoption. My wife, who is not an early adopter of technology, quickly became a user of Google Home. Why? Because it is easy. Would she have a better understanding of her health insurance if she could simply ask Google? Absolutely!
Benefits technology, on the other hand, has not seen broad adoption by employees. Yes, employers have bought systems or brokers have given them away, but utilization on the employee side is abysmal. I believe the reason is that a stand-alone solution does not offer enough value to generate broad adoption. Keep in mind that most people hardly use their healthcare in a given year, so there is little need to access such a system. I don’t know about you, but I can hardly remember the login to my computer, never mind something I may not use for six months.
The next generation of technology in the HR and benefits area is going to have broader and “everyday” value, while being much easier to use. Market-leading vendors, especially those with a great deal of capital, will invest in the latest technologies to try to win the technology race and gain more customers. And before you know it, you will be saying the following:
“Alexa, is Dr. John Smith from Boston in the Blue Cross network?”
“Ok, Google, request Friday off from work.”
“Hey, Siri, how much does the average office visit cost?”
“Alexa, what is the balance of my 401(k)?”
“Ok, Google, transfer $500 from my savings to checking.”
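Queries like these map naturally onto the “skills” model: the assistant parses an utterance into an intent plus slots, and a back-end handler answers. The sketch below shows that routing pattern in miniature; the intent names, handlers, and data are all hypothetical illustrations, not Amazon’s actual Alexa Skills Kit, and a real skill would call a secure benefits API rather than these in-memory stand-ins.

```python
# Toy stand-ins for a provider directory and account data (hypothetical).
NETWORK = {("Dr. John Smith", "Boston"): "Blue Cross"}
BALANCES = {"401(k)": 52_340.00}

def handle_provider_lookup(slots):
    """Answer: 'Is this doctor in my network?'"""
    carrier = NETWORK.get((slots["provider"], slots["city"]))
    if carrier:
        return f"Yes, {slots['provider']} is in the {carrier} network."
    return f"I could not find {slots['provider']} in your network."

def handle_balance(slots):
    """Answer: 'What is my account balance?'"""
    amount = BALANCES.get(slots["account"])
    return f"Your {slots['account']} balance is ${amount:,.2f}."

# Map each recognized intent name to its handler.
HANDLERS = {
    "ProviderLookupIntent": handle_provider_lookup,
    "AccountBalanceIntent": handle_balance,
}

def route(intent, slots):
    """Dispatch a recognized intent to its handler, as a skill back end would."""
    return HANDLERS[intent](slots)

print(route("ProviderLookupIntent",
            {"provider": "Dr. John Smith", "city": "Boston"}))
```

The hard part in practice is not this dispatch logic but the speech recognition and slot extraction in front of it, which is exactly what the assistant platforms provide.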
The advancement of technology and artificial intelligence has enabled many to have more personalized user experiences. Your Amazon Echo will “get to know you.” Maybe in the near future your doctor will get to know you a little better, too.
Many benefits brokers have committed to a single technology vendor with a mission of putting as many clients on the system as possible. This is a competitively risky position as more advanced solutions from highly capitalized companies come along. I don’t know many salespeople or business owners in any industry who like running around with the eighth-best product, especially when it isn’t necessary. The market and your customers do not care that you have invested thousands of dollars in a technology that may quickly fall out of favor.
One should take the advice of Jack Welch, former CEO of General Electric, who once said,
“If the rate of change on the outside exceeds the rate of change on the inside, the end is near.”
If you have purchased an Amazon Echo or Google Home, you don’t have to look far to see that the outside world is changing faster than the inside. The health insurance and healthcare industries often feel as if they are moving at a snail’s pace. Private exchanges were lauded as change when they are really a reincarnation of cafeteria plans from the ’80s.
With the Trump administration, changes in health insurance legislation may create a shift that empowers the consumer. The industry may need an army of people on the front lines to help it move to a whole new paradigm. The vendors will need help, and employers and employees will need it, too. The technology is there. Alexa is ready. Are you?