
How AI Will Define Insurance Workforce

Prior to COVID-19, the U.S. boasted historically low unemployment and a roaring economy. Nearly every industry was expected to face a severe talent shortage within the next 10 to 20 years. But then March hit, and the world turned upside-down.

Since then, the pendulum has swung in the other direction. Current unemployment figures report that as many as 10.7 million people are out of work, and, despite this sudden abundance of available workers, staffing issues remain — they’ve just become more complex.

To navigate this wildly fluctuating environment, companies will rely on data for decision-making about hiring, training and countless other matters that affect the bottom line. This will require tools, like artificial intelligence (AI), to make sense of data and to adjust quickly amid uncertainty.

The best way to examine AI’s value in today’s uncertain world is to look at how it can work within a specific industry. Doing so makes it possible to show practical applications from which lessons can then be applied to other industries.

Commercial Insurance: A Case Study

Like other industries, commercial insurance faced a significant hiring crisis pre-COVID-19. The average claims adjuster remained in the industry for just four years — about the time it takes to gain full expertise — and those workers who stuck with claims have inched closer to retirement. So, this multibillion-dollar market is at risk of losing much of the human brain trust that enables current systems to run, as new workers cannot be hired, trained and retained fast enough to balance the scales.

Fast forward to today. Commercial insurance looks markedly different. The types and volumes of claims are changing. For example, claims related to COVID-19 contact or work-from-home circumstances are rising quickly, as are post-termination claims, while traditional claims have dropped.

At the same time, access to traditional healthcare has been in flux. To combat the limitations on available providers, telehealth solutions have exploded, opening up a whole new set of providers that claims reps need to become somewhat familiar with to facilitate claims accordingly — claims that bear a greater potential for fraud and litigation, which cost companies millions of dollars each year.

In short, almost everything about claims operations has changed — and, like many other industries that have been traditionally slow to adapt to new challenges, commercial insurance faces real hurdles.

The Importance of Data and Intelligence

Data is the key to overcoming dramatic changes within a relatively static industry. Maintaining a pulse on what’s happening across a business, or with a specific claim, and how it relates to things experienced previously is important; spotting trends early is vital. Organizations require data to determine if their plans and practices are working — and, if they are not, data should be used to drive intervention and adaptation.

But thousands to millions of data points alone won’t save the day if an organization doesn’t have the capability to understand what the data is telling it. What is the context? How are the points connected? If a trend continues, what will the effects be six months or two years from now?

AI systems unlock the meaning of data to make it useful, pinpointing where organizations need to make adjustments. In commercial insurance, AI could allow for expanding provider networks to offer better, faster access to care. To actually expand networks using quality providers, systems need to tap into more data to learn which providers have achieved the best outcomes on which types of cases.

What is particularly exciting about implementing AI in this rapidly changing environment is that interpretations of data are not fixed. Machine learning capabilities are constantly refining and updating insights so that organizations — and their people — can respond accordingly.

See also: How AI Transforms Risk Engineering

Designing the Future Workforce

So, if data analytics and AI become staples in modern business, how do they solve the human resource problem? What do they mean for the future workforce? The answer is threefold.

Data determines what your hiring needs actually are: In a world that is changing so quickly, your business might not need as many people specialized in a certain area, whereas new opportunities or divisions may emerge. Your business may be forced to alter its offerings to match customer needs. Data is the guide; it lets you home in on exactly what skills are required.

AI guides training: Because AI can analyze so much data so quickly, new hires can access the information and prompts they need to do their jobs well the moment they need them. There is far less fumbling around and less dependence on senior colleagues. This is not to discount the value of experience, but it means that workers can reach a competent level much faster; what they lack in experience and intuition is replaced by data-driven insights and standardized practices.

AI augments jobs: AI solutions take care of many of the rote tasks workers are routinely bogged down with today. As a result, employees can focus on making more efficient, informed decisions; they can actually use their brains more. AI flags potential errors or problems so that they can be addressed before they escalate. Reps can focus on delivering compassion at a time when people need it most.

While COVID-19 has fundamentally altered the future workforce, tools like AI can help get it back on track. By leveraging them effectively, organizations will become nimbler and more responsive to conditions, while employees become more knowledgeable and effective.

Beware the Dark Side of AI

Within the Biden administration’s first weeks, the Office of Science and Technology Policy was elevated to a cabinet-level position, and Biden appointed Alondra Nelson, a scholar of science, technology and social inequality, as its deputy director. In her acceptance speech, Nelson said, “When we provide inputs to the algorithm, when we program the device, when we design, test and research, we are making human choices.” We can expect artificial intelligence (AI) bias, ethics and accountability to become more significant issues under the new president.

The financial services industry has a long and dark history of redlining and underserving minority communities. Regardless of regulation, insurers must take steps now to address the ethical concerns surrounding AI and data. 

Insurers are investing heavily and increasingly adopting AI and big data to improve business operations. Juniper Research estimates the value of global insurance premiums underwritten by AI will exceed $20 billion by 2024. Allstate considers its cognitive AI agent, Amelia, which has more than 250,000 conversations per month with customers, an essential component of its customer service strategy. Swiss Re Institute analyzed patent databases and found the number of machine-learning patents filed by insurers has increased dramatically from 12 in 2010 to 693 in 2018. 

There is no denying that AI and big data hold a lot of promise to transform insurance. Using AI, underwriters can spot patterns and connections at a scale impossible for a human to do. AI can accelerate risk assessments, improve fraud detection, help predict customer needs, drive lead generation and automate marketing campaigns. 

However, AI can reproduce and amplify historical human and societal biases. Some of us can still remember Microsoft’s disastrous unveiling of its AI chatbot, Tay, on Twitter five years ago. Described as an experiment in “conversational understanding,” Tay was supposed to mimic the speaking style of a teenage girl and entertain 18- to 24-year-old Americans in a positive way. Instead of casual and playful conversation, Tay repeated back the politically incorrect, racist and sexist comments Twitter users hurled her way. In just one day, Twitter users had taught Tay to be misogynistic and racist.

In a study evaluating 189 facial recognition algorithms from 99 developers, the U.S. National Institute of Standards and Technology found that algorithms developed in the U.S. had trouble recognizing Asian, African-American and Native American faces. By comparison, algorithms developed in Asian countries recognized Asian and Caucasian faces equally well.

Apple Card’s algorithm sparked an investigation by financial regulators soon after it launched when it appeared to offer wives lower credit lines than their husbands. Goldman Sachs has said its algorithm does not use gender as an input. However, gender-blind algorithms drawing on data that is biased against women can lead to unwanted biases. 

Even when we remove gender and race from algorithmic models, there remains a strong correlation between race or gender and the remaining data inputs. ZIP codes, disease predispositions, last names, criminal records, income and job titles have all been identified as proxies for race or gender. Biases creep in this way.
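The proxy effect is easy to demonstrate with a toy simulation (all numbers below are invented for illustration, not drawn from any insurer's data). A "blind" pricing rule that uses only ZIP code, in a population where ZIP code correlates with group membership, never sees the protected attribute, yet its average rate still differs sharply by group:

```python
import random

random.seed(0)

# Simulate 10,000 applicants. Group membership (the protected attribute)
# is hidden from the model, but ZIP code correlates with it: group A
# lives mostly in ZIP 1, group B mostly in ZIP 2 (10% crossover).
applicants = []
for _ in range(10_000):
    group = random.random() < 0.5
    zip_code = 1 if (group ^ (random.random() < 0.1)) else 2
    applicants.append((group, zip_code))

# "Unaware" pricing model: the rate depends only on ZIP-level history.
rate = {1: 1.30, 2: 1.00}  # hypothetical loss-ratio multipliers by ZIP

def avg(g):
    members = [rate[z] for grp, z in applicants if grp == g]
    return sum(members) / len(members)

# Despite never using group membership, the model charges group A more.
print(f"avg rate, group A: {avg(True):.3f}")   # close to 1.27
print(f"avg rate, group B: {avg(False):.3f}")  # close to 1.03
```

The model is formally "fair through unawareness," yet the outcome gap survives because the proxy carries the protected information; removing the sensitive column from the inputs does not remove it from the data.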

See also: Despite COVID, Tech Investment Continues

There is another issue: the inexplicability of black-box predictive models. These models, created by machine-learning algorithms from the data inputs we provide, can be highly accurate. However, they are also so complicated that even their programmers cannot explain how they reach their final predictions, according to an article in the Harvard Data Science Review. Initially developed for low-stakes decisions like online advertising or web search, these black-box machine-learning techniques are increasingly making high-stakes decisions that affect people’s lives.

Successful AI and data analytics users know not to blindly follow wherever the data leads or fall into the trap of relying on data that is biased against minority and disadvantaged communities. Big data is not always able to capture the granular insights that explain human behaviors, motivations and pain points.

Consider Infinity Insurance, an auto insurance provider focused on offering non-standard auto insurance to the Hispanic community. Relying on historical data, insurers had for years charged substantially higher prices for drivers with certain risk factors, including new or young drivers, drivers with low or no credit scores or drivers with an unusual driver’s license status. 

Infinity recognized that first-generation Latinos, who are not necessarily high-risk drivers, often have these unusual circumstances. Infinity reached out to Hispanic drivers offering affordable non-standard policies, bilingual customer support and sales agents. Infinity has grown to become the second-largest writer of non-standard auto insurance in the U.S. In 2018, Kemper paid $1.6 billion to acquire Infinity. 

Underserved communities offer great opportunities for expansion that are often missed or overlooked when relying solely on data sets and data inputs. 

Insurers must also actively manage AI and data inputs to avoid racial bias and look beyond demographics and race to segment out the best risks and determine the right price. As an industry, we have made significant progress toward removing bias. We cannot allow these fantastic tools and technologies to enable this harmful and unintended discrimination. We must not repeat these mistakes. 

No More Apples-to-Apples Comparisons

I don’t know about you, but if I never hear another client, or prospective client, say they “just want an apples-to-apples comparison,” it will be too soon! 

As insurance agents, we don’t just hate that expression, we also know it’s a terrible idea for our clients. They ask for the comparison simply because insurance is complex, and they want to simplify information so they can make a decision and get on with their business. Unfortunately, many in the agent community have cooperated with this poor risk management strategy, which serves neither the client nor the agency, whether for personal or commercial lines. It reduces insurance and risk transfer to a commodity, which it is not and never will be, and results in inadequate or inappropriate coverage that rears its ugly head when a claim arises. 

In the future, agents who cooperate with apples-to-apples quoting will struggle. To understand why, we need only look at how technology is changing the rules of doing business.

Technology-Driven Winners

Technology, driven largely by artificial intelligence, will make it possible for customers to be better-educated, not only on their risks but on the various risk transfer mechanisms available to them. Smart systems will allow both consumer and commercial insurance purchasers to match their needs with available policy coverage in new and unprecedented ways. Also, relentless pressure for improved bottom lines fostered by competition in the marketplace will put an ever-increasing spotlight on the cost of insurance, forcing businesses to make more informed decisions. All of this means that agents must up their game from a technological perspective to prosper. Fortunately, technology will help in at least two ways. 

First, the improving technical tools available to agents will make it easier for them to select specific policy coverage and language for unique client needs. And improving integration between agency management systems and carrier technology will allow better product selection. Within a few years, this integration will increasingly be done automatically, freeing agents’ time. Additionally, as insurance companies continue to learn how to analyze the massive data they are collecting, their pricing methodologies will change. It will become easier for them, and their agency partners, to propose bespoke coverage with tailored pricing for smaller and smaller risks.

Second, one technology that can make a profound difference in moving agents away from commoditized selling is virtual transportation. Think Zoom, Microsoft Teams and other widely adopted platforms. Dan Sullivan of Strategic Coach points out that Zoom is really a transportation technology: it lets you transport yourself over any distance and enables face-to-face communication with virtually no time or expense.

But Zoom and similar products are merely the Model T version. Within five years, there will be widespread adoption of augmented reality systems that allow full, 360-degree, three-dimensional, almost physical communication between people at any distance. Agents will be able to market much more broadly than ever before, and to fine-tune and narrow the niche or target markets in which they work. This will result in increased collaboration among agents, clients and insurance companies as all three seek to fine-tune not only coverage but pricing, as well.

Agents who adopt these technologies and master them will win. They will write the most profitable business and experience the highest growth rates while leaving other agencies using old technology and outdated mindsets to increasingly fight over the less profitable scraps of business. While this future, which is coming rapidly, is exciting, it is also potentially frightening because busy agents often aren’t sure what to do to prepare. 

See also: 2021: The Great Reset in Insurance

Preparing for Change

The first thing to do to be ready for this impending future is simple: Master your agency management system (AMS) so that data is uniform and complete. Most agencies, according to all major AMS companies, use only a fraction of the software capabilities already at their disposal. Worse, agency employees are not consistent in how they enter, preserve and manipulate data. This data is the raw material for the customized coverage and pricing model of the future. But if it is not accurate, complete and consistent, that future will be much harder to achieve. So, agents should start now by learning how to maximize the capability that is already present in their AMS and working on data collection and discipline. 

A second cultural objective to consider is implementing and enforcing consistent, careful annual coverage reviews with both prospects and clients. While this is standard practice in many agencies, it is often overlooked or involves only a cursory review of changes in business exposure or coverage needs. In the future, when clients know more about their own risks and coverage options, this won’t be adequate. Agents should begin now to increase their thoroughness. 

Third, understand, use and maximize your current carrier’s technology tools. Hartford Insurance Senior Vice President Matthew Kirk said in a recent podcast that using the tools that carriers already provide is one of the biggest opportunities for both agents and companies to reduce costs, increase speed and deliver appropriate solutions. By having serious conversations with carriers about capabilities, agencies can find another way to prepare for a future in which technology increasingly dominates competitiveness.

Finally, agencies should consider adding tools now from those that already exist. For example, many agencies find that tools like Risk Match allow them to do a better and faster job of matching client risk to carrier appetite. And tools like ModMaster let agents help clients understand what drives their workers’ compensation costs, moving agent/client conversations past price — to collaboration on risk reduction and cost elimination. There are many other similar tools in the market that may suit agencies’ specific situations. The key is to become aware of these tools and add them to your arsenal as soon as possible.

Taking these steps, which appear deceptively simple, will prepare agencies for a future in which the client/agent conversation shifts from comparing fruit to something more like a tailor fitting a client for a bespoke suit.

AI and Discrimination in Insurance

This past summer, a group of African-American YouTubers filed a putative class action against YouTube and its parent, Alphabet. The suit alleges that YouTube’s AI algorithms have been applying “Restricted Mode” to videos posted by people of color, regardless of whether those videos actually featured elements YouTube restricts, such as profanity, drug use, violence, sexual assault or details about events resulting in death. The lawsuit alleges that this labeling has occurred through targeting video keywords like “Black Lives Matter,” “BLM,” “racial profiling,” “police shooting” or “KKK.” YouTube says its algorithms do not identify the race of the poster.

Whether the allegations are true or not, the case illustrates AI’s potential for inadvertent discrimination. It is easy to see how an algorithm could learn to use variables seemingly unrelated to race, sex, religion or another protected class to predict the outcomes it was designed to target. In the YouTube example, we could imagine the algorithm noting a link between the mentioned keywords and videos depicting violence, thus adding the keywords to the factors it weighs when deciding whether Restricted Mode should be applied to a given video. The algorithm is simply programmed to restrict videos containing violence, but in such a situation it could end up illegally restricting videos posted by African-American activists that contain no restricted content at all.

In response to such potential pitfalls, the NAIC this past August issued a set of principles regarding AI. The set includes principles about transparency, accountability, compliance, fairness and ethics. The only way to ensure compliance, fairness and that ethical standards are maintained is for AI actors to be accountable for the AI they use and create — and the only way for these actors to properly monitor their AI tools is by ensuring transparency.

As Novarica’s most recent joint report with the law firm Locke Lord on insurance technology and regulatory compliance notes, all states follow some version of the NAIC’s Unfair Trade Practice Act (“Model Act”), “which prohibits, generally, the unfair discrimination of ‘individuals or risks of the same class and of essentially the same hazard’ with respect to both rates and insurability.” There are many possible insurance use cases that AI and data-based technology enable, like analytics-driven targeting, pre-underwriting, rules-based offer guidance and pre-fill data. Although these capabilities can be delivered without AI, the effort required to do so has historically been prohibitive, meaning that using AI will be essential in the coming years — as will ensuring that AI does not discriminate against protected classes.

A key area for insurers to monitor is the use of third-party data in underwriting processes that may not be directly related to the risk being insured. A good example of this is credit score, the use of which several states have restricted during the pandemic. NYDFS’s Circular No. 1 lists other external consumer data and information sources for underwriting that have “the strong potential to mask the forms of [prohibited] discrimination… Many of these external data sources use geographical data (including community-level mortality, addiction or smoking data), homeownership data, credit information, educational attainment, licensures, civil judgments and court records, which all have the potential to reflect disguised and illegal race-based underwriting.” Insurers must thus have transparency into what factors an algorithm is considering and how it arrives at decisions, and they must be able to adjust the included factors easily.

What will the regulatory future hold? Benjamin Sykes of Locke Lord foresees new model regulations subjecting underwriting criteria and risk-scoring methods to regular data calls, requiring certification by insurers that the proper analysis to avoid any material disparate impact has been performed, and establishing a penalty regime focused on restitution, above and beyond the difference in premium, to those hurt by an algorithm’s decisions.

CIOs will need to consider how to handle the evolution of various regulations as they arise and their implications for how third-party data is used, how machine-learning algorithms are developed and applied and how AI models “learn” to optimize outcomes. Both the regulations and the technology are moving targets, so CIOs and the insurers they represent must keep moving, too.

A Breakthrough in AI

You may have seen articles last week about a breakthrough for artificial intelligence in medicine that managed to be both arcane and exciting at the same time. Google’s DeepMind research arm solved a 50-year-old problem related to predicting how proteins fold themselves — news only for geeks, right? Think again. Understanding how these chains of amino acids fold themselves into 3D shapes, providing the structural components for the tissues in our bodies, opens up all sorts of possibilities for exploring our inner workings and for rapid development of drugs.

What I haven’t yet seen explained — amid all the speculation about just how many Nobel Prizes in Medicine will spring from the work — is that the type of AI that DeepMind developed to solve the protein-folding conundrum should also provide breakthroughs in insurance. This type of AI can take dead aim at some core issues in insurance, especially in underwriting and claims.

AI is funny. It tends to be talked about as a single thing, but it’s really a whole bunch of things, pushing against limits in a wide range of directions. And some of the progress is flashy without being all that important.

For instance, when IBM’s Watson defeated the greatest Jeopardy champions in 2011, IBM talked about sending Watson to medical school. After all, if it could beat Ken Jennings, what couldn’t it do? But Watson’s breakthrough was in natural language processing, a great advance if you want to be able to talk to a computer but little help if you’re trying to cure cancer. Similarly, when DeepMind beat the world champion at Go in 2017, the event made for fun headlines but not much more. The AI is terrific for any setting where there are a small number of rules and where the computer can play games against itself ad infinitum to optimize its approach, but how many real-world situations fit that description?

By contrast, what DeepMind accomplished in solving the protein-folding problem is of deep significance because the approach the scientists used — known as supervised deep learning — can be applied to so many business situations, including in insurance.

Without getting too deep into the details (which you can find in this excellent piece in Fortune, if you want to geek out like I did), the scientists faced a problem far more complex than businesses face: trying to figure out how a protein folds itself, in the milliseconds after it is created, based on a host of forces. While we’ve been able to sequence the human genome for more than 15 years now, you also have to know how the string of amino acids folds, because the shape determines so much of how the protein behaves.

Although a famous conjecture in 1972 said it should be possible to predict a protein’s shape just from the sequence of amino acids in it, the computation had proved to be too complex. Instead, the shape of a protein had to be determined through a complex chemical process and, often, through the use of a special type of X-ray produced by a synchrotron the size of a football stadium. The process could take a year and cost $120,000, for a single protein.

(I realize I may be giving you flashbacks to high school biology and chemistry and perhaps some unpleasant memories, but I’m just about done with the science and am getting to the implications for insurance.)

What the scientists had going in their favor were two things: a sort of answer key, because of some 170,000 proteins whose shape had already been determined experimentally, and some coaching tips that could help the AI focus on the key variables.

That starts to sound like a business situation, especially, in terms of insurance, in claims and underwriting. If you want to train an AI to take over tasks, you have underwriters and adjusters who can tell you what the right answer is and who can guide the AI’s self-training by steering it toward certain variables. Over time, that AI can become as good or better than a human at, say, looking at photos of the damage in a car accident and estimating the damage.
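That "answer key plus coaching" recipe is just supervised learning. As a minimal sketch — with invented features and costs, not any insurer's actual model — imagine fitting a damage estimator to claims a human adjuster has already settled, where the settled amounts serve as the answer key:

```python
import random

random.seed(1)

# Hypothetical setup: each claim is reduced to two adjuster-chosen
# features (say, dent area and number of parts affected), and the label
# is the repair cost the human adjuster actually settled on.
def make_claim():
    dent_area = random.uniform(0, 10)
    parts = random.randint(0, 5)
    cost = 300 * dent_area + 450 * parts + random.gauss(0, 50)
    return (dent_area, parts), cost

training_set = [make_claim() for _ in range(2000)]

# Fit per-feature weights by stochastic gradient descent on squared error.
w = [0.0, 0.0]
lr = 0.0005
for _ in range(200):
    for (x1, x2), y in training_set:
        err = w[0] * x1 + w[1] * x2 - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2

# The learned weights recover the per-unit costs hidden in the data.
print(f"learned weights: {w[0]:.0f}, {w[1]:.0f} (data generated with 300, 450)")
```

The "coaching tips" from experts correspond to the feature choice here: adjusters steer the model toward the variables that matter, and the historical settlements supply the labels, just as the 170,000 experimentally solved proteins did for DeepMind.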

At least, that’s how it worked for DeepMind on a much harder problem. On a scale where 100 is perfect accuracy, the previous best AIs scored about 50, well below empirical methods, which scored about 90. But in a recent competition in which AIs predicted the shape of proteins whose forms had been determined experimentally but had yet to be published, DeepMind’s median score was 92 — a computer prediction outscored that year-long, expensive, physical process. Importantly, DeepMind’s AI can tell scientists how confident it is about each prediction, so they know how heavily to rely on it.

The immediate application for the DeepMind AI will, of course, be in medicine. There are some 200 million proteins whose shapes haven’t yet been determined, and the AI can quickly go to work on those. (The required computing power is only perhaps 200 of the graphics chips used in a PlayStation.) Understanding the shapes will help researchers see what drugs might interact with which proteins, potentially reducing drug development time by years and lowering costs by hundreds of millions of dollars.

However, how this AI moves into the mainstream remains to be determined. DeepMind functions as a research arm of Google, not as a business, and has promised to ensure that the software will “make the maximal positive societal impact,” but you could hardly blame Google if it tried to recoup the development costs through charges to Big Pharma. Only once this AI filters through medicine will it, I imagine, spread to other business problems, such as those that insurance faces.

For me, it’s enough to know at the moment that this sort of AI is possible, because that means that a lot of smart people will accelerate their efforts to bring supervised deep learning to insurance. While the wins at Jeopardy and Go were startling, the AI that solved the protein-folding problem will prove to be far more consequential.

Stay safe.


P.S. Here are the six articles I’ll highlight from the past week:

Smart Contracts in Insurance

Smart contracts will likely be used first for simpler insurance processes like underwriting and payouts, then scale as technology and the law allow.

Time to Try Being an Entrepreneur?

With businesses cutting back, many are asking that question. But there are huge misconceptions about how to think about the issue.

Surging Costs of Cyber Claims

With home-working widespread because of COVID-19, security around access and authentication points is critical.

4 Stages of Dominance in Performance

Chances are, you have natural gifts. However, many of the skills you need must be developed, nurtured and maintained intentionally.

Vintage Wine? Sure. But Vintage Tech?

Legacy systems that have evolved over long periods can be bloated and far less efficient and cost-effective than more modern technologies.

Do Health Plans Have the Right Data?

Health plans strive to deliver efficiency and great customer experiences and improve care outcomes. But what data are they missing?