How AI Will Define Insurance Workforce

Prior to COVID-19, the U.S. boasted historically low unemployment and a roaring economy. Nearly every industry was expected to face a severe talent shortage within the next 10 to 20 years. But then March hit, and the world turned upside-down.

Since then, the pendulum has swung in the other direction. Current unemployment figures report that as many as 10.7 million people are out of work, and, despite this sudden abundance of available workers, staffing issues remain — they’ve just become more complex.

To navigate this wildly fluctuating environment, companies will rely on data for decision-making about hiring, training and countless other matters that affect the bottom line. This will require tools, like artificial intelligence (AI), to make sense of data and to adjust quickly amid uncertainty.

The best way to examine AI’s value in today’s uncertain world is to look at how it can work within a specific industry. Doing so makes it possible to show practical applications from which lessons can then be applied to other industries.

Commercial Insurance: A Case Study

Like other industries, commercial insurance faced a significant hiring crisis pre-COVID-19. The average claims adjuster remained in the industry for just four years — about the time it takes to gain full expertise — and those workers who stuck with claims have inched closer to retirement. So, this multibillion-dollar market is at risk of losing much of the human brain trust that enables current systems to run, as new workers cannot be hired, trained and retained fast enough to balance the scales.

Fast forward to today. Commercial insurance looks markedly different. The types and volumes of claims are changing. For example, claims related to COVID-19 contact or work-from-home circumstances are rising quickly, as are post-termination claims, while traditional claims have dropped.

At the same time, access to traditional healthcare has been in flux. To work around the limited availability of providers, telehealth solutions have exploded, opening up a whole new set of providers that claims reps must become familiar with to facilitate claims accordingly — claims that bear a greater potential for fraud and litigation, which cost companies millions of dollars each year.

In short, almost everything about claims operations has changed — and, like many other industries that have been traditionally slow to adapt to new challenges, commercial insurance faces real hurdles.

The Importance of Data and Intelligence

Data is the key to overcoming dramatic changes within a relatively static industry. Maintaining a pulse on what’s happening across a business, or with a specific claim, and how it relates to things experienced previously is important; spotting trends early is vital. Organizations require data to determine if their plans and practices are working — and, if they are not, data should be used to drive intervention and adaptation.

But thousands or even millions of data points alone won’t save the day if an organization doesn’t have the capability to understand what the data is telling it. What is the context? How are the points connected? If a trend continues, what will the effects be six months or two years from now?

AI systems unlock the meaning of data to make it useful, pinpointing where organizations need to make adjustments. In commercial insurance, AI could allow for expanding provider networks to offer better, faster access to care. To actually expand networks using quality providers, systems need to tap into more data to learn which providers have achieved the best outcomes on which types of cases.
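
As a minimal illustration of that kind of analysis (with invented provider IDs, case types and outcome figures), a system might rank providers by average outcome for a given case type:

```python
import pandas as pd

# Hypothetical claims-outcome data; all names and numbers are invented.
claims = pd.DataFrame({
    "provider_id": ["P1", "P1", "P2", "P2", "P3", "P3"],
    "case_type": ["back injury"] * 6,
    "days_to_return_to_work": [42, 38, 65, 70, 35, 40],
})

# Rank providers by average outcome for one case type: the kind of
# signal a network-expansion model would learn from at scale.
ranking = (
    claims[claims["case_type"] == "back injury"]
    .groupby("provider_id")["days_to_return_to_work"]
    .mean()
    .sort_values()
)
print(ranking)  # best (lowest) average recovery time first
```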

What is particularly exciting about implementing AI in this rapidly changing environment is that interpretations of data are not fixed. Machine learning capabilities are constantly refining and updating insights so that organizations — and their people — can respond accordingly.

See also: How AI Transforms Risk Engineering

Designing the Future Workforce

So, if data analytics and AI become staples in modern business, how do they solve the human resource problem? What do they mean for the future workforce? The answer is threefold.

Data determines what your hiring needs actually are: In a world that is changing so quickly, your business might not need as many people specialized in a certain area, whereas new opportunities or divisions may emerge. Your business may be forced to alter its offerings to match customer needs. Data is the guide; it lets you home in on exactly what skills are required.

AI guides training: Because AI can analyze so much data so quickly, new hires can access the information and prompts they need to do their jobs well exactly when they need them. There is less fumbling around and less dependence on senior colleagues. This is not to discount the value of experience, but it means workers can reach competence much faster; what they lack in experience and intuition is supplied by data-driven insights and standardized practices.

AI augments jobs: AI solutions take care of many of the rote tasks workers are routinely bogged down with today. As a result, employees can focus on making more efficient, informed decisions; they can actually use their brains more. AI flags potential errors or problems so that they can be addressed before they escalate. Reps can focus on delivering compassion at a time when people need it most.

While COVID-19 has fundamentally altered the future workforce, tools like AI can help get it back on track. By leveraging them effectively, organizations will become nimbler and more responsive to conditions, while employees become more knowledgeable and effective.

Beware the Dark Side of AI

Within the Biden administration’s first weeks, the Office of Science and Technology Policy was elevated to a cabinet-level position, and Biden appointed Alondra Nelson, a scholar of science, technology and social inequality, as deputy director. In her acceptance speech, Nelson shared, “When we provide inputs to the algorithm, when we program the device, when we design, test and research, we are making human choices.” We can expect artificial intelligence (AI) bias, ethics and accountability to become more significant issues under the new president.

The financial services industry has a long and dark history of redlining and underserving minority communities. Regardless of regulation, insurers must take steps now to address the ethical concerns surrounding AI and data. 

Insurers are investing heavily and increasingly adopting AI and big data to improve business operations. Juniper Research estimates the value of global insurance premiums underwritten by AI will exceed $20 billion by 2024. Allstate considers its cognitive AI agent, Amelia, which has more than 250,000 conversations per month with customers, an essential component of its customer service strategy. Swiss Re Institute analyzed patent databases and found the number of machine-learning patents filed by insurers has increased dramatically from 12 in 2010 to 693 in 2018. 

There is no denying that AI and big data hold a lot of promise to transform insurance. Using AI, underwriters can spot patterns and connections at a scale impossible for a human to do. AI can accelerate risk assessments, improve fraud detection, help predict customer needs, drive lead generation and automate marketing campaigns. 

However, AI can reproduce and amplify historical human and societal biases. Some of us can still remember Microsoft’s disastrous unveiling of its new AI chatbot, Tay, on social media site Twitter five years ago. Described as an experiment in “conversational understanding,” Tay was supposed to mimic the speaking style of a teenage girl, and entertain 18- to 24-year-old Americans in a positive way. Instead of casual and playful conversations, Tay repeated back the politically incorrect, racist and sexist comments Twitter users hurled her way. In just one day, Twitter had taught Tay to be misogynistic and racist. 

In a study evaluating 189 facial recognition algorithms from 99 developers, the U.S. National Institute of Standards and Technology found that algorithms developed in the U.S. had trouble recognizing Asian, African-American and Native American faces. By comparison, algorithms developed in Asian countries recognized Asian and Caucasian faces equally well.

Apple Card’s algorithm sparked an investigation by financial regulators soon after it launched when it appeared to offer wives lower credit lines than their husbands. Goldman Sachs has said its algorithm does not use gender as an input. However, gender-blind algorithms drawing on data that is biased against women can lead to unwanted biases. 

Even when we remove gender and race from algorithmic models, a strong correlation remains between race or gender and other data inputs. ZIP codes, disease predispositions, last names, criminal records, income and job titles have all been identified as proxies for race or gender. Biases creep in this way.
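
One way to surface such proxies, sketched here on invented data with the protected attribute retained only for auditing, is to test each supposedly neutral input for statistical association with it:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented applicant data; "gender" is kept for auditing only and is
# never used as a model input.
df = pd.DataFrame({
    "zip_code": ["10451", "10451", "90210", "90210", "10451", "90210"],
    "job_title": ["nurse", "teacher", "engineer", "lawyer", "nurse", "engineer"],
    "gender": ["F", "F", "M", "M", "F", "M"],
})

# A feature strongly associated with the protected attribute can act
# as a proxy for it even when the attribute itself is excluded.
for feature in ["zip_code", "job_title"]:
    table = pd.crosstab(df[feature], df["gender"])
    chi2, p_value, _, _ = chi2_contingency(table)
    print(f"{feature}: chi2={chi2:.2f}, p={p_value:.3f}")
```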

See also: Despite COVID, Tech Investment Continues

There is another issue: the inexplicability of black-box predictive models. These models, created by machine-learning algorithms from the data inputs we provide, can be highly accurate. However, they are also so complicated that even the programmers themselves cannot explain how the algorithms reach their final predictions, according to an article in the Harvard Data Science Review. Initially developed for low-stakes decisions like online advertising or web search, black-box machine-learning techniques are increasingly making high-stakes decisions that affect people’s lives.

Successful AI and data analytics users know not to follow blindly wherever the data leads or to fall into the trap of relying on data that are biased against minority and disadvantaged communities. Big data is not always able to capture the granular insights that explain human behaviors, motivations and pain points.

Consider Infinity Insurance, an auto insurance provider focused on offering non-standard auto insurance to the Hispanic community. Relying on historical data, insurers had for years charged substantially higher prices for drivers with certain risk factors, including new or young drivers, drivers with low or no credit scores or drivers with an unusual driver’s license status. 

Infinity recognized that first-generation Latinos, who are not necessarily high-risk drivers, often have these unusual circumstances. Infinity reached out to Hispanic drivers offering affordable non-standard policies, bilingual customer support and sales agents. Infinity has grown to become the second-largest writer of non-standard auto insurance in the U.S. In 2018, Kemper paid $1.6 billion to acquire Infinity. 

Underserved communities offer great opportunities for expansion that are often missed or overlooked when relying solely on data sets and data inputs. 

Insurers must also actively manage AI and data inputs to avoid racial bias and look beyond demographics and race to segment out the best risks and determine the right price. As an industry, we have made significant progress toward removing bias. We cannot allow these fantastic tools and technologies to enable this harmful and unintended discrimination. We must not repeat these mistakes. 

How to Put a Stop to AI Bias

Imagine you were suddenly refused insurance coverage, or your premium increased 50% just because of your skin color. Imagine you were charged more just because of your gender. It can happen, because of biased algorithms.

While technology improves our lives in so many ways, can we rely on it entirely for insurance decisions?

Algorithmic Bias

Algorithms will most likely have flaws; they are made by humans, after all, and they learn only from the data we feed them. So we have to work hard to avoid algorithmic bias — an unfair outcome based on factors such as race, gender or religious views.

It is highly unethical (and even illegal) to make decisions based on these factors in real life. So why allow algorithms to do so? 

Algorithmic Bias and Insurance Problems

In 2019, a bias problem surfaced in healthcare. An algorithm used to prioritize care gave white patients more attention and better treatment than Black patients with the same illness, because it relied on insurance data and predictions about which patients would be more expensive to treat. If algorithms use biased data, we can expect biased results.
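
A toy simulation with entirely made-up numbers shows the mechanism: when one group historically generates lower costs for the same level of illness, a model that ranks patients by predicted cost will under-flag that group for care:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented assumption: both groups have identical true medical need.
need = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Group B historically generates lower costs for the same need
# (e.g., because of unequal access to care).
cost = need * np.where(group == 0, 1.0, 0.7) + rng.normal(0, 5, n)

# Flagging the top 10% by cost (a stand-in for a cost-trained model)
# selects group B far less often, despite identical need.
flagged = cost > np.quantile(cost, 0.9)
print("group A flagged:", flagged[group == 0].mean())
print("group B flagged:", flagged[group == 1].mean())
```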

It doesn’t mean we need to stop using AI — but, rather, that we must make an effort to improve it.

How Does Algorithmic Bias Affect People?

Millions of people of color have already been affected by algorithmic bias, mostly through algorithms used by healthcare facilities. Algorithmic bias has also influenced social media.

It is essential to keep working on this problem. In the U.S. alone, algorithms manage care for about 200 million people. The issue is difficult to tackle because health data is private and thus hard to access. But it is simply unacceptable that Black people had to be sicker than white people to get more serious help, and were charged more for the same treatment.

How to Stop This AI Bias?

We have to find factors beyond insurance costs to use in calculating someone’s medical needs. It is also imperative to test the model continually and to offer those affected a way to provide feedback. Reviewing that feedback regularly helps ensure the model is working as it should.

See also: How to Evaluate AI Solutions

We have to use data that reflects a broader population and not just one group of people — if there is more data collected on white people, other races may be discriminated against.

One approach is “synthetic data,” which is artificially generated and which many data scientists believe is far less biased. There are three main types: fully generated data, partially generated data and data derived by adjusting real records. Using synthetic data makes it much easier to analyze a given problem and arrive at a solution.

The comparison with real data is straightforward: if the database isn’t big enough, synthetic data can enlarge it and make it more diverse; and if the database already contains a large number of inputs, synthetic data can balance it so that no group is excluded or mistreated.

The good news is that generating data is less expensive than collecting it. Real-life data requires far more work, such as collection and measurement, while synthetic data can rely on machine learning. Besides saving money, synthetic data also saves time, because collecting real data can be a very long process.

For example, say we are working with a facial recognition algorithm. If we show the algorithm more examples of white people than of any other race, it will work best on Caucasian samples. So we should make sure enough data is produced that all races are equally represented.
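
In tabular settings, a simple cousin of this idea is to oversample the underrepresented group before training; this resamples existing records rather than generating genuinely new points the way techniques like SMOTE do. A minimal sketch on invented data:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Invented dataset: 900 samples from group "A", 100 from group "B".
groups = np.array(["A"] * 900 + ["B"] * 100)
features = rng.normal(size=(1000, 4))

# Resample group B (with replacement) until both groups are equally
# represented in the training data.
b_idx = np.where(groups == "B")[0]
extra = rng.choice(b_idx, size=800, replace=True)
balanced_idx = np.concatenate([np.arange(1000), extra])

print(Counter(groups[balanced_idx]))  # equal counts for A and B
train_X = features[balanced_idx]
```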

Synthetic data does have its limitations: there is no built-in mechanism to verify that the generated data is accurate.

AI obviously plays a significant role in the insurance sector; by the end of 2021, hospitals will invest $6.6 billion in AI. But human involvement remains essential to make sure algorithmic bias doesn’t have the last say. People are the ones who can make algorithms work better and overcome bias.

See also: How AI Can Vanquish Bias

Explainable AI

Because we can’t entirely rely on synthetic data, a better solution may be something called “explainable AI.” It is one of the most exciting topics in the world of machine learning right now.

Usually, when an algorithm is doing something for us, we can’t really see how it is working with the data. So can we trust the process fully?

Wouldn’t it be better if we understood what the model is doing? This is where explainable AI comes in. Not only do we get a prediction of what the outcome will be, but we also get an explanation of that prediction. With problems such as algorithmic bias, there is a need for transparency so we can see why we’re getting a specific outcome. 

Suppose a company builds a model that decides which applications warrant an in-person interview. The model is trained on prior hiring decisions. If, in the past, many women were rejected before the in-person interview, the model will most likely reject women in the future for that reason alone.

Explainable AI could help: a person who can check the reasons behind these decisions might spot and fix the bias.
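
As a minimal sketch of what that inspection can look like in code, using the open-source shap library on a purely hypothetical screening dataset (the feature meanings are invented), each prediction comes with per-feature contributions a reviewer can examine:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented screening data; the three columns stand in for features
# like years of experience, test score and employment gap.
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0).astype(int)  # imitates past accept/reject decisions

model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to per-feature
# contributions, letting a reviewer ask "why was this one rejected?"
explainer = shap.Explainer(model, X)
explanation = explainer(X[:5])
print(explanation[0])  # contributions for the first applicant
```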

Final Words

We need to remember that humans make these algorithms and that, unfortunately, our society is still battling issues such as racism. So, we humans must put a lot of effort into making these algorithms unbiased.

The good news is that algorithms and data are easier to change than people.

Rise of ‘Product-ism,’ Fall of ‘Project-ism’

When it comes to AI, machine learning and advanced analytics, there is one undeniable conclusion: You need to get there now. The biggest risk in AI today is not implementing AI.

Data can stream from devices, channels, exchanges and other points of origin (e.g., phones, drones, homes, vehicles, inspectors, adjusters, etc.) both continuously and on demand.

This makes AI more of a pipeline than a product. Data pours into the pipes and forms streams of information via data identification, transformation, verification and authentication and is combined with additional data to permit decisions across the insurance value chain.

Sometimes, a process can be fully digital and self-serviced. Sometimes, an AI assist happens. Other times, a human is brought into the loop, and often that human is also assisted by AI and analytics. AI also expedites new ways for customers to complete tasks remotely.
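
To make the pipeline shape concrete, here is a schematic sketch; the stage names loosely mirror the steps described above, while the record fields, thresholds and routing rule are purely illustrative assumptions:

```python
# Schematic only: each stage takes a claim record (a dict) and returns
# an enriched record; real systems would do far more at each step.
def identify(record):
    return {**record, "source": record.get("source", "unknown")}

def transform(record):
    return {**record, "normalized": True}

def verify(record):
    return {**record, "verified": record.get("claim_amount", 0) >= 0}

def enrich(record):
    return {**record, "region_risk": 0.12}  # joined from additional data

def decide(record):
    # Routing rule (illustrative): small, verified claims stay fully
    # digital; everything else goes to a human in the loop.
    if record["verified"] and record["claim_amount"] < 1_000:
        return "auto-approve"
    return "human review"

claim = {"claim_amount": 250, "source": "mobile app"}
for stage in (identify, transform, verify, enrich):
    claim = stage(claim)
print(decide(claim))  # auto-approve
```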

Most companies today implement AI solutions across use cases throughout their organizations with three strategies: buying before building, collaborating with vendors and customizing, and investing in internal AI development. Leading-edge companies have progressed from pilots and experimentation sandboxes all the way through the analytics operations pipeline journey — where data and AI operations engineers route data streams to fusion engines, then decision engines, then user endpoint actions in real time.

But many companies are struggling with “early days” issues: data governance, privacy, security, cloud management, upskilling, model risk management and AI operations lifecycle management. This is a natural consequence of viewing AI initiatives as small projects rather than as products requiring continuing maintenance and long-term investment.

Getting AI from the sandbox to production means raising the readiness of IT teams to provision, stream, protect and operate AI systems as they move from analytic project and proof of concept to product. Steady governance and a cultural maturity around data-driven decisions will help you become successful and remain successful. Sunsetting “project-ism” is the new call to action for AI, and it is essential to delivering exceptional experiences with data-driven decision making.

The Next Wave of Insurtech

Long before the COVID-19 pandemic, insurers were investing in digital transformation, spurred by the rise of startups. Those investments took on new urgency as the pandemic forced businesses across industries to move to digital operations to stay afloat. 

Over the long term, no technology will prove as vital to insurers’ agility and success as artificial intelligence, whose far-reaching impact will define the next wave of insurtech innovation.

Legacy players and nascent startups alike will leverage AI and machine learning to enhance customer service, speed claims processing and improve the accuracy of underwriting – enabling insurers to match customers to the right products, operate with greater efficiency and achieve better results.

Though insurance is often cast as slow to embrace technology and innovation, in a certain respect AI is very much within the industry’s wheelhouse. Since the first actuaries began their work in the 17th century, insurance has relied heavily on data – and as AI empowers insurers to do even more with vast swaths of data, the benefits will redound to providers and policyholders alike.

Bringing Customer Service to the Next Level

In today’s digital economy, personalization is all the rage. Customers crave tailored, relevant experiences, offers and promotions that reflect their unique backgrounds, needs and interests – and they increasingly expect businesses to deliver these experiences as a basic standard of service.

While personalization is often discussed in the context of sectors like e-commerce, the insurance industry is no exception to this trend. According to an Accenture survey, 80% of customers expect their insurance providers to customize offers, pricing and recommendations. 

Of course, delivering bespoke experiences requires an abundance of customer data – and customers are more than willing to provide it in exchange for personalized service; 77% told Accenture that they’d share their data to receive lower premiums, quicker claims settlement or better coverage recommendations. 

Because personalization can only deliver on its promise if it’s holistic and omnichannel, the most successful insurers will be those that don’t view personalized engagements as one-offs – a tailored email here, a promotion there – but that consistently provide personalization at every stage of the customer journey. 

What will that look like in practice? AI chatbots will become a lot more “chat” and a lot less “bot,” not only providing 24/7 customer service but also using cutting-edge methods like natural language processing (NLP) to better understand what customers actually need and to conduct more natural, intuitive conversations. Underwriting will become much more precise as machines crunch massive sets of data – reams of usage and behavioral data generated by customers and their IoT devices, as well as relevant geographic, historic and other information – to create customized policies that reflect a policyholder’s true level of risk. 
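
As a toy illustration of the “more chat, less bot” direction, here is a tiny intent classifier over invented policyholder utterances, standing in for the far richer NLP models production chatbots use:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training utterances and intent labels.
texts = [
    "I need to file a claim for my car",
    "my basement flooded last night",
    "how much is my premium this year",
    "can I get a quote for home insurance",
]
intents = ["claim", "claim", "billing", "quote"]

# Classify what a policyholder wants before routing the conversation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, intents)

print(model.predict(["someone hit my parked car"]))  # likely ['claim']
```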

See also: Insurtechs’ Role in Transformation

From Cumbersome to Swift

Harnessing the power of AI, insurers can also streamline claims processing as part of a comprehensive digital strategy. Forward-thinking providers will increasingly integrate automated customer service apps into their offerings. These apps will handle most policyholder interactions through voice and text, following self-learning scripts designed to interface with claims, fraud, medical service and policy systems.

As a McKinsey analysis noted, with automated claims processing, the turnaround time for settlement and claims resolution will start to be measured in minutes rather than days or weeks. Meanwhile, human claims management associates will be free to shift their focus to more complicated claims, where their insights, experience and expertise are truly needed. 

These transformative applications of AI will unlock revenue opportunities, improve risk management and help insurers deliver a new level of personalized customer service. But if AI will act as the great enabler, what will enable AI itself?

The answer lies in a robust digital core, which is vital to facilitating efficient business processes, maintaining resilience in an unpredictable world and supporting the rollout of new products and business offerings. Whether insurers manage to achieve that kind of digital agility will determine their ability to survive and thrive in a landscape that’s shifting faster than ever.