
Top Problems That AI, ML Help Solve

The global life insurance and retirement industry is facing an inflection point due to the convergence of challenging economic, technological, competitive and societal headwinds. Product-driven business models of the past will not be sustainable because insurers cannot adapt quickly enough to changing customer needs. This challenge comes on top of mature markets, strict regulatory requirements, low interest rates and tight margins. The COVID-19 pandemic has made it even more urgent for life insurers to redefine their role, take bold measures and address these changes.

The good news is that many global insurance leaders are already making large investments in digitization, innovation and cultural change. Going digital has been a top priority, as it helps reduce cost and enhances customer experiences, leading to the increasing adoption of predictive analytics, artificial intelligence (AI) and automation in various business functions in the industry. According to McKinsey estimates, the potential total value of AI and analytics across the insurance vertical is approximately $1.1 trillion.

Soon, AI will be deeply embedded into the insurance value chain, providing unmatched power to insurers: automating manual processes in underwriting, eliminating errors and inefficiencies in claims processing and enabling predictive insights to deliver superior outcomes. Below are the top challenges that AI and machine learning (ML) will help solve in the insurance industry. 

1. Underwriting and Pricing — While pricing personal auto policies is mostly automated today, the underwriting process is still manual for commercial property. For commercial property insurance, the underwriter needs a lot of information, such as occupancy, data on adjacent buildings, loss estimates and typical hazards. Some of the data may be available online but may be outdated and might require onsite verification. This is why human judgment is critical. A PwC report on top insurance issues noted that carriers are devoting considerable attention to helping underwriters use models and AI-driven tools to supplement their knowledge. Underwriters are becoming increasingly comfortable marrying what they’ve learned from personal experience with insight from models to make the most informed decisions possible. Soon, underwriting will be fully automated, supported by machine learning models that ingest vast amounts of data through an ecosystem of vendors. 

2. Claims Processing — In the future, machine learning algorithms will manage claims routing, dramatically increasing efficiency and accuracy. According to a McKinsey report, claims for personal lines and small-business insurance will be fully automated, enabling carriers to achieve straight-through-processing rates of more than 90% and dramatically reducing processing times from days to hours or minutes. Unlike with the traditional practice of manual first notice of loss, the burden will no longer be on the customer to inform the insurance carrier about an event. The process will be automated, relying on IoT sensors and real-time monitoring to prevent incidents and to send notifications for critical events requiring immediate attention. An app on a smartphone will handle all interactions, with the capability to trigger claims automatically upon loss. Other technologies will support claims processing, such as natural language processing, deep learning and text analytics.

See also: Wake-Up Call on Ransomware

3. Fraud Detection — Insurance fraud can cost companies millions to billions of dollars, as thousands of claims are filed every day. Assigning insurance agents to investigate each case would be time-consuming and expensive. Using AI, insurers can evaluate millions of documents and data points in record time. They can cross-reference several databases and incorporate multiple external data sources, which would be impossible without automation. Anomaly detection models can identify deviations and flag cases for review. Leveraging learnings from previous fraud cases and using real-time data, AI and ML models can identify threat signals before they become a more substantial problem.
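As a toy illustration of the anomaly-detection idea (with made-up claim amounts, not any carrier's data or production model), even a simple statistical rule can surface claims that deviate sharply from the historical norm for human review:

```python
from statistics import mean, stdev

def flag_anomalous_claims(claims, threshold=3.0):
    """Flag claims whose amount lies more than `threshold` standard
    deviations from the historical mean -- a basic anomaly signal a
    review team could triage. Production models use many more signals."""
    amounts = [c["amount"] for c in claims]
    mu, sigma = mean(amounts), stdev(amounts)
    return [c for c in claims if sigma and abs(c["amount"] - mu) / sigma > threshold]

# Fifty routine claims plus one extreme outlier worth a second look.
claims = [{"id": i, "amount": 1_000 + i * 10} for i in range(50)]
claims.append({"id": 99, "amount": 250_000})
print([c["id"] for c in flag_anomalous_claims(claims)])  # [99]
```

Real fraud models combine many such signals (network links, text, history), but the flag-and-review workflow is the same.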

4. Other Use Cases — A common use case is using predictive analytics to estimate policy cancellations. Customer churn is one of the most problematic aspects of customer management for insurance companies. When high-value customers churn, insurers often must replace that existing business with new, costlier-to-acquire customers, which lowers profitability. Creating AI and ML models that can accurately forecast churn behavior can boost profitability and revenues.

As insurance carriers get better at leveraging data and implementing predictive analytics, the focus will shift from product-led to customer-centric models. The insurance industry’s adoption and investment in digital capabilities to unify data, advanced analytics and people will ultimately make the industry more agile, efficient and transparent. The winners that go above and beyond will start to offer personalized products based on individual customers’ unique needs and enhanced customer experience.

How to Deliver the ROI From AI

For insurance companies, there’s a constant influx of data from almost everywhere: customers, marketing teams, sales representatives, underwriting departments, HR and more. These massive amounts of data can be used to make your company better, or so you’ve been told. But harnessing business value from this data isn’t as easy as it might seem. It takes more than collecting data and building models for AI to help a business.

In the last few years, a technology has emerged that can harness AI across all departments of a business like never before, enabling massive, company-wide returns. However, the technology alone isn’t enough; there must be the right combination of technology, people and process.

Feature Stores for Machine Learning

Data scientists love to dive deep into different algorithm alternatives, but the most effective way to get better predictive signals is to get the right data. For example, in media personalization, companies often used the fact that a particular user visited a particular site (like a luxury shoe brand) as an important data point. But this is deceptive. Recency also matters. If a visit to a particular site has been within, say, the last 48 hours, you get significantly better conversion on ads. You have to get the right data points represented to get a model to perform!
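A minimal sketch of that recency idea (the site names and the 48-hour window are illustrative, not from any real campaign): the raw visit log stays the same, but the engineered feature asks *when* the visit happened, not just whether it happened.

```python
from datetime import datetime, timedelta

def visited_recently(visits, site, now, window_hours=48):
    """Raw signal: a list of (site, timestamp) visit events.
    Engineered feature: did the user visit `site` within the last
    `window_hours`? Recency often carries more predictive signal
    than the bare fact of a visit."""
    cutoff = now - timedelta(hours=window_hours)
    return any(s == site and t >= cutoff for s, t in visits)

now = datetime(2021, 6, 1, 12, 0)
visits = [
    ("luxury-shoes.example", datetime(2021, 5, 31, 9, 0)),  # ~27 hours ago
    ("news.example",         datetime(2021, 5, 1, 9, 0)),   # a month ago
]
print(visited_recently(visits, "luxury-shoes.example", now))  # True
print(visited_recently(visits, "news.example", now))          # False
```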

Data points that inform models are known as features. These are usually transformed data attributes, which together form the feature vectors that are the input to machine learning algorithms. The process of turning raw data into features is called feature engineering, and is — in my opinion — the critical success factor for practical ML projects that deal with corporate structured data.

Not only is feature engineering essential for model accuracy, it’s also incredibly time-intensive for data scientists. Data preparation is commonly estimated to consume 80% of data scientists’ time, leaving only 20% to actually build, test and implement models. This makes it incredibly difficult and expensive to build models at the volume that would be necessary to provide value for every department of an insurance company.

Technology leaders like Uber, Google and Airbnb have spent years and millions of dollars designing infrastructure that makes it possible to unleash the power of AI throughout a company. The solution they have all converged on is a feature store.

A feature store is a central repository that stores features, data lineage and metadata associated with all the machine learning models in a company. In essence, it is a single source of truth for all of the data science work within one organization. Being able to share and re-use features boosts data science productivity by cutting down duplicate work and making it easy for data engineers, data scientists and ML engineers to collaborate. Each machine learning model becomes cheaper and easier to produce.
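The core idea can be sketched in a few lines. This is a toy single-process registry, not a production feature store (systems like Feast or Tecton add storage backends, versioning and online/offline serving), but it shows the register-once, reuse-everywhere pattern:

```python
class FeatureStore:
    """Toy feature registry: each feature's computation is registered
    once, with metadata, and any team can assemble feature vectors
    from the shared definitions."""

    def __init__(self):
        self._features = {}  # name -> (compute_fn, metadata)

    def register(self, name, fn, **metadata):
        self._features[name] = (fn, metadata)

    def get_vector(self, entity, names):
        """Assemble a feature vector for one entity from registered features."""
        return [self._features[n][0](entity) for n in names]

store = FeatureStore()
store.register("age", lambda c: c["age"], owner="underwriting")
store.register("tenure_years", lambda c: c["tenure_years"], owner="retention")

customer = {"age": 42, "tenure_years": 7}
print(store.get_vector(customer, ["age", "tenure_years"]))  # [42, 7]
```

Because "age" is defined once, the underwriting and retention teams compute it identically instead of each writing (and debugging) their own version.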

See also: 6 Implications of Big Data for Insurance

Integrate Diverse Skill Sets in Data Science Teams

Even though feature stores are incredibly powerful tools, they are ultimately still tools, which means how they’re used will influence how helpful they are. Even with a feature store bridging the gaps inside a company, a “siloed” data science structure makes it hard to truly integrate AI into the enterprise.

Traditionally, the people who can manage large volumes of data and “do the math” of machine learning are sitting in their silos. They are away from the action — where the application interacts with customers, suppliers and employees. They are one step removed from the business. 

But the AI or data science team is not equipped to get the job done independently. They simply do not have enough knowledge about the business or the applications that will deploy the models to lead to production applications that deliver business outcomes. The secret sauce to a successful AI implementation is diversity. Data scientists need to work side by side with people who know the business and the application from inception to completion. 

Culture of ML Experimentation

Machine learning projects need to include more than just subject matter experts and application developers as part of the data science and data engineering teams. To do ML well, you have to create a culture of experimentation within your data science team. 

Markets change, bad actors innovate, the climate changes, the competitors change and so much more. What was the perfect feature vector on go-live might produce noise two months later, or worse — tomorrow. You must realize that an ML project will not thrive with a hands-off approach; it is a process of continuous experimentation and continuous improvement. So the secret is to keep the diverse team intact, frequently evaluating the deployed models, and able to experiment with new features.

See also: Insurance Outlook for 2021


The technologies and organizational silos of the past weren’t made to embed AI into the fabric of organizations, and as a result, companies that aren’t innovating aren’t benefiting from the full power of AI. To inject AI throughout a company, the goal needs to be the continuous improvement of business outcomes.

You can achieve this by optimizing the two bottlenecks of the operational process. First, overcome the feature bottleneck of the ML lifecycle with a feature store. Second, overcome the organizational bottleneck of the technology lifecycle by distributing data experts across every department of your company. Your teams will finally be able to demonstrate a significant ROI from your AI.

Case Study on Using AI in Workers’ Comp

Australia is home to a well-developed workers’ compensation system. Each state determines the design of its scheme, with some being privately underwritten by insurers and others being state-run. Claims across territories vary by industry, injury and complexity. As such, insurers need systems that enable quality, efficient handling of claims to support the recovery of injured parties and get them back to work as quickly as possible.

Approximately three years ago, QBE’s Australia Pacific division, like many other insurers, was running what we would describe as a “process-compliant business” when it came to workers’ comp claims. Leadership wanted to do more to eliminate manual processes and take advantage of claims adjusters’ expertise to get the best result for customers and their employees. They knew technology was the key.

Three Core Issues

QBE had long valued the principle of getting the right claim to the right adjuster based on areas of expertise. But to spot complexities early, claims teams engaged in what I refer to as our manual triage system. Expert adjusters did a cursory look at claims as soon as they were lodged, to identify potential risks based on very simple criteria — in particular, was the employee missing work? Simply put, we needed a better way to get claims routed and assessed from the earliest stages.

Our leadership team also wanted to figure out how to lighten adjuster caseload. As is common across the industry, adjusters may handle as many as 70 to 80 claims at a time. With this volume, it was incredibly difficult to spot the more complex or problematic claims, the ones that require the most attention. QBE was seeking a tool that could surface this information quickly and easily.

Additionally, the team was committed to identifying a better way to conduct quality reviews. Instead of manually selecting which claims to examine, which is very time-consuming, we wanted to add artificial intelligence to the mix.

AI Intrigue

As QBE prepared to set its strategic initiatives for the next few years, data analytics was prioritized. With more detailed information, adjusters and leadership could make better decisions about how to route claims, what required attention and how to ensure efficient, positive resolution.

We considered building a solution in-house but quickly realized that it would take a considerable amount of time and staff resources to construct a system that mapped to our priorities. We started engaging with many of the big data and analytics consultancies, hopeful that they would be able to help. They didn’t fit the bill, either.

See also: COVID-19’s Impact on Delivery of Care

In the summer of 2017, I ran across an article about how CLARA Analytics applied machine learning to workers’ comp claims. The approach, which leveraged artificial intelligence (AI) to identify claim issues and keep them from escalating while helping to close simple claims faster, made sense. As I examined how the models worked and how the software visualizes workload allocation, I recognized that it was the way we wanted to run our business and that CLARA had a sizeable lead over what QBE could build internally.

Clear Benefits

Once we started to get past people’s reluctance to use AI, they began to understand how an AI system could make their jobs easier — the models not only saved countless hours of manual work but their accuracy made decision-making significantly easier.

The financial benefits associated with adopting such software have been significant. Initial reports estimate that the product integration will easily deliver a 5:1 return on investment, and that could turn out to be conservative, given that the savings will extend across QBE’s entire workers’ comp portfolio.

QBE has been able to implement a more focused approach to quality assurance. Gone are the random selections of claims. Instead, we take the lead from this new system, which provides a much higher level of confidence that the review team is looking into the claims that need it most.

We believe that quality assurance shouldn’t be driven by art; it should be driven by analytics, which is exactly what we’ve been able to accomplish.

In addition to the new-found efficiencies and claim insights, we have enjoyed the competitive differentiation provided to our sales team. They love being able to showcase how QBE uses industry-leading technology to improve claims operations at multiple levels.

See also: An AI Road Map to the Future of Insurance

Continuing Collaboration

Our partnership has allowed us to enhance the software’s capabilities to create significant advancements for our industry. For example, several months ago, both QBE and CLARA started collecting perception data from each injured person’s claim, such as how they feel about their recovery. Today, we are able to collect and analyze that information at scale.

People have been talking about psychosocial flags for injury recovery for more than 20 years, and no one has solved the problem. But taking in extra data points and using them in a different way, or thinking about a problem from another perspective, has let us make better decisions about how to route claims, what requires attention and how to ensure an efficient, positive resolution.

Best AI Tech for P&C Personal Lines

Artificial intelligence technologies are everywhere. The great leap forward in AI over the past decade has come along with an explosion of new tech companies, AI deployment across almost every industry sector and AI capabilities behind the scenes in billions of intelligent devices around the world. What does all of this mean for the personal lines insurance sector? SMA answers this question in a new research report, “AI in P&C Personal Lines: Insurer Progress, Plans, and Predictions.”

The first step toward answering this question is to understand that AI is a family of related technologies, each with its own potential uses and insurance implications. The key technologies relevant for P&C insurance are machine learning, computer vision, robotic process automation, user interaction technologies, natural language processing and voice technologies. It’s a challenge to sort through all these technologies, the insurtech and incumbent providers that offer AI-based solutions and where each insurer will benefit most from applying AI.

The overall value rankings indicate that user interaction technologies fueled by AI are at the top of the list for personal lines insurers. Every insurer has activity underway, mostly by leveraging chatbots for interactions with policyholders and agents or using machine learning for guided data collection during the application process. Insurers see high potential for transformation in policy servicing, billing and claims – areas where routine interactions can be automated.

Robotic process automation is in broad use across personal lines, although the RPA technology is viewed by many as more tactical. There is high value related to streamlining operations and reducing costs, but most wouldn’t put it in the innovative category.

Machine learning and computer vision have great potential for personal lines in both underwriting and claims. The combination of computer vision and ML technologies applied to aerial imagery is already becoming a common way to provide property characteristics and risk scores for underwriting. Likewise, images from satellites, fixed-wing aircraft and drones are frequently used for natural catastrophe (NATCAT) situations. And AI technologies will be increasingly applied to these images for response planning.

There are many other examples. But for the purposes of this blog, the main question – which technologies are most valuable – has been answered. AI-based user interface (UI) technologies, machine learning (ML) and computer vision demonstrate the best combination of high value today and transformation potential for the long term.

But perhaps the more important question is not which technologies are valuable, but rather where AI technologies are most valuable in the enterprise. The short answer is that there are so many potential value levers and so many unique aspects to different business areas and lines of business that it is difficult to select just a couple of high-value areas. That said, it is relatively apparent that underwriting and claims both present major opportunities, and activities are already underway there. There are great possibilities for AI in inspections, property underwriting, triage, fraud, CAT management, automated damage assessment, predictive reserving and other specific areas.

See also: Stop Being Scared of Artificial Intelligence

There is no shortage of opportunities for AI in personal lines. Fortunately, there are increasing numbers of tech solutions in the market and growing expertise in the industry involving AI technologies and how to apply them. Ultimately, we expect to see a pervasive use of AI technologies throughout the insurance industry. Some will become table stakes. Others will define the winners in the new era of insurance.  

AI in Commercial Underwriting

Today’s underwriters have more variables to contend with, more submissions, more competition and more data of all kinds to deal with than ever before. That’s why more and more insurance firms are deploying AI in commercial underwriting.

Machine learning (ML) and AI are incredibly well suited for helping to deal with the masses of data that underwriters now face. These technologies are changing underwriters’ working lives for the better and delivering huge benefits to businesses and the insurance industry as a whole.

In this article, we’ll explore five key ways you can implement AI and ML in the underwriting process and the results they can achieve. Without further ado, let’s get started.

1.  Processing underwriting submissions

Although efforts have been made to streamline submission processing, many lines of business in the insurance industry still have to deal with large volumes of documents that need to be processed manually. Until now, that’s just been part of the job — and a time-consuming, laborious one.

New applications of AI in commercial underwriting can give great assistance in extracting information from PDFs, printed documents, emails and even handwritten documents, reducing the amount of work underwriters need to do by hand. Optical character recognition and natural language processing are now sophisticated enough to identify the required data in a document, extract it and even perform a degree of evaluation. These advances in text extraction and analysis are opening up efficiencies in underwriting processes, expediting workloads that had previously been a burden to insurance professionals. Time saved on submissions processing is time gained for more rewarding work that makes better use of underwriters’ skills and helps to develop the business.
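To make the extract-and-structure step concrete, here is a deliberately simple sketch (the field names, patterns and sample email are invented for illustration). Production systems use OCR plus trained NLP models rather than hand-written patterns, but the output shape is similar:

```python
import re

def extract_submission_fields(text):
    """Pull a few structured fields out of free-text submission content.
    A regex stand-in for the OCR/NLP extraction step described above."""
    patterns = {
        "insured":   r"Insured:\s*(.+)",
        "revenue":   r"Annual revenue:\s*\$?([\d,]+)",
        "employees": r"Employees:\s*(\d+)",
    }
    out = {}
    for field, pat in patterns.items():
        m = re.search(pat, text)
        if m:
            out[field] = m.group(1).strip()
    return out

email = """Insured: Acme Widgets Pty Ltd
Annual revenue: $12,500,000
Employees: 85"""
print(extract_submission_fields(email))
```

The value is that downstream systems receive clean, structured fields instead of a document an underwriter must read and rekey.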

2. Making risk appetite decisions

As you know, reviewing submissions for viability is another task that can take up a lot of an underwriter’s time. Analyzing the submission and all the related risk data, making the decision whether to underwrite it – it all takes time and effort. And it’s another area where you can deploy AI in commercial underwriting to achieve great results.

Machine learning can now offer underwriters valuable assistance in the decision-making process. Using data on previous applications that have been approved or rejected, these systems build an understanding of which are likely to be viable and which aren’t. The systems can automatically decline certain activities described in the application as free-form text, if deemed too risky or otherwise unviable. Using text classification, these activity descriptions can be automatically mapped onto their corresponding industry codes, based on a given standard. If an application is found to be viable according to the system’s judgment, it can also recommend the most appropriate product according to your historical data. Once again, this valuable assistance can be a real asset for time-pressed underwriters.
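A minimal sketch of that mapping step (the keyword table and codes below are illustrative, not a real classification standard): a production system would train a text classifier on labeled history, but the idea of mapping a free-form activity description to a standard code is the same.

```python
def classify_activity(description, keyword_codes):
    """Map a free-text activity description to an industry code by
    keyword lookup -- a toy stand-in for a trained text classifier."""
    text = description.lower()
    for keyword, code in keyword_codes.items():
        if keyword in text:
            return code
    return None  # no match: route to a human underwriter

# Hypothetical keyword -> industry-code table (codes are illustrative)
keyword_codes = {
    "roofing":  "2381",
    "bakery":   "3118",
    "software": "5415",
}
print(classify_activity("Commercial roofing and gutter repair", keyword_codes))  # 2381
```

Once an application maps to a code, rules or models keyed to that code can decide whether the activity falls within appetite.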

3. Submission assignment and triage

Some underwriting submissions, in certain lines of business, require extra attention during processing. They need to be prioritized, but, unlike with other submissions, this can’t be done using simple, blanket rules such as their policy effective date. Underwriters need to look in greater depth to decide their priority.

Using AI in commercial underwriting can help here, too. Optimization and forecasting technologies can assist in assigning these submissions to the most appropriate underwriter. Predictive modeling can also rank submissions according to their estimated closing ratio or some other key performance indicator (KPI). For instance, AI could decide to rank one application highly because you’ve recently been successful at closing business with that broker. These innovations have a tangible impact on how well your business operates and your bottom line: Submissions are allocated more effectively, and your overall closing ratio improves.
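The triage step above reduces to scoring and sorting. In this sketch, the scoring function and its weights are invented stand-ins for a trained model's predicted closing ratio:

```python
def triage(submissions, score_fn):
    """Rank submissions so the highest predicted-value work surfaces
    first. `score_fn` stands in for a trained model's output."""
    return sorted(submissions, key=score_fn, reverse=True)

def score(sub):
    # Hypothetical hand-set weights: a real model would learn these
    # from historical win/loss data.
    return 0.7 * sub["broker_close_rate"] + 0.3 * min(sub["premium"] / 100_000, 1.0)

subs = [
    {"id": "A", "broker_close_rate": 0.20, "premium": 50_000},
    {"id": "B", "broker_close_rate": 0.65, "premium": 30_000},
    {"id": "C", "broker_close_rate": 0.40, "premium": 120_000},
]
print([s["id"] for s in triage(subs, score)])  # ['C', 'B', 'A']
```

Submission C ranks first despite a middling broker close rate because its premium signal is strong, which is exactly the kind of trade-off a blanket rule on effective dates cannot make.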

See also: ‘3D Underwriting’ in Life Insurance

4. Evaluating risk profiles

To evaluate the risk involved in a submission, underwriters must often invest considerable time in research. They must research and weigh all kinds of information to properly evaluate these risk profiles. Sifting through the wealth of information available, in myriad formats, can be like searching for a needle in a haystack — until now.

Today’s intelligent tools can search through many types of structured (processed and labeled) data as well as raw, unstructured data and aggregate relevant information for underwriters to use. For instance, an underwriter may use this system to search through a database of property inspections, to compare similar cases of structural damage and their results. These systems also make it far easier to retrieve similar past applications to see patterns and learn from earlier experience. Now your business never has to make the same mistake twice.
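Retrieving similar past applications can be sketched with a simple set-overlap measure (the property attributes below are invented examples; production systems typically use learned embeddings rather than raw attribute sets):

```python
def jaccard(a, b):
    """Overlap between two sets of application attributes (0 to 1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar(query, history, k=2):
    """Retrieve the k past applications whose attribute sets best
    match the query -- a toy stand-in for embedding-based retrieval."""
    return sorted(history, key=lambda h: jaccard(query, h["attrs"]), reverse=True)[:k]

history = [
    {"id": 1, "attrs": {"warehouse", "sprinklered", "brick"}},
    {"id": 2, "attrs": {"office", "highrise", "sprinklered"}},
    {"id": 3, "attrs": {"warehouse", "wood-frame"}},
]
query = {"warehouse", "sprinklered", "wood-frame"}
print([h["id"] for h in most_similar(query, history)])  # [3, 1]
```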

As we said earlier, AI is the master of dealing with large volumes of complex data, so, when it comes to locating and surfacing valuable items of information like this, AI is in its element. The benefits for underwriters and businesses are huge here: They can be better informed and more confident in their risk evaluations.

5. Coverage recommendations

Toward the end of the underwriting submissions review process, it’s time to make a judgment: What coverages will be recommended? AI-powered systems are capable of assisting end-to-end, so they have much to offer at this point, too.

Recommender systems can help with coverage judgments. By analyzing previous applications, they can get a sense of what the appropriate coverages, with limits and deductibles, might be and offer suggestions the underwriters can use to make their final decision. On a business-wide scale, this means your product and coverage recommendations will be better aligned with clients’ needs and their risk profiles.
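A minimal version of that recommender idea (the coverage names and policy history are invented for illustration): count which coverages historically appear together, then suggest the strongest co-occurring coverage the underwriter hasn't selected yet.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(past_policies):
    """Count how often pairs of coverages appear together on
    historical policies."""
    counts = Counter()
    for coverages in past_policies:
        for a, b in combinations(sorted(coverages), 2):
            counts[(a, b)] += 1
    return counts

def recommend(selected, counts, n=1):
    """Suggest the coverages that most often co-occur with the
    underwriter's current selection -- a minimal co-occurrence
    recommender, not a production system."""
    scores = Counter()
    for (a, b), c in counts.items():
        if a in selected and b not in selected:
            scores[b] += c
        elif b in selected and a not in selected:
            scores[a] += c
    return [cov for cov, _ in scores.most_common(n)]

past = [
    {"property", "liability", "business-interruption"},
    {"property", "liability"},
    {"property", "liability"},
    {"property", "business-interruption"},
]
print(recommend({"property"}, build_cooccurrence(past)))  # ['liability']
```

The suggestion is only a starting point; as the article notes, the underwriter still makes the final coverage, limit and deductible decision.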

Ready to deploy AI in commercial underwriting?

All the use cases we’ve outlined here are available to businesses right now, so if you want to start deploying AI in the underwriting process, you can start obtaining the benefits without delay.

As the industry evolves in the coming years, we’re certain that AI will become an even more useful assistant to underwriters all over the world. And, as new applications of AI in commercial underwriting are developed, we look forward to telling you all about them.
