How to Deliver the ROI From AI

For insurance companies, there’s a constant influx of data from almost everywhere: customers, marketing teams, sales representatives, underwriting departments, HR and more. These massive amounts of data can be used to make your company better, or so you’ve been told. But harnessing business value from this data isn’t as easy as it might seem. It takes more than collecting data and building models for AI to help a business.

In the last few years, a technology has emerged that can harness AI across all departments of a business like never before, enabling massive, company-wide returns. However, the technology alone isn’t enough; there must be the right combination of technology, people and process.

Feature Stores for Machine Learning

Data scientists love to dive deep into different algorithm alternatives, but the most effective way to get better predictive signals is to get the right data. For example, in media personalization, companies often used the fact that a particular user visited a particular site (like a luxury shoe brand) as an important data point. But this is deceptive. Recency also matters. If a visit to a particular site has been within, say, the last 48 hours, you get significantly better conversion on ads. You have to get the right data points represented to get a model to perform!

Data points that inform models are known as features. These are usually transformed data attributes, which together form the feature vectors that are the input to machine learning algorithms. The process of turning raw data into features is called feature engineering, and is — in my opinion — the critical success factor for practical ML projects that deal with corporate structured data.
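To make the idea concrete, here is a minimal sketch (in Python, with invented column names and data) of the kind of feature engineering described above: turning a raw visit log into both a plain "visited the site" flag and the recency feature that actually drives conversion.

```python
from datetime import datetime, timedelta

def build_features(visit_log, now, window_hours=48):
    """Turn raw (user_id, site, timestamp) events into per-user features.

    Returns a dict mapping user_id -> feature dict with:
      - visited_site: 1 if the user ever visited the site
      - visited_recently: 1 if any visit was within `window_hours`
    """
    cutoff = now - timedelta(hours=window_hours)
    features = {}
    for user_id, site, ts in visit_log:
        f = features.setdefault(user_id, {"visited_site": 0, "visited_recently": 0})
        f["visited_site"] = 1
        if ts >= cutoff:
            f["visited_recently"] = 1
    return features

now = datetime(2021, 1, 10, 12, 0)
log = [
    ("u1", "luxury-shoes.example", datetime(2021, 1, 10, 9, 0)),  # 3 hours ago
    ("u2", "luxury-shoes.example", datetime(2021, 1, 2, 9, 0)),   # 8 days ago
]
print(build_features(log, now))
# u1 is a recent visitor; u2 visited, but outside the 48-hour window
```

Both users look identical on the naive "visited the site" feature; only the engineered recency feature separates them.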

Not only is feature engineering essential for model accuracy, it’s also incredibly time-intensive for data scientists. Data preparation takes 80% of data scientists’ time, which means they only have 20% left to actually build, test and implement models. This makes it incredibly difficult and expensive to build models at the volume that would be necessary to provide value for every department of an insurance company.

Technology leaders like Uber, Google and Airbnb have spent years and millions of dollars designing infrastructure that makes it possible to unleash the power of AI throughout a company. The solution they have all converged on is a feature store.

A feature store is a central repository that stores features, data lineage and metadata associated with all the machine learning models in a company. In essence, it is a single source of truth for all of the data science work within one organization. Being able to share and re-use features boosts data science productivity by cutting down duplicate work and making it easy for data engineers, data scientists and ML engineers to collaborate. Each machine learning model becomes cheaper and easier to produce. (If you want to learn more about why that is, there’s a more in-depth resource here.)
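As an illustration only (production feature stores are far more capable, handling online/offline serving, versioning and backfills), the core idea of a shared, named registry of feature definitions with lineage metadata can be sketched in a few lines of Python. All names here are hypothetical.

```python
class FeatureStore:
    """Toy in-memory feature store: a single source of truth for features."""

    def __init__(self):
        self._registry = {}

    def register(self, name, transform, owner, description):
        # Store the transformation plus ownership metadata so any team can reuse it.
        self._registry[name] = {
            "transform": transform,
            "owner": owner,
            "description": description,
        }

    def compute(self, name, raw_value):
        # Every model that needs this feature gets the exact same logic.
        return self._registry[name]["transform"](raw_value)

    def describe(self, name):
        meta = self._registry[name]
        return f"{name}: {meta['description']} (owner: {meta['owner']})"

store = FeatureStore()
store.register(
    "claim_amount_log_bucket",
    transform=lambda amount: min(int(amount).bit_length(), 20),
    owner="claims-data-team",
    description="Coarse magnitude bucket for claim amounts",
)
print(store.compute("claim_amount_log_bucket", 5000))  # 13
```

The point is the single definition: a second team that needs the same feature calls `compute` rather than re-implementing (and subtly diverging from) the transformation.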

See also: 6 Implications of Big Data for Insurance

Integrate Diverse Skill Sets in Data Science Teams

Even though feature stores are incredibly powerful tools, they are ultimately still tools, which means how they’re used will influence how helpful they are. Even with a feature store bridging the gaps inside a company, a “siloed” data science structure makes it hard to truly integrate AI into the enterprise.

Traditionally, the people who can manage large volumes of data and “do the math” of machine learning are sitting in their silos. They are away from the action — where the application interacts with customers, suppliers and employees. They are one step removed from the business. 

But the AI or data science team is not equipped to get the job done independently. They simply do not have enough knowledge about the business or the applications that will deploy the models to lead to production applications that deliver business outcomes. The secret sauce to a successful AI implementation is diversity. Data scientists need to work side by side with people who know the business and the application from inception to completion. 

Culture of ML Experimentation

Machine learning projects need to include more than just subject matter experts and application developers as part of the data science and data engineering teams. To do ML well, you have to create a culture of experimentation within your data science team. 

Markets change, bad actors innovate, the climate changes, the competitors change and so much more. What was the perfect feature vector on go-live might produce noise two months later, or worse — tomorrow. You must realize that an ML project will not thrive with a hands-off approach; it is a process of continuous experimentation and continuous improvement. So the secret is to keep the diverse team intact, evaluate the deployed models frequently and stay ready to experiment with new features.

See also: Insurance Outlook for 2021

Conclusion

The technologies and organizational silos of the past weren’t made to embed AI into the fabric of organizations, and as a result, companies that aren’t innovating aren’t benefiting from the full power of AI.
To inject AI throughout a company, the goal needs to be the continuous improvement of business outcomes.

You can achieve this by optimizing the two bottlenecks of the operational process. First, overcome the feature bottleneck of the ML lifecycle with a feature store. Second, overcome the organizational bottleneck of the technology lifecycle by distributing data experts in every department of your company. Your teams will finally be able to demonstrate a significant ROI from your AI.

Case Study on Using AI in Workers’ Comp

Australia is home to a well-developed workers’ compensation system. Each state determines the design of its scheme, with some being privately underwritten by insurers and others being state-run. Claims across territories vary by industry, injury and complexity. As such, insurers need systems that enable quality, efficient handling of claims, supporting injured parties’ recovery and getting them back to work as quickly as possible.

Approximately three years ago, QBE’s Australia Pacific division, like many other insurers, was running what we would describe as a “process-compliant business” when it came to workers’ comp claims. Leadership wanted to do more to eliminate manual processes and take advantage of claims adjusters’ expertise to get the best result for customers and their employees. They knew technology was the key.

Three Core Issues

QBE had long valued the principle of getting the right claim to the right adjuster based on areas of expertise. But to spot complexities early, claims teams engaged in what I’d describe as a manual triage system. Expert adjusters did a cursory look at claims as soon as they were lodged, to identify potential risks based on very simple criteria — in particular, was the employee missing work? Simply put, we needed a better way to get claims routed and assessed from the earliest stages.

Our leadership team also wanted to figure out how to lighten adjuster caseload. As is common across the industry, adjusters may handle as many as 70 to 80 claims at a time. With this volume, it was incredibly difficult to spot the more complex or problematic claims, the ones that require the most attention. QBE was seeking a tool that could surface this information quickly and easily.

Additionally, the team was committed to identifying a better way to conduct quality reviews. Instead of manually selecting which claims to examine, which is very time-consuming, we wanted to add artificial intelligence to the mix.

AI Intrigue

As QBE prepared to set its strategic initiatives for the next few years, data analytics was prioritized. With more detailed information, adjusters and leadership could make better decisions about how to route claims, what required attention and how to ensure efficient, positive resolution.

We considered building a solution in-house but quickly realized that it would take a considerable amount of time and staff resources to construct a system that mapped to our priorities. We started engaging with many of the big data and analytics consultancies, hopeful that they would be able to help. They didn’t fit the bill, either.

See also: COVID-19’s Impact on Delivery of Care

In the summer of 2017, I ran across an article about how CLARA Analytics applied machine learning to workers’ comp claims. The approach, which leveraged artificial intelligence (AI) to identify claim issues and keep them from escalating while helping to close simple claims faster, made sense. As I examined how the models worked and how the software visualizes workload allocation, I recognized that it was the way we wanted to run our business and that CLARA had a sizeable lead over what QBE could build internally.

Clear Benefits

Once we started to get past people’s reluctance to use AI, they began to understand how an AI system could make their jobs easier — the models not only saved countless hours of manual work, but their accuracy also made decision-making significantly easier.

The financial benefits of adopting the software have been significant. Initial reports estimate that the product integration will easily deliver a 5:1 return on investment, and that could prove conservative, given that the savings will extend across QBE’s entire workers’ comp portfolio.

QBE has been able to implement a more focused approach to quality assurance. Gone are the random selections of claims. Instead, we take the lead from this new system, which provides a much higher level of confidence that the review team is looking into the claims that need it most.

We believe that quality assurance shouldn’t be driven by art; it should be driven by analytics, which is exactly what we’ve been able to accomplish.

In addition to the new-found efficiencies and claim insights, we have enjoyed the competitive differentiation provided to our sales team. They love being able to showcase how QBE uses industry-leading technology to improve claims operations at multiple levels.

See also: An AI Road Map to the Future of Insurance

Continuing Collaboration

Our partnership has allowed us to enhance the software’s capabilities to create significant advancements for our industry. For example, several months ago, both QBE and CLARA started collecting perception data from each injured person’s claim, such as how they feel about their recovery. Today, we are able to collect and analyze that information at scale.

People have been talking about psychosocial flags for injury recovery for more than 20 years, and no one has solved the problem. But taking in extra data points and using them in a different way or thinking about a problem from another perspective has let us make better decisions about how to route claims, what required attention and how to ensure an efficient, positive resolution.

Best AI Tech for P&C Personal Lines

Artificial intelligence technologies are everywhere. The great leap forward in AI over the past decade has come along with an explosion of new tech companies, AI deployment across almost every industry sector and AI capabilities behind the scenes in billions of intelligent devices around the world. What does all of this mean for the personal lines insurance sector? SMA answers this question in a new research report, “AI in P&C Personal Lines: Insurer Progress, Plans, and Predictions.”

The first step toward answering this question is to understand that AI is a family of related technologies, each with its own potential uses and insurance implications. The key technologies relevant for P&C insurance are machine learning, computer vision, robotic process automation, user interaction technologies, natural language processing and voice technologies. It’s a challenge to sort through all these technologies, the insurtech and incumbent providers that offer AI-based solutions and where each insurer will benefit most from applying AI.

The overall value rankings indicate that user interaction technologies fueled by AI are at the top of the list for personal lines insurers. Every insurer has activity underway, mostly by leveraging chatbots for interactions with policyholders and agents or using machine learning for guided data collection during the application process. Insurers see high potential for transformation in policy servicing, billing and claims – areas where routine interactions can be automated.

Robotic process automation is in broad use across personal lines, although the RPA technology is viewed by many as more tactical. There is high value related to streamlining operations and reducing costs, but most wouldn’t put it in the innovative category.

Machine learning and computer vision have great potential for personal lines in both underwriting and claims. The combination of computer vision and ML technologies applied to aerial imagery is already becoming a common way to provide property characteristics and risk scores for underwriting. Likewise, images from satellites, fixed-wing aircraft and drones are frequently used for NATCAT situations. And AI technologies will be increasingly applied to these images for response planning.

There are many other examples. But for the purposes of this blog, the main question – which technologies are most valuable – has been answered. AI-based user interface (UI) technologies, machine learning (ML) and computer vision demonstrate the best combination of high value today and transformation potential for the long term.

But perhaps the more important question is not which technologies are valuable, but rather where AI technologies are most valuable in the enterprise. The short answer is that there are so many potential value levers and so many unique aspects to different business areas and lines of business that it is difficult to select just a couple of high-value areas. That said, it is relatively apparent that underwriting and claims both present major opportunities, and activities are already underway there. There are great possibilities for AI in inspections, property underwriting, triage, fraud, CAT management, automated damage assessment, predictive reserving and other specific areas.

See also: Stop Being Scared of Artificial Intelligence

There is no shortage of opportunities for AI in personal lines. Fortunately, there are increasing numbers of tech solutions in the market and growing expertise in the industry involving AI technologies and how to apply them. Ultimately, we expect to see a pervasive use of AI technologies throughout the insurance industry. Some will become table stakes. Others will define the winners in the new era of insurance.  

AI in Commercial Underwriting

Today’s underwriters have more variables to contend with, more submissions, more competition and more data of all kinds to deal with than ever before. That’s why more and more insurance firms are deploying AI in commercial underwriting.

Machine learning (ML) and AI are incredibly well suited for helping to deal with the masses of data that underwriters now face. These technologies are changing underwriters’ working lives for the better and delivering huge benefits to businesses and the insurance industry as a whole.

In this article, we’ll explore five key ways you can implement AI and ML in the underwriting process and the results they can achieve. Without further ado, let’s get started.

1.  Processing underwriting submissions

Although efforts have been made to streamline submission processing, many lines of business in the insurance industry still have to deal with large volumes of documents that need to be processed manually. Until now, that’s just been part of the job — and a time-consuming, laborious one.

New applications of AI in commercial underwriting can give great assistance in extracting information from PDFs, printed documents, emails and even handwritten documents, reducing the amount of work underwriters need to do by hand. Optical character recognition and natural language processing are now sophisticated enough to identify the required data in a document, extract it and even perform a degree of evaluation. These advances in text extraction and analysis are opening up efficiencies in underwriting processes, expediting workloads that had previously been a burden to insurance professionals. Time saved on submissions processing is time gained for more rewarding work that makes better use of underwriters’ skills and helps to develop the business.

2. Making risk appetite decisions

As you know, reviewing submissions for viability is another task that can take up a lot of an underwriter’s time. Analyzing the submission and all the related risk data, making the decision whether to underwrite it – it all takes time and effort. And it’s another area where you can deploy AI in commercial underwriting to achieve great results.

Machine learning can now offer underwriters valuable assistance in the decision-making process. Using data on previous applications that have been approved or rejected, these systems build an understanding of which are likely to be viable and which aren’t. The systems can automatically decline certain activities described in the application as free-form text, if deemed too risky or otherwise unviable. Using text classification, these activity descriptions can be automatically mapped onto their corresponding industry codes, based on a given standard. If an application is found to be viable according to the system’s judgment, it can also recommend the most appropriate product according to your historical data. Once again, this valuable assistance can be a real asset for time-pressed underwriters.
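As a toy illustration of the text-classification step (the industry codes, keywords and risk-appetite rules below are made up, and a production system would use a trained classifier rather than keyword overlap), mapping free-form activity descriptions onto industry codes and a decision might look like:

```python
# Hypothetical mapping from industry codes to representative keywords.
CODE_KEYWORDS = {
    "2361": {"residential", "home", "construction", "builder"},
    "7225": {"restaurant", "cafe", "food", "catering"},
    "4841": {"trucking", "freight", "haulage", "transport"},
}

DECLINED_CODES = {"4841"}  # e.g. risk appetite excludes long-haul trucking

def classify_activity(description):
    """Map a free-form activity description to the best-overlapping code."""
    words = set(description.lower().split())
    best_code, best_overlap = None, 0
    for code, keywords in CODE_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best_code, best_overlap = code, overlap
    return best_code

def triage(description):
    code = classify_activity(description)
    if code is None:
        return ("refer", None)    # no confident match: route to human review
    if code in DECLINED_CODES:
        return ("decline", code)  # outside risk appetite
    return ("accept", code)

print(triage("family restaurant and catering business"))  # ('accept', '7225')
print(triage("interstate freight and trucking company"))  # ('decline', '4841')
```

Note the "refer" path: when the system is not confident, the submission goes back to an underwriter rather than being auto-decided.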

3. Submission assignment and triage

Some underwriting submissions, in certain lines of business, require extra attention during processing. They need to be prioritized, but, unlike with other submissions, this can’t be done using simple, blanket rules such as their policy effective date. Underwriters need to look in greater depth to decide their priority.

Using AI in commercial underwriting can help here, too. Optimization and forecasting technologies can assist in assigning these submissions to the most appropriate underwriter. Predictive modeling can also rank submissions according to their estimated closing ratio or some other key performance indicator (KPI). For instance, AI could decide to rank one application highly because you’ve recently been successful at closing business with that broker. These innovations have a tangible impact on how well your business operates and your bottom line: Submissions are allocated more effectively, and your overall closing ratio improves.
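A minimal sketch of that ranking idea follows. The scoring weights and fields are invented for illustration; a real system would learn these weights from historical outcomes with a trained predictive model rather than hand-coding them.

```python
def score_submission(sub, broker_close_rates):
    """Estimate a priority score from a few signals (illustrative weights)."""
    score = broker_close_rates.get(sub["broker"], 0.1)  # prior close ratio
    if sub["premium"] > 100_000:
        score += 0.2  # larger accounts get a boost
    if sub["days_to_effective"] < 14:
        score += 0.3  # urgent effective dates get prioritized
    return score

def triage_queue(submissions, broker_close_rates):
    # Highest expected value first, so underwriters see the best leads early.
    return sorted(submissions,
                  key=lambda s: score_submission(s, broker_close_rates),
                  reverse=True)

rates = {"Acme Brokerage": 0.45, "Beta Partners": 0.15}
subs = [
    {"id": "S1", "broker": "Beta Partners", "premium": 50_000, "days_to_effective": 30},
    {"id": "S2", "broker": "Acme Brokerage", "premium": 150_000, "days_to_effective": 10},
]
print([s["id"] for s in triage_queue(subs, rates)])  # S2 ranks first
```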

See also: ‘3D Underwriting’ in Life Insurance

4. Evaluating risk profiles

To evaluate the risk involved in a submission, underwriters must often invest considerable time in research. They must research and weigh all kinds of information to properly evaluate these risk profiles. Sifting through the wealth of information available, in myriad formats, can be like searching for a needle in a haystack — until now.

Today’s intelligent tools can search through many types of structured (processed and labeled) data as well as raw, unstructured data and aggregate relevant information for underwriters to use. For instance, an underwriter may use this system to search through a database of property inspections, to compare similar cases of structural damage and their results. These systems also make it far easier to retrieve similar past applications to see patterns and learn from earlier experience. Now your business never has to make the same mistake twice.

As we said earlier, AI is the master of dealing with large volumes of complex data, so, when it comes to locating and surfacing valuable items of information like this, AI is in its element. The benefits for underwriters and businesses are huge here: They can be better informed and more confident in their risk evaluations.

5. Coverage recommendations

Toward the end of the underwriting submissions review process, it’s time to make a judgment: what coverages will be recommended? AI-powered systems are capable of assisting end-to-end, so they have much to offer at this point, too.

Recommender systems can help with coverage judgments. By analyzing previous applications, they can get a sense of what the appropriate coverages, with limits and deductibles, might be and offer suggestions the underwriters can use to make their final decision. On a business-wide scale, this means your product and coverage recommendations will be better aligned with clients’ needs and their risk profiles.

Ready to deploy AI in commercial underwriting?

All the use cases we’ve outlined here are available to businesses right now, so if you want to start deploying AI in the underwriting process, you can start obtaining the benefits without delay.

As the industry evolves in the coming years, we’re certain that AI will become an even more useful assistant to underwriters all over the world. And, as new applications of AI in commercial underwriting are developed, we look forward to telling you all about them.

This article was originally published here.

3 Big Opportunities From AI and ML

In 2020, we find ourselves living in a world that demands a real-time shopping experience. Brands like Amazon make this experience as easy as possible by providing the option to compare one product against another product(s). The comparisons include price, features and the length of time it will take for the product to arrive. Furthermore, we can see recommended products based on buying behavior patterns, as well as related products that can be purchased to maximize the overall value. Each of these factors weighs into how, when and from whom we purchase.

Behind the scenes of Amazon’s user experience are two key technologies driving innovation: artificial intelligence (AI) and machine learning (ML). These terms are not often tossed around when referring to the current group insurance shopping experience, although there is certainly much room for carriers to integrate these innovations to their benefit. The McKinsey Global Institute reports that up to 60% of insurance sales and distribution tasks could be automated, as well as up to 35% of underwriting tasks.

Herein lie three major machine learning opportunities to unlock a better user experience for all stakeholders in the purchasing process, from sales representatives and underwriters to brokers, employers and employees. 

1. Automating Broker Emails and Required Quoting Documents

Imagine if Amazon required you to email a request every time you wanted to purchase a product, without knowing when the product would arrive, how much it would cost or whether it would even be shipped at all until three to five days after sending the original email. 

In many cases, this is the experience today for brokers who email a request for proposal (RFP) to a group insurance carrier. And so we arrive at our first opportunity for machine learning: speeding up the quote turnaround time (TAT) by automating the setup of broker emails and documents required to quote. As we peel back the onion to see how most life and disability and worksite group carriers receive and process quote requests today, it is clear how manual the current process is. This process often entails inputting data twice: once in a CRM such as Salesforce, and again in a quoting and underwriting engine or a spreadsheet on macro steroids.

Much of this process can be automated by leveraging machine learning to train a model that runs through thousands of previous broker email RFPs to understand broker requests, the differences between brokers and what information is required to quote the desired products. Oftentimes, brokers do not provide all the information necessary for quoting, which today is handled by placing the group “on hold.” The RFP intake specialist then has to manually email the broker back and ask for the missing information to proceed with the quote request. Machine learning can help to quickly identify what is missing, and automatically reply to the broker requesting this information and drive to completion.
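The missing-information check at the end of that workflow is easy to picture. Here is a minimal sketch (the required field names and RFP structure are hypothetical; the ML part of a real system would also extract these fields from the email itself):

```python
REQUIRED_FIELDS = {"group_size", "effective_date", "sic_code", "census"}

def missing_fields(rfp):
    """Return which required quoting fields are absent or empty in an RFP."""
    return sorted(f for f in REQUIRED_FIELDS if not rfp.get(f))

def draft_followup(broker_name, missing):
    """Draft the automatic reply asking the broker for the missing items."""
    fields = ", ".join(missing)
    return (f"Hi {broker_name}, thanks for the RFP. To proceed with a quote, "
            f"could you please provide: {fields}?")

rfp = {"group_size": 120, "effective_date": "2021-03-01", "sic_code": ""}
gaps = missing_fields(rfp)
if gaps:
    print(draft_followup("Jordan", gaps))
```

Instead of the group sitting “on hold” until a specialist notices the gap, the follow-up goes out the moment the RFP arrives.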

See also: COVID: How Carriers Can Recover

2. Automating Plan Design(s) to Quote

Many times the RFP includes a current coverage contract or booklet that could be anywhere from 30 to 50 pages. This document contains all the clues as to which plan design should be quoted to compete with the carrier currently in force. The foundational plan design to quote starts with matching up the exact benefits for each product line and, you guessed it, going line-by-line through that 50-page contract booklet to manually hand-stitch a plan design to quote. As you can imagine, this is not the most efficient experience for the RFP intake specialist, nor for the broker, who ends up receiving a quote riddled with manual errors and plans that do not match the customer’s current coverage.

In this case, a machine learning model can be trained to extract all the plan design elements from any incoming file that contains current coverage details. This ML model would be able to decipher the current carrier’s format structures and benefit naming conventions, and subsequently translate them into the quoting carrier’s structure. Of course, there are instances in which a customer’s current plan design cannot be quoted, sold and administered. In this case, a machine learning model would be able to flag any benefits that can’t be translated and accounted for. To get the maximum value, this use case assumes an API integration with a quoting engine to automate plans to quote.

3. Analyzing Closed-Won and Closed-Lost Proposals 

At the moment, once a case has been either sold or lost, most carriers are not harnessing the true power of the resulting data (i.e., the insights and components required to make a winning proposal). Carriers tend to look more closely at closed-won proposals because they have to use this data to implement policies and sold rates. But even here, the data currently being captured and tracked leaves much room for improvement.

Machine learning and AI models can be used here to better analyze which RFPs are the most likely to win based on a variety of factors. For example, an ML model could track the current carriers and rates on incoming RFPs and gather won/lost data once the sale has closed. This data can be used to inform which future RFPs are most likely to win based on the customer’s current carrier.

On the flip side, closed-lost proposal data (that now typically ends up in an abyss far from any BI visualization tools) could be used to show key factors as to why the case was lost. A national life and disability carrier focused on the small group sector may have around 100,000 RFPs a year. If the close ratio is 9%, that means 91,000 proposals were lost. These thousands of proposals could be fed into a machine learning model to analyze their ingredients, in the hopes of adjusting the sales recipe to increase future close ratios.
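As a simple sketch of how that won/lost data could be put to work (the field names and carrier labels are hypothetical), here we compute a per-incumbent-carrier close ratio, one of the signals an ML model could use to prioritize future RFPs:

```python
from collections import defaultdict

def close_ratio_by_incumbent(proposals):
    """Aggregate won/lost outcomes by the customer's current (incumbent) carrier."""
    counts = defaultdict(lambda: {"won": 0, "total": 0})
    for p in proposals:
        c = counts[p["incumbent"]]
        c["total"] += 1
        if p["outcome"] == "won":
            c["won"] += 1
    return {k: v["won"] / v["total"] for k, v in counts.items()}

history = [
    {"incumbent": "Carrier A", "outcome": "won"},
    {"incumbent": "Carrier A", "outcome": "lost"},
    {"incumbent": "Carrier B", "outcome": "lost"},
    {"incumbent": "Carrier B", "outcome": "lost"},
    {"incumbent": "Carrier B", "outcome": "won"},
]
print(close_ratio_by_incumbent(history))
# Proposals against Carrier A close more often than those against Carrier B
```

A full model would combine many such signals (rates, industry, group size, broker), but even this single aggregate pulls closed-lost data out of the abyss and into the sales process.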

A More Profitable Future  

Opportunities for ML and AI implementation within the group industry are evident, and these use cases will ultimately enhance the user experience as well as help carriers service policies, manage billing, process claims and handle renewals. Some 46% of AI vendors in insurance offer solutions for claims, and 43% have solutions for underwriting; these solutions have been far more widely used within the home and auto industry than in the group insurance sector. One important part of this approach is to identify where the “lowest-hanging fruit” use cases exist, which can be implemented in a proof-of-concept fashion.

See also: How Machine Learning Halts Data Breaches

The implementations can either be achieved with internal teams or by working with insurtech partner solutions. The first and second ML opportunities outlined both exist within the RFP intake process, which can provide direct operating savings ROI, whereas the third may take longer to actualize as close ratios gradually increase. To move toward a more profitable future, it is essential that group carriers notice and take full advantage of the advancements being made in machine learning technology today.