
COVID-19 and Need for Analytical Insurers

While what we see as the fundamentals and benefits of becoming an “analytical insurer” haven’t changed, being one is even more important now because of COVID-19 and its far-reaching economic impacts.

Defining the “analytical insurer”

When talking about analytical insurers, we are first referring to companies that have embedded three key characteristics in their business: a reliance on data and an intolerance of anecdotes in making decisions; the effective compilation of data to present a single source of the facts; and the ability of all decision makers to access granular insight at the point of making a decision. From those foundations, some insurers are moving on to invest in areas that we group under three umbrella sets of capabilities:

  • Active portfolio management, and specifically scenario modeling
  • Intelligent intervention
  • Digitally enabled distribution

The incentives for pursuing these attributes nearly always boil down to a handful of drivers – greater agility, rapid speed to market and accuracy of decision making, all delivered at lower cost. These insurers are reducing the analyze-decide-deploy cycle of decision making from weeks and months to days, or in some cases hours – resulting in stronger market positioning, more competitive pricing, slicker operations, increased confidence, cost reductions and a much-improved ability to adapt to changing markets.

As more companies have been persuaded of the benefits and invested over recent years, competition has continued to fuel an analytics arms race. The exceptional economic and market circumstances that COVID-19 is creating seem likely only to raise the stakes, given the continuing impact on premiums, business mix, profitability, resources and working practices, not to mention customer experiences that may never fully revert to their pre-pandemic nature.

The COVID-19 effect: Consider the dilemma facing hospitality or commercial property insurers. An insurer’s hospitality clients are essentially economically inactive, with the prospect that some will never recover. At the other extreme, some manufacturing plants are working flat out in ways that were never anticipated, potentially raising the risk of things like electrical fires or accidents involving tired employees. Understanding the change in both exposure and underlying risk of a given situation is vital at both case and portfolio level. Being able to scenario model differing lockdown and economic outcomes is key to successfully navigating the post-COVID risk landscape.

That’s not to say that COVID-19 is a signal for kneejerk reactions from insurers. Importantly, responding to the short-term pressures and realities that the virus brings to insurers can be compatible with longer-term ambitions linked to agility and pace of operations. For example, enhancing understanding of your portfolio is going to be just as important to insurers’ longer-term fortunes as it is in the short term, and the same applies to most aspects of capitalizing on the opportunities to build from a stronger analytical base.

Here are a few thoughts on how stronger analytics can assist insurers through the COVID-19 crisis, but also create building blocks for longer-term business benefits:

Active portfolio management and scenario modeling 

Going back to our hospitality and manufacturing examples, the uncertainty of COVID-19 and the new normal it may create could decimate some portfolios and the basis on which they're priced.

More granular policy information makes ground-up scenario building possible, putting some meaningful number ranges on observed and anticipated trends, and teeing up a whole range of things, such as evaluating what portfolios will suffer most, or even disappear. 
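To make that concrete, here is a minimal sketch of what ground-up scenario building can look like on policy-level data. The portfolio records, sectors and scenario multipliers below are all hypothetical, invented purely for illustration:

```python
# Minimal, illustrative sketch of ground-up scenario modeling on
# policy-level data. All figures and multipliers are hypothetical.

portfolio = [
    {"sector": "hospitality",   "premium": 120_000, "expected_loss": 66_000},
    {"sector": "manufacturing", "premium": 250_000, "expected_loss": 140_000},
    {"sector": "retail",        "premium": 90_000,  "expected_loss": 50_000},
]

# Each scenario assigns per-sector multipliers to expected losses,
# reflecting assumed changes in activity and underlying risk.
scenarios = {
    "extended_lockdown": {"hospitality": 0.3, "manufacturing": 1.4, "retail": 0.7},
    "rapid_reopening":   {"hospitality": 1.1, "manufacturing": 1.1, "retail": 1.0},
}

def scenario_loss_ratio(portfolio, multipliers):
    """Recompute the portfolio loss ratio under one scenario's multipliers."""
    premium = sum(p["premium"] for p in portfolio)
    losses = sum(p["expected_loss"] * multipliers[p["sector"]] for p in portfolio)
    return losses / premium

for name, multipliers in scenarios.items():
    print(f"{name}: projected loss ratio {scenario_loss_ratio(portfolio, multipliers):.0%}")
```

Even a toy model like this makes visible which parts of the book a given lockdown path hits hardest; the real value comes from running the same logic at full policy granularity.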

See also: How Coronavirus Is Cutting Connections

The recent work we've been doing with the Lloyd's and London market on active portfolio management demonstrates, however, that this is anything but a COVID-19-specific issue; it is widely seen as critical to longer-term performance and profitability.

Equally, the COVID-19 outbreak has vividly highlighted the opportunity to derive benefits from modeling more widely – say, moving from claims-cost analysis to more sector-based analysis, using rich exposure data within pricing systems to look at what companies want and need to do with their portfolio mix.

The ability to rapidly test hypotheses, deliver against options, and then monitor and change tack if necessary has already become a backbone of dynamic pricing in personal lines. Real-time scenario modeling can be a similar enabler for underwriting, pricing and claims professionals in the commercial, life and health sectors.

Intelligent intervention

Whether it’s in underwriting or claims, the objective of intelligent intervention should be to deploy the right resources to the situation at hand. This could mean completely automating a process that is relatively straightforward or using experienced teams where complex judgment is needed. Whether adopting a low-touch, volume approach driven from portfolio data or making sure subject matter experts have the right insight available at the right time to make an informed decision, insurers’ data assets make this possible. 

The intelligence comes from deploying a more granular approach and, where appropriate, predictive models to support routing and evaluation decisions. Using large loss propensity models to optimize survey and risk appetite decisions and using conversion data insight to prioritize underwriting activity are simple examples of this. 

From an automation point of view, it could be about adding granularity to feed a company's appetite for automatic underwriting and claims handling. Some insurers use relatively simple decision rules, such as automating a risk if it has fewer than 10 employees or a claim if it is below a certain value. Adding additional decision layers (e.g., trade, geography, portfolio context, trust indices) refines the decision process, allows the safe expansion of automated approaches and lowers costs. At the same time, you get the most from your underwriters and claims experts by allowing them to use their expertise and add value in more complex, individual cases.
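As a sketch of what such layered rules can look like in practice (the thresholds, trades and trust-index cutoff below are invented for illustration, not any insurer's actual criteria):

```python
# Illustrative sketch of layered routing rules for underwriting automation.
# Every threshold and category here is an assumption, for example only.

def route_risk(risk):
    """Return 'automate' or 'refer' based on layered decision rules."""
    # Base rule: only small risks are candidates for automation.
    if risk["employees"] >= 10:
        return "refer"
    # Additional decision layers refine the base rule.
    if risk["trade"] in {"demolition", "waste_processing"}:  # high-hazard trades
        return "refer"
    if risk["flood_zone"]:                                   # geography layer
        return "refer"
    if risk["trust_index"] < 0.7:                            # data-quality layer
        return "refer"
    return "automate"

print(route_risk({"employees": 6, "trade": "bakery",
                  "flood_zone": False, "trust_index": 0.9}))  # -> automate
```

Each added layer narrows the automated footprint where the data is weak and widens it where the data is trustworthy, which is what makes the expansion safe.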

The ability to flex the mix between technology and human input is also highly desirable. For example, if a pandemic were to affect a significant proportion of the team, it would be possible to expand the automated or self-service footprint to bridge the gap. Such flexibility can also provide short- or long-term help in areas such as product simplification and cost management.

Digitally enabled distribution

One thing COVID-19 has done is shine a light on organizations that are better or worse at interacting digitally with concerned customers. In the process, digital capability has become a matter of reputation as well as a factor in the general cost of doing business and customer experience.

Yet the digital component is only the tip of the iceberg. Below it sit a lot of hidden but hard-working data assets, supporting capabilities such as products broken into components, the management of channel conflict and the active management of cross-subsidies, not to mention addressing the widespread challenges of integrating legacy platforms.

The benefits of getting this beneath-the-waterline digital infrastructure right are already considerable, and they are growing outside the personal lines market as Lloyd's creates its digital trading platform, as self-service claims operations make steady inroads, as initiatives get underway to let brokers simplify the binding process and as new digital distribution opportunities arise, perhaps where insurance is part of something else.

See also: COVID-19: Technology, Investment, Innovation

Building for the future

At present, it is hard to grasp the implications of the new normal, but foundational analytics capabilities should not only help insurers better navigate that uncertainty but also leave them better equipped for the longer-term fallout and continuing market transition. As part of an insurance future that will inevitably demand more operational flexibility and nimbleness, with digital platforms coming more to the fore, data and analytics and the wherewithal to use them effectively will mark out analytical insurers from the crowd.

Overcoming Human Biases via Data

Managing business risk is a tricky thing. If the risk appetite is too small, opportunity could be lost; if it's too large, profit and performance could suffer.

Companies that are not thinking about risk are at risk!

Making the move to proactive risk management requires a culture shift, but 65% of organizations say they're still operating with a “reactive” or “basic” risk management response. Mature companies often take a strategic and calculated approach to risk management. Considering that risk = probability of occurrence × severity of consequence, mathematical analysis can help organizations avoid preventable pitfalls. Risk modeling using advanced statistical techniques has developed to align theoretical risk with real-world events and provides C-suite decision makers with the quantifiable support needed to make data-informed decisions.
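As a worked example of that formula, applied to a single hypothetical event (both numbers are assumptions for illustration):

```python
# risk = probability of occurrence x severity of consequence,
# applied to one hypothetical event.

probability = 0.02   # assumed 2% annual chance the event occurs
severity = 500_000   # assumed loss in dollars if it does

expected_annual_loss = probability * severity
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $10,000
```

Quantifying each risk this way lets decision makers rank a frequent-but-cheap risk against a rare-but-severe one on the same scale.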

A Five-Step Approach to Data-Driven Risk Management

Where do smart companies start when they want to begin addressing risk? Data. 

To understand risk beyond “gut feelings” and anecdotal evidence, companies need to leverage the information that is available to them – especially in today’s data-saturated environment. These five steps can outline your path to data-informed risk management.

  • Step 1: Collect your data. Identifying the right data to inform your analysis is often the most difficult step, and it is critical. We all know that “data is out there,” but not all data is created equal. For best results, explore different dataset options, take the time to understand how the data was collected and then clean it to ensure any risk analysis is both relevant and actionable.
  • Step 2: Develop a risk model. Risk modeling allows teams to include contextually relevant predictors and relationships. If historical data exists for current risks, create an empirical model to articulate key predictors. If analysis focuses on emerging risks where no data exists, craft a theoretical risk model based on the relationships you do know.
  • Step 3: Explore differing scenarios. There are probably a few risk scenarios that keep you up at night. Use your model to understand the likelihood and loss of these potential events (a minimal sketch follows this list). Estimate losses for each scenario in a metric that’s meaningful to your audience. Money? Time? Human capital?
  • Step 4: Share your findings. Now it’s time to tell your story. This is where data geeks sometimes “lose their audience.” Your analysis is ineffective if decision makers do not understand the implications.  Share your findings in a way that is meaningful using relevant metrics, data visualization and scenario storytelling. In practice, this means avoiding abstract metrics in favor of direct impacts — such as potential revenue loss or downtime — and possibly using infographics to support cause and effect narratives. Connect the dots between risk and results with a relevant story that ends with actionable advice.
  • Step 5: Enable action. As Theodore Roosevelt once implied, sharing a problem without proposing a solution is called whining. Once you’ve presented your model and your findings, you will likely have an understanding of the leading risk factors. Let these factors inform your recommendations for risk mitigation. This will help decision makers prioritize their resources for maximum impact. 
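The sketch promised in Step 3: a minimal frequency/severity simulation that puts a loss range on a single scenario. The distribution choices (roughly three events per year, lognormal severities) and every parameter are assumptions for illustration:

```python
# Minimal Monte Carlo sketch for Steps 2-3: an assumed frequency/severity
# model that puts a number range on one risk scenario.

import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss():
    """One simulated year: a daily event chance (~3 events/year on average),
    with each event's cost drawn from an assumed lognormal distribution."""
    n_events = sum(1 for _ in range(365) if random.random() < 3 / 365)
    return sum(random.lognormvariate(10, 1) for _ in range(n_events))

losses = sorted(simulate_annual_loss() for _ in range(10_000))
mean_loss = sum(losses) / len(losses)
p95 = losses[int(0.95 * len(losses))]
print(f"Mean annual loss: ${mean_loss:,.0f}; 95th percentile: ${p95:,.0f}")
```

Expressing the output as a mean and a tail percentile in dollars keeps the read-out in a metric decision makers care about, which is exactly the point of Step 4.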

Sometimes, data isn’t enough

Not surprisingly, however, data isn’t always enough to instigate change. As anyone who’s listened to the news lately knows, data can be manipulated and interpreted in different ways. Sometimes, we see what we want to see – it’s in our psychology – and the C-suite is not immune to this. To be human is to be biased.

Therefore, communicating risk with data is a strong technique for neutralizing the effects of human biases, but one should be aware of common predispositions that often arise when people assess risk.  

See also: Claims and Effective Risk Management

To Be Human Is to Be Biased

The famous psychologist Daniel Kahneman highlighted the fallibility of human cognition in his work to discover inherent human biases. These biases evolved over millennia as coping mechanisms for the complex world around us, but today they sometimes impede our ability to reason. The challenge is that many of us are not aware of these biases and therefore unknowingly fall victim to their influence.

“We can be blind to the obvious, and we are also blind to our blindness.” – Daniel Kahneman 

There are a few important biases to be aware of when presenting your risk analysis and recommendations. 

  • Conservatism bias: People are comfortable with what they know and show a preference for existing information over new data. As a result, if new data emerges suggesting increased risk, an audience may resist this new information simply because it’s new.
  • The ostrich effect: No one likes bad news. When it comes to risk, people tend to ignore dangerous or negative information by “burying” their heads in the sand like an ostrich. But just ignoring the data doesn’t make the risk go away. A strong culture of risk management will help negate this effect. 
  • Survivorship bias: Biases can work toward unsupported risk tolerance, as well. With survivorship bias, people only focus on “surviving” events and ignore non-surviving events (or those events that did not actually occur). For instance, a company’s safety data may show a lack of head injuries (surviving event), and decision makers may believe there is no need for hard hats. 

Communicating risk with data is an excellent start toward shifting your work culture to one of predictive risk management, but we cannot forget the human element. As you share your models, data and findings, remember to address potential biases of your audience… even if your audience is unaware of their own human susceptibility!

Context Is Key to Unlocking LTC Data

Long-term care (LTC) insurance is no stranger to large amounts of data. However, in my 10-plus years in an LTC claims operations role, one piece of data has continued to surprise me by being shared without the proper context – claim terminations for people labeled “recovered.” Across the industry, this piece of data is used in actuarial assumptions and operational processes — but not just for claims where the insured has recovered.

Before I explain further, a little background:

Claims data is a crucial piece of the overall risk management puzzle, especially for LTC insurers. The reserves associated with future claims represent a huge portion of the liabilities insurers hold, so claim termination rates are closely watched.

Insurers generally have three main termination designations for closed claims:

1. Death

This one is pretty easy to understand; the insureds stopped receiving benefits because they died. This occurs 73% of the time, based on a recent study conducted by the American Association for Long-Term Care Insurance (AALTCI).

2. Exhaustion of Benefits

Again, another simple concept. The insureds ran out of benefits before they died. This occurs 14% of the time, according to the AALTCI study.

3. Recovery

Here is where we find the complexity. The very nature of the word implies the claimant in this category is now healthier and no longer needs to receive benefits. According to the same study, this occurs 14% of the time.

See also: Using Data to Improve Long-Term Care  

The problem with this category lies not with the study, which accurately reflects what insurers report, but rather with the context and consistency of how this data is classified. What’s suggested is not quite the reality. But it requires a little digging to understand what I mean.

Now for the Context

Insurers and the claim administration systems they use require their data be categorized into larger buckets. It’s much easier, after all, to analyze and predict variables when there are fewer varieties of those variables. Instead of having many claim termination reasons, let’s find a way to just have three. Sounds simpler, right?

Unfortunately, this approach changes the recovery designation into more of an “Other” category. Any claim that is closed where the insured isn’t deceased and still has benefits remaining ends up in this classification. Some examples:

Preservation of Benefits

Some insureds have limited benefits (and thus can run out of them). These claimants tend to be in their 60s, 70s and lower 80s. Because their benefits could eventually run out, they sometimes choose to stop receiving them now and save them for future needs.

Respite Care

Most policies allow for several weeks of respite care per year. This benefit is independent of the elimination period and allows families to open a claim for a short time while the primary caregiver takes a much-deserved break. Again, when these short claims close, they are coded as recovery.

Moving Abroad

Many policies do not cover care received outside the country. So, when insureds move overseas at the end of their life, the claims unfortunately must be closed, and their policy then lapses by their choice.

Spouse Retires/Family Member Becomes Caregiver

This one is close to the preservation-of-benefits status. Some policies exclude family members from providing the care. When the claim is initially filed, the spouse is still working or family members are unavailable to assist. If those circumstances change, the family may close the claim while a family member cares for the insured, saving the remaining benefits for later.

Lack of Contact

As odd as it sounds, sometimes claimants just stop sending in bills. The company attempts to contact them over several months, searches online databases for proof of their passing and tries every phone number and e-mail address it has in connection to the claimant. At a certain point, it has to stop trying and close the claim.

Unreported Death

Related to lack of contact are deaths that are not reported to the insurance company and don’t get picked up by the search techniques most insurers use. Even if the companies later find out that the insured passed away and close the policy as a death, they generally don’t go back and change the termination status of the claim, so it remains a recovery.

Less Than $100 Left on the Policy

This one adds a final bit of humor to the list. The benefits available on an LTC policy are often not used in the exact amounts intended, so the policy is not exactly exhausted by the final benefit payment. I have seen situations where the amount left on the policy is so small that the insured (or the family) doesn’t send an invoice to request the final amount.

All of these examples have something in common. The claimant didn’t die, and there were benefits remaining on the policies. So every one of these situations would be reported as a recovery.
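A toy classification function makes the problem plain: with only three buckets, any closure that is neither a death nor an exhaustion of benefits lands in “recovery,” whatever actually happened. The field names below are hypothetical:

```python
# Toy illustration of the coarse three-bucket coding described above.
# Field names are hypothetical.

def termination_bucket(closure):
    if closure["insured_deceased"]:
        return "death"
    if closure["benefits_remaining"] <= 0:
        return "exhaustion of benefits"
    return "recovery"  # the de facto "other" bucket

# A respite-care claim and a preservation-of-benefits claim both code
# as "recovery," though neither insured actually recovered.
for reason in ("respite care ended", "preserving benefits"):
    closure = {"insured_deceased": False, "benefits_remaining": 25_000}
    print(reason, "->", termination_bucket(closure))
```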

So what?

So what am I trying to say here? That all data is inaccurate? No, the data isn’t inaccurate; it just requires the proper context before it is used for analysis. Without that context, statistics could be used to suggest that, 14% of the time, an insured who qualifies for long-term care benefits will improve enough to regain independence and no longer require assistance.

See also: Time for a ‘Nudge’ on Long-Term Care  

The reality is much harder to know. While you would expect some recovery from acute conditions (think hip replacements), would it surprise you to know that as many as 25% of these recoveries are claims where the insured has been certified with cognitive impairment? Did those claimants really get better and no longer require care? Another 25% of the recoveries most likely fall into one of the categories above. That means about half of the reported recoveries aren’t really recoveries.

Recommendations:

  • Talk to your internal claims team to get their input. Involve them in the collection and analysis phases, not just at the read-out of the final product. By working together with some of the key claims experts, you will gain better context around the data.
  • Understand your internal processes and procedures. Learn the details of your company’s processes associated with opening, approving, paying and closing claims.
  • Be careful when using industry-wide data. Not every company’s processes are the same, and data elements may have different definitions. Only rely on and draw conclusions when you understand the contextual factors surrounding the data.

Turning Data Into Action

Over the past decade, insurers have focused heavily on improving the customer’s journey. This task can be particularly challenging because a customer’s engagement with an insurer could be as little as one annual wellness visit, with no other claims for the year.

In an effort to create engagement and build loyalty while working toward better health status, insurers have gamified biometric device interactions, launched semi-automated communications platforms and established group wellness challenges for employer groups and individual coverage plans.

But here’s the challenge: If the data gathered from these engagements and fed back to insurers is not clean, readable and available in the format and time frame in which it is needed, a carrier cannot optimize its use. If this challenge can be solved, high-quality data that does meet those parameters can be used for CRM modeling tools, experience and loyalty measurement systems, enhanced communications applications, cross-sell offers and lifetime customer value formulas.

So how does one begin to solve this challenge? The key lies in using information obtained from reputable sources to fill in some of the gaps in the data you are already gathering.

See also: How Agencies Can Use Data Far Better  

Here are some of the benefits of using third-party data to inform your analytics:

  1. You can enhance the bland data you already have. You could fill volumes with the amount of information you have about your customers’ basic demographics such as age, geography and household income. But what about their risk for certain health conditions and their history of disease? Including these details can support better communications, closer engagement and efficient transaction processing with care providers and administrative systems managers.
  2. You can improve both the quantity and quality of your data. Quality of data can make or break processing and downstream analytics. When you use a third party to obtain your data, you may experience a more reliable return on investment in your marketing and communications spend. You can also make more informed decisions when you are pricing the risk of catastrophic losses. High-quality data can mean the difference between automated workflow decision making or manual and costly processes. It does not have to be a lot of data — but it does have to be clean, understandable, reliable and available when needed.
  3. You can diversify ways of turning data into actionable insights. Information might be engineered or derived from big datasets that are curated in a way that a payer can ingest, making it useful for activities including workflow automation, risk management assessments, price modeling exercises, population health management or sales and marketing activities.

Of course, it’s important to be able to efficiently manage data from multiple sources. To do that, you need to create a master data management plan. Often, a centralized location for several datasets makes sense, although a connected, decentralized arrangement can work, as well. Establish a standard data dictionary within your company to ensure that your staff understands external data in the right way and can more precisely define even internal data. In other words, break down data silos and functional barriers that may be preventing a standard dictionary that all can leverage.
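For illustration, a shared data dictionary entry might look something like the sketch below; the field names, sources and refresh cadences are hypothetical:

```python
# Hypothetical sketch of standard data dictionary entries, giving every
# team one definition and provenance for each field, internal or external.

data_dictionary = {
    "member_risk_score": {
        "definition": "Modeled 12-month risk of a major health event",
        "type": "float", "range": (0.0, 1.0),
        "source": "third-party vendor feed",   # external dataset
        "refresh": "monthly",
        "owner": "analytics team",
    },
    "annual_wellness_visit": {
        "definition": "Member completed a wellness visit this plan year",
        "type": "bool",
        "source": "internal claims system",    # internal dataset
        "refresh": "daily",
        "owner": "claims operations",
    },
}

# Any team can check a field's meaning and provenance the same way:
print(data_dictionary["member_risk_score"]["source"])
```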

How can you determine whether you are getting the most out of your use of data? A three-step approach may be helpful:

  1. Evaluate the data you have and verify whether it is clean, reliable and accessible in the manner you need it.
  2. Identify the areas in which external data could complement your own and structure a data management approach for all of your data — both internal and external.
  3. Establish a cross-functional executive team that can prioritize where you need the data most, and start on one initiative now. If you are not doing something, your competitors most probably are.

See also: Role of Unstructured Data in AI  

Well-organized data can help you engage your current customers, attract new customers and ultimately improve your company’s bottom line. But a mass of data that is not optimized for your business needs may not help the organization meet its goals. When you focus on high-quality, reliable data, you can see tangible results as you put it to use in platforms all along the lifecycle of your business.

Understanding New Generations of Data

Effectively acquiring customers, offering personalized products and providing seamless service requires careful analysis of data from which insights can be drawn. Yet executives cite data quality (or the lack thereof) as the chief challenge to their effective use of analytics (Insurance Nexus’ Advanced Analytics and AI survey).

This may, in part, be due to the evolving nature of data and our understanding of how its changing qualities affect how we use it — as technology changes and different data sources emerge, the characteristics of data evolve.

More data is all well and good, but more isn’t simply…more. As new and more contextual streams of data have become available to insurance organizations, more robust and potent analytical insights can be drawn, carrying with them huge implications for insurance as a whole.

See also: Data, Analytics and the Next Generation of Underwriting  

Insurance Nexus spoke to three insurance data experts, Aviad Pinkovezky (head of product, Hippo Insurance), Jerry Gupta (director of group strategy, Swiss Re Management (US)) and Eugene Wen (vice president, group advanced analytics, Manulife), for their perspectives on what each generation of data means for the insurance organization of today, and how subsequent generations will affect the industry tomorrow.

See full whitepaper here.

While there is disagreement regarding which generational bucket data should fall into, current categorizations appear to be largely aligned. Internal, proprietary data is generally agreed to form first-generation data, with the second generation comprising telematics and tracking-device data. There is some contention over the categorization of third-party data, but these are largely academic distinctions.

Experts agree that we are witnessing the arrival of a new classification of data: third-generation. As Internet of Things (IoT) data becomes more commonplace, its incorporation with structured and unstructured data from social media, connected devices, web and mobile will constitute a potentially far more insightful kind of data.

While this is certainly on the horizon, and has been successfully deployed with vehicular telematics, using “IoT, including wearables, in the personal lines space [and elsewhere], is still not widely adopted,” says Jerry Gupta, senior vice president, digital catalyst, Swiss Re. Yet he is confident that third-generation data will “be the next wave of really big data that we will see. Wearables will have a particular relevance to life and health products, as one could collect a lot of health-related data.”

Download the full whitepaper to get more insights.

Despite this promise, there are significant roadblocks to effectively leveraging third-generation data. According to Aviad Pinkovezky, head of product at Hippo Insurance, the chief problem is one of vastly increased complexity: “This sort of data is created on demand and is based on the analysis of millions of different data points…algorithms aren’t just generating more data streams, they are taking new data, making decisions and applying them.” Clearly, this requires a change in how data is handled, stored and analyzed. Most significantly, third-generation data has the potential to change the nature of insurance.

See also: 10 Trends on Big Data, Advanced Analytics  

Given that data is no longer the limiting factor for insurance organizations, our research suggested five areas on which insurance carriers should focus to turn data into real-time, data-driven segmentation and personalization: cost, technical ability, compliance, legacy systems and strategic vision.

A challenge, certainly, but the potential rewards to both insurance carrier and insureds are hugely promising, especially the change in relationship between carrier and insured. The potential to not only predict, but mitigate, risk has huge implications for insurance.

Efficient, accurate and automated data gathering is a clear benefit for insurance carriers, and the potential to provide value-added services (by mitigating risk altogether) greatly enhances their role in the eyes of the customer. Measures that reduce risk to the insured increase trust and strengthen the bond between the carrier and the insured. Customers are less likely to view insurance as a service they hope to never use and more likely to see it as a valuable partner in keeping themselves secure, both materially and financially.

The whitepaper, “Building the Customer-Focused Carrier of the Future with Next-Generation Data,” was created in association with Insurance Nexus’ sixth annual Insurance AI and Analytics USA Summit, taking place May 2-3, 2019, at the Renaissance Downtown Hotel in Chicago. Expecting more than 450 senior attendees from across analytics and business leadership teams, the event will explore how insurance carriers can harness AI and advanced analytics to meet increasing customer demands, optimize operations and improve profitability. For more information, please visit the website.