
Next-Gen Property Risk Data and Analytics

Most property risk models rely heavily on ZIP code. Yet, technology and data exist today to evaluate more than 1,000 risk data points for every single property in the U.S.


The future landscape of the P&C insurance market is being largely determined by the technology decisions companies make today. Is your business keeping pace with the rapid advances in property risk data and analytics?

The digitization of core processes, the accelerating migration to the cloud and the explosion of insurtech companies have set the foundation for a new generation of property risk data and analytics. The accessibility and quality of property data have soared, even as that data has become drastically more cost-effective. Cloud storage and readily available APIs for connecting to data sources mean this wealth of information can be delivered instantly to underwriters and put to work.

Still, many insurers are operating in the past. To illustrate, consider that 30% of legacy systems lack data on the location of a property’s nearest fire station or fire hydrant. Yet that data is readily available and has a significant impact on the estimated extent of potential fire damage to a property. Or consider that lightning risk is left out of most property risk evaluations, yet the data for that $1 billion claims category is also readily available.

Most current property risk models rely heavily on data and evaluations based on a property’s ZIP code, a practice that dates back to the 1980s. Yet, technology and data exist today to evaluate more than 1,000 risk data points for every single property in the U.S. And these geospatial hazard ratings are much more comprehensive and precise than the current ZIP code and census block-based practices.
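To make the contrast concrete, here is a minimal sketch of the difference between a ZIP-level rating and a property-level geospatial lookup. The field names, values and thresholds are invented for illustration and are not drawn from any particular vendor's API.

```python
# Illustration only: ZIP-level vs. property-level risk features.
# Field names, values and thresholds are hypothetical.

zip_level = {
    "zip": "95959",
    "wildfire_rating": "F",  # one rating shared by every property in the ZIP
}

property_level = {
    "address": "123 Example Ln, Nevada City, CA 95959",
    "wildfire_rating": "C",               # parcel-specific geospatial score
    "distance_to_fire_station_mi": 1.2,
    "distance_to_fire_hydrant_ft": 450,
    "lightning_risk": "elevated",
    # ...in practice, hundreds more data points per property
}

def fire_protection_exposure(prop: dict) -> str:
    """Toy rule: classify fire-protection exposure from proximity data."""
    near_station = prop["distance_to_fire_station_mi"] <= 5
    near_hydrant = prop["distance_to_fire_hydrant_ft"] <= 1000
    return "protected" if near_station and near_hydrant else "unprotected"

print(fire_protection_exposure(property_level))  # -> protected
```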

In a recent study, the global consulting firm McKinsey reported that best-in-class insurers are “putting distance between themselves and competitors” by applying advanced data and analytics in underwriting. They cite these insurers reducing loss ratios by three to five points, increasing new business premiums by 10% to 15%, and improving retention by 5% to 10%.

See also: Biggest Operational Risks of 2022

As the McKinsey report states, “external data is the fuel that can ignite the value of analytics.” By leveraging next-generation data and analytics, insurers can gain deeper insight into risks across the insurance lifecycle, from risk selection to pricing: 

Risk Selection - By combining internal data with the right mix of external data and integrating it seamlessly into the risk-selection process, insurers can better screen applicants. Insurers can classify applicants using risk models built on their underwriting principles to determine whether to cover or renew a client – and what premium to offer. With next-generation data and analytics, an insurer can select good risks and avoid the risks it does not want to underwrite. Of course, the idea is not to eliminate losses completely but to eliminate highly identifiable and highly probable losses.
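As a rough sketch of the kind of rules-based screen described above, the snippet below classifies an applicant as accept, refer or decline. The weights and thresholds are hypothetical and stand in for an insurer's own underwriting principles.

```python
# Illustrative risk-selection screen built from internal and external data.
# Weights and thresholds are hypothetical.

def classify_applicant(risk: dict) -> str:
    """Return an underwriting decision: accept, refer or decline."""
    score = (
        3 * risk["wildfire_score"]   # external geospatial hazard data
        + 2 * risk["flood_score"]    # external flood data
        + 4 * risk["prior_claims"]   # internal loss history
    )
    if score >= 15:
        return "decline"  # a highly identifiable, highly probable loss
    if score >= 8:
        return "refer"    # route to an underwriter for review
    return "accept"

applicant = {"wildfire_score": 1, "flood_score": 1, "prior_claims": 0}
print(classify_applicant(applicant))  # -> accept
```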

Prefill - The interview and screening process is another point in the customer journey that the right next-generation data can make significantly more efficient and effective. With traditional data systems, screening and interviewing can be cumbersome, but with next-generation data integration, you can match and prefill data for prospects and customers quickly and inexpensively. Minimizing the number of questions you need to ask a potential customer or client can dramatically speed and smooth the screening and sales process.
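The prefill step can be sketched as a simple match against an external property record: anything the data source already knows is filled in, and the applicant is asked only for what remains. The data source and field names here are hypothetical.

```python
# Illustrative prefill: match a prospect's address to an external property
# record and pre-populate application fields. Data and fields are hypothetical.

EXTERNAL_PROPERTY_DATA = {
    "123 example ln, nevada city, ca": {
        "year_built": 1987,
        "square_feet": 2150,
        "roof_type": "composition shingle",
        "construction": "frame",
    }
}

def prefill_application(address: str, answers: dict) -> dict:
    """Merge external property data with applicant-supplied answers."""
    record = EXTERNAL_PROPERTY_DATA.get(address.strip().lower(), {})
    return {**record, **answers}  # applicant answers override prefilled values

app = prefill_application("123 Example Ln, Nevada City, CA", {"occupancy": "owner"})
print(app)  # year_built, square_feet, roof_type, construction plus occupancy
```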

Pricing - With quick access to and integration of the right internal data, cross-analyzed with a broad array of external data, insurers can more effectively make their case to regulators on pricing – and can price policies to reflect the actual inherent risk. With most current systems, the owner of a home in an urban area of a ZIP code with an F wildfire rating likely pays the same premium as the owner of a home in that ZIP code that is genuinely in peril because it sits near forests or wildland with dry brush. A customer with a significantly more fire-prone property should pay more than one with a low-risk property.
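A minimal sketch of what parcel-level pricing could look like: each property's own wildfire grade maps to a rate factor, rather than every home in a ZIP code inheriting the same grade. The grades, factors and base rate are invented for the example.

```python
# Illustrative parcel-level pricing. Grades, factors and base rate are invented.

WILDFIRE_FACTORS = {"A": 0.90, "B": 1.00, "C": 1.15, "D": 1.40, "F": 1.90}

def annual_premium(base_rate: float, wildfire_grade: str) -> float:
    """Apply a parcel-specific wildfire factor to a base rate."""
    return round(base_rate * WILDFIRE_FACTORS[wildfire_grade], 2)

# Two homes in the same ZIP code, very different parcel-level exposure:
print(annual_premium(1200.0, "B"))  # 1200.0 -- urban home, little brush nearby
print(annual_premium(1200.0, "F"))  # 2280.0 -- home abutting dry wildland
```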

Targeted Marketing - Marketing remains a largely untouched, greenfield area where insurers have yet to apply advanced data and analytics. Leading in the application of next-generation data in marketing can make a real competitive difference, particularly for small and medium-sized insurers. Marketing should be seen as the starting point of risk selection: If you are smarter and more targeted about whom you market to, you will produce stronger, less risky and more profitable leads.

There is tremendous value in closely integrating greater property risk data and analytics into the underwriting process to drive greater insights. And the time for updating your property risk data and analytics is now. Being ahead of the curve can be a real competitive advantage.


John Siegman


John Siegman is the co-founder of HazardHub, a property risk data company that was acquired by Guidewire in mid-2021. He is now a senior executive at Guidewire, helping to lead the direction of the HazardHub solution and guiding P&C insurance clients in integrating data into critical processes in innovative ways.

An On-Ramp to Digital Auto Claims

No one is happy with the current, cumbersome approach to auto claims -- and a key technology has finally arrived that will digitize and speed the whole process. 


The insurance industry has long sought a solution to address the lengthy and costly delay between the time of accidents and the notification of a claim, also known as FNOL (first notice of loss). 

Successful integration of crash detection into claims automation efforts has gone unrealized because of difficult technical challenges, lack of innovation and existing solutions that hinder adoption. Fortunately, there are technologies being introduced into the market that are game-changing, because they offer a combination of low-cost software-as-a-service (SaaS) models, smartphone scalability, artificial intelligence, intelligent crash data and reliable crash detection at all speeds, including slow ones. In fact, new crash detection technologies could more aptly be referred to as FNOI (first notice of incident), in which a crash alert is made on a near immediate basis. 

The auto insurance claims industry has long been attempting to transform the existing claims processing model, which is a highly complex, manual, paper-based and labor-intensive process that is not consistently meeting expectations. No one is happy with this model, from the boardroom to the policyholder. However, transforming to a modernized, digital and self-service claim model has proven challenging for a host of reasons. 

In recent years, the industry has had mixed success with attempts to digitize claims via app or online options for starting a claim, using photo estimating and handling e-payments. Barriers remain to advancing process transformation and customer adoption, especially around FNOL. There are numerous process steps to be addressed, including the initial detection of an incident.

A Digital Claims On-Ramp

CCC and Sfara recently announced promising technology and expertise for creating FNOL solutions. With the digitization of the front end of the auto claim process, insurers can now extend and accelerate claims automation across the entire claim process, from crash to payment and subrogation, finally realizing the goal of straight-through processing.

There are significant benefits to straight-through processing, for both policyholders and the companies that support them, be those automobile OEMs or auto insurers. The desire to see these benefits realized has driven innovation and will accelerate adoption. 

U.S. auto insurers could realize $15 billion in savings through improvements in loss costs and loss-adjustment expenses. That figure is based on an estimated $1,000 in savings through automation for each of the 15 million insured auto accident claims that occur annually.

These savings arise from reductions in loss adjustment expense (LAE), including claims intake costs, attorney involvement and injury causation investigation. Improvements in loss-cost management include more accurate damage assessment, fraud detection and deterrence and higher direct repair program use. 

The data captured in a mobile crash detection program, when integrated with insurer and partner claims management platforms, will dramatically improve and compress the auto claims process cycle and costs in a number of ways (see the sketch after this list):

  • auto damage assessment (estimating)
  • repair parts identification and procurement
  • salvage management of total loss vehicles
  • personal injury assessment and causation
  • liability determinations
  • fraud detection
  • audit rules
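The sketch below illustrates how fields from a crash-detection payload might be routed to the workflow steps above. The payload fields, thresholds and routing rules are hypothetical and simply show the shape of the integration.

```python
# Hypothetical routing of a crash-detection payload into the claim workflows
# listed above. Field names, thresholds and rules are illustrative only.

crash_event = {
    "speed_mph": 18,
    "point_of_impact": "rear bumper",
    "direction_of_force": "rear",
    "airbag_deployed": False,
    "timestamp": "2022-10-03T08:14:22Z",
    "location": (41.8781, -87.6298),
}

def route_claim(event: dict) -> list:
    """Decide which downstream workflows the detected incident should trigger."""
    steps = ["auto damage assessment (estimating)",
             "repair parts identification and procurement"]
    if event["speed_mph"] >= 35 or event["airbag_deployed"]:
        steps += ["personal injury assessment and causation",
                  "salvage management review"]
    if event["direction_of_force"] == "rear":
        steps.append("liability determination: likely rear-end collision")
    steps += ["fraud detection", "audit rules"]
    return steps

for step in route_claim(crash_event):
    print(step)
```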

See also: First Steps to Digital Payments Processes

Barriers to UBI Adoption

After approximately 20 years of telematics in the market, only 16% (albeit a growing share) of U.S. private passenger auto drivers have participated in some form of telematics auto insurance program, ranging from pay-as-you-drive (PAYD), also known as pay-by-the-mile, to usage-based insurance (UBI), which monitors driving behavior over a period of time to determine auto insurance premiums.

Contributing to this slow adoption are factors that we think ultimately disqualify UBI as a possibility for claims automation:

UBI Technologies Are Not Sophisticated Enough

Many UBI programs have begun to deploy, or at least study, crash detection but have run into inherent limitations, including challenges with the necessary telematics hardware (dongles, tags and sensors, Bluetooth pairing) and an equally troublesome weakness in false positive suppression. False positive crash detection rates of 30% are widely considered standard, especially at the lower speeds where the majority of accidents occur. That issue renders UBI technology essentially useless for claims processing.

They Aren't Designed as Claims Support

UBI programs were designed to be more of a growth strategy -- "switch and save" -- and less to support claims processes. As a result, their supporting technologies didn’t require precision or much sophistication, both of which are required for automated claims processing.

For instance, in assessing driving behaviors for UBI programs, the data is averaged over trips and time. As a result, catching even “most” trips is not necessary. So, it also wasn’t necessary to have exacting "trip start" technologies. But if your goal is to capture driving data leading up to a low-speed collision, your “trip start” tech must be highly sophisticated and fine-tuned. 

Additional Hardware Is Required 

Anyone running a UBI program that required the connection of dongles, tags or other aftermarket devices quickly learned just how difficult it was to get customers to use them, no matter how easy they were to install. Even ardent hand-raisers for UBI programs offering savings and discounts just didn’t install the devices, no matter how many times you reminded them. The result: high friction and low activation. Plus, costly communications programs to urge customers to install the devices. 

UBI programs entail substantial complexity: parts supply, manufacturing, transport, inventory, customer shipping and returns. Even though all of this can be managed, the complexity can damage the customer experience and increase churn. Getting customers to install equipment continues to be a hurdle, and the equipment cost -- ranging from $12 to $60 per driver -- needs to be amortized.

See also: Past, Present, Future of Telematics, UBI

Smartphone Technologies Offer Best Market Penetration

The essential differences between UBI and other crash detection solutions lie in the underlying technology and its intended purpose.

Smartphone-enabled crash detection will not only simplify the claims process for drivers following an auto accident and provide greater safety and convenience to those involved but will become the "on-ramp" that accelerates claims digitization by insurers. It will make digital FNOL a reality and speed the arrival of the long-sought vision of straight-through processing, starting with the ability for insurers to contact their policyholders following a detected incident. Today, customers can wait up to five days before making a claim. By reversing this sequence of events, insurers can reach and serve their customers quickly and efficiently in their time of need.

As evidenced by the success of the App Store and Google Play, the smartphone app is the ultimate distribution mechanism, with over 140 billion app downloads in 2021 in the U.S. alone. Innovation starts with new smartphones and ever-improving sensors, constantly refreshed as end users upgrade their models. Smartphones have become incredibly powerful sensing and processing devices, built to support 3D gaming and countless other uses. The introduction of mobile-based AI, capable of operating directly on these phones, is accelerating innovation at an incredible pace.

Using Intelligent Collision Data to Drive Claims

The CCC Intelligent Solutions partnership with Sfara offers insurers real-time access to mobile crash detection data designed to accelerate and enhance claims outcomes beginning at FNOL. CCC's cloud will ingest multi-sensor accident intelligence data from Sfara, including speed, point of impact, direction of force and other pertinent high-frequency data. Participating customers can then leverage that data through CCC's existing workflows, powering auto physical damage claims and supporting the customer's ability to estimate damage severity, schedule timely repairs and beyond, creating more straight-through experiences for drivers and insurers. In addition, accident data could also inform the potential likelihood of fraud and aid crash reconstruction.

This announcement reinforces the apparent technical strides made in smartphone accident detection viability, so we took a closer look. Sfara’s website and writings indicate the use of a multi-layer AI approach, first with “ultra-low speed impact detection,” which is then complemented by a second layer of false positive suppression technology called “ESP” (extended sensor processing). Low speed and consequently lower severity/less damage is a distinguishing factor here, because the majority of auto collisions are indeed at lower speeds and are generally below the average collision repair cost of $3,800.
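To show the structure (not the vendor's actual algorithm), here is a toy two-stage detector: a deliberately sensitive first pass that flags any impact-like spike, followed by a suppression pass that rejects events that do not look like a vehicle collision, such as a dropped phone. All thresholds and features are invented.

```python
# Toy two-stage crash detection: sensitive first-pass impact detection plus
# false-positive suppression. Not any vendor's algorithm; values are invented.

def first_pass_impact(accel_g: float) -> bool:
    """Flag any sharp acceleration spike, even from low-speed bumps."""
    return accel_g > 1.5

def looks_like_false_positive(speed_before_mph: float, speed_after_mph: float) -> bool:
    """Reject spikes without a real drop in vehicle speed (e.g., a phone drop)."""
    return speed_before_mph < 5 or (speed_before_mph - speed_after_mph) < 3

def crash_detected(accel_g: float, speed_before: float, speed_after: float) -> bool:
    if not first_pass_impact(accel_g):
        return False
    return not looks_like_false_positive(speed_before, speed_after)

print(crash_detected(2.4, speed_before=12, speed_after=0))  # low-speed crash -> True
print(crash_detected(3.0, speed_before=0, speed_after=0))   # phone drop      -> False
```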

Although comparatively higher speed/severity accidents are cited in the marketplace as common crash detection use cases, it is actually more difficult to detect moderate to minor incidents; false positive crash alerts have been unacceptably high. Insurers would rather not contact their customers only to find the crash alert was a “false alarm,” phone drop or some other scenario unrelated to driving a car. Thus, crash detection has been limited to those more severe accidents. 

Low speed/high frequency claims are of great interest to insurers when it comes to serving customers and wringing out inefficiencies while modernizing the claim process. In other words, reliable low-speed crash notification has been the missing ingredient in promoting digital FNOL adoption, which, in turn, will further enable digital self-services throughout the entire process. Additionally, gaining more immediate telematics information on the cause of an accident can streamline subsequent claim investigation phases. The ability to immediately validate the essentials, such as time, date, location, speed, direction and vehicle information, amplifies the value of crash detection.

We fully expect smartphone-only crash detection availability and accuracy to enable widespread adoption of automatic accident notification and claim activation by both insurers and drivers in the near term. Essentially, this is FNOL in reverse; carriers can confidently contact the policyholder to offer support and open a claim.

This low-cost capability coupled with ubiquitous smartphones will result in a breakthrough in the auto claims process and related services. The benefits of reliable real-time crash detection and digital FNOL are just too compelling to ignore, and carriers that do not offer it will be at a competitive disadvantage.

Let the next step toward end-to-end claims automation begin.


Stephen Applebaum


Stephen Applebaum, managing partner, Insurance Solutions Group, is a subject matter expert and thought leader providing consulting, advisory, research and strategic M&A services to participants across the entire North American property/casualty insurance ecosystem.


Alan Demers


Alan Demers is founder of InsurTech Consulting, with 30 years of P&C insurance claims experience, providing consultative services focused on innovating claims.

The Death of 'The Robinsons'?

"The Robinsons." who bundle home and car insurance, represent the crown jewel in customer lifetime value. But the segment is very much at risk.


The P&C insurance industry has always been a little obsessed with using personas as a proxy for desirable customer segments. None of these has received more attention than the Robinsons. A profile of households that own a home and a car and bundle the insurance on both with the same insurer, the Robinsons represent the crown jewel in customer lifetime value. Accordingly, they have become the objet du désir for carriers of every type.

Progressive disclosed as much in its Q2 earnings report, calling out the Robinsons segment by name as a key growth target. However, despite the interest in courting the Robinsons and, more generally, in getting high-value customers into bundled insurance relationships, the segment is very much at risk right now. Thanks to a combination of rising premiums and increased adoption of usage-based insurance (UBI) programs, this prized segment is headed toward a divorce.

Bundlers at Risk

In fact, according to the J.D. Power quarterly Insurance Loyalty Indicator and Shopping Trends (LIST) report, quotes on new auto insurance policies have held steady at 12% during the past two quarters as customers continue shopping for alternatives to steadily rising premiums. On top of that, we’ve found that customer frustration with those rising auto premiums is driving significant declines in customer satisfaction with home/auto bundles, with 31% of bundlers now saying that they “definitely will” switch their home insurer if they switch their auto insurer after an insurer-initiated auto premium increase.

Put simply, the data is telling us that insurance bundlers—without question the most valuable segment of P&C insurance customers—are starting to fall out of the Robinson family tree as they search for cheaper alternatives. And that’s a problem.

Insurance executives have been working for decades to find ways to deepen customer relationships at the household level, and the current customer reaction to rising prices flies in the face of that strategic work to expand customer lifetime value.

Companies that have invested in analytics to understand insurance decision-making for every household situation in near-real time have committed to reacting faster to changes in their most valuable customer households – retaining those customers and sustaining a strategic profitability advantage.

The most advanced companies in this space have been pushing to make every interaction a moment of both trust and advice where they actively listen for changes in a household to deepen the relationship with the customer. Sometimes that advice is “you need lower insurance costs” – enter UBI, where your amount of coverage/insurance can stay the same or go up, while you pay less due to driving less, driving better or both.

See also: From Risk Transfer to Risk Prevention

Keeping the Robinsons in the Family Tree

If insurers are interested in stemming the tide of defections among the Robinsons segment, they need to get serious about tracking every aspect of the Robinson customer experience and act quickly to close any gaps where they are losing them to cost pressures and other variables. 

In a word, they need to get serious about personalization. The ability to make great advisory recommendations often comes at key transition points in the life stages across insurance neighborhoods. What’s new is the focus on the continuum of household living situations expressed in terms of the people living in spaces, with cars and drivers. 

This vaults the simple view of four basic household types to upward of 48 combinations that describe an "insurance neighborhood" in every realistic scenario—from a family with a basic home-and-car bundle to a multi-generational household with cars, boats and home policies—each of which can be seen uniquely. Even someone who is currently without an insurable asset can count as part of a family tree. There are plenty of reasons for being between homes or not needing a car that don't mean you are no longer a member of the family, a loyal customer or someone who may become one.

This new expression of personalization extends the industry's reach toward its obsession with potential super-bundlers and lifetime loyalists. Yet, now more than ever, product decisions made out of necessity may be putting strategic retention goals at risk. The focus needs to shift from short-term, product-centric gains to long-term, lifetime value.

Increasingly, the path to delivering on that goal is rooted in real-time, granular customer data that lets insurers understand the detailed behaviors occurring within their most desirable customer segments and intervene before it's too late. Retaining customers when they are getting a different car, buying a different house or driving a different amount shows personalization that counts.

An Interview With Stephen Crewdson

Stephen Crewdson, senior director of global business intelligence, insurance, at J.D. Power, says its latest survey of agents and brokers finds improvement in satisfaction with how carriers are treating them, especially in commercial lines.


ITL

When you take a historical view, you see that a lot of the initial enthusiasm about the insurtech movement had to do with disintermediating agents and brokers. The idea was, let's go straight to the customer and cut a bunch of expense. But at InsureTech Connect in Las Vegas in early September, carriers and insurtechs were falling all over themselves to talk about all the things they were doing to court agents and brokers. Your survey, which found agents increasingly satisfied with carriers, seems to me to support the idea that the thinking has shifted throughout the industry. Is that what you see, too?

Stephen Crewdson:

I think I would agree with that. Certainly, we're seeing carriers recognize the importance of the independent agency channel. That importance is not diminishing. If anything, it may be increasing in some areas of business.

We also survey consumers. And we've seen that the optimal experience for a customer is what we call an omni-channel experience. If I need to, I can reach my agent. If I deal with a call center, the person there is equipped and empowered to handle my issue. If I need to do something very simple, like making a payment or confirming that a payment was received, technology can handle that. I don’t need to wait on hold while the agent is doing something else to just have a 20-second transaction.

Carriers are also seeing growth through the agency channel. So, I think the independent agency channel is still very strong and isn't about to go anywhere.

ITL

When you talk to agents and brokers, do they mention anything in particular that they think the carriers are doing better?

Crewdson:

In commercial lines, we saw an increase in agents' satisfaction with carriers for five out of the six factors we track. Satisfaction with the quoting process is up. So is satisfaction with product offerings, customer service, support communication and commission. The only factor where we didn’t see improvement was in claims, which makes sense given all the bottlenecks in supply chains. Think about used car availability, new car pricing, all the things on the auto side that go into a claim that are extending the timeline for getting them settled.

Agents are basically telling us about the problems we’re already hearing about from customers.

In personal lines, the only area where there was an increase in satisfaction was in commission. Everything else was flat or down a bit.

Larger agencies tend to be much more satisfied than smaller ones on all six factors.

ITL

That makes sense to me, given the amount of technology that's being injected into the sales process. Bigger agencies would seem to be better able to receive that kind of technology to improve connections.

Crewdson:

There's also a personal element. More agents are saying they had an in-person interaction with a carrier this year – nearly one-in-three, where the highest number we’d ever seen before was 23%. There seems to have been a pent-up demand, and that face-to-face interaction does happen more often with the larger agencies, obviously, because there's more business being placed.

ITL

What do agents want to see now from carriers?

Crewdson:

The top thing is ease of navigation. A lot of carriers are going through platform upgrades, and it can be difficult for agents to keep up with all the changes that are happening. Even with a steady platform, the leading ask that agents have is easier navigation in the technology.

They also want more clarity around the information that's being provided to them, and they want the ability to customize it—maybe I can tag some information so it's more readily accessible to me, and I don't need this other information over here that isn't relevant to what I'm selling.

ITL

Any other themes that we should explore?

Crewdson:

We've found that there’s a distinct pattern related to tenure with a business – not how long you’ve been an agent, but how long you’ve been at an agency or working with a carrier. Agents with less than two years of tenure or more than 10 are much less satisfied than those in that sweet spot between two and 10 years.

ITL

Interesting. When you’re new, you’re maybe floundering, and after 10 years you’re maybe jaded. What can carriers and agencies do to increase satisfaction—and, I imagine, productivity—for those outside the sweet spot?

Crewdson:

One is to make sure support material on the website or dashboard is easy to find—that reduces what you called the floundering. Another is to have at least one training session a year—everybody wants help figuring out how to build their book of business.

For longer-tenured agents, make sure you're flexible with designing and onboarding policies. Also, offer non-commission incentives such as trips, which are very popular. Provide a lot of communication with the agents and support for specific targeted industries for those agents writing commercial lines.

If you do all those things, agents register even more satisfaction than they do in that two- to 10-year sweet spot.

ITL

Sounds like great advice to me. Thanks, Steve.

And, for those who want more information, here is the full study.


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

What Happened to Insurtech?

The first round of insurtech appears to be over. The next round, in its formative stages, promises massive gains in efficiency, real-time quoting and binding, and more. 


I’ll always remember the first question I got on my first book tour. This was in 1993, after I’d published “Big Blues: The Unmaking of IBM,” drawing on my seven years of covering the company for the Wall Street Journal and chronicling how the most admired and most profitable company in the world had lost its way. I started the tour with a radio host for morning drive time who asked, in that highly caffeinated way, “Okay, Paul, what the HELL happened?”

That’s the question that has been rattling around about insurtech this year, as the big publicly traded companies have seen valuations crash and as funding has become iffy. So, to try to get a handle on what’s going on, I chatted recently with Chris Cheatham, now at Bold Penguin, whom I’ve known for years and who has lived most of the insurtech experience over the past decade.

His take is basically that the first wave of insurtech is done. It lasted from about 2011 to 2021, and “it was pretty good. We got some stuff accomplished.” He thinks a second wave will last from roughly 2022 to 2032 and is way more optimistic about where insurtech goes now. “I think you’re going to see a hockey stick-like improvement in terms of efficiency driven into the insurance industry,” he said. “I think the AI stuff is really going to take hold.”

He says underwriting will get way faster. (Don't get him started on a submission he reviewed recently for a cyber policy that was so long he couldn't view the email string in Gmail and had to download it via a data service.) He thinks MGAs will drive all sorts of innovation, that the messiness in today's approach to lead generation will get cleaned up… and much more. But don't use the word "disruption" around him. He's allergic to it.

Excerpts from our intriguing interview follow.  

First, some background on Chris. He founded a startup called RiskGenius, which helped carriers and brokers analyze and compare policies and coverage language. That small-startup experience became a big-insurtech experience when Bold Penguin bought his company two years ago and he became a senior executive there. (His current title is "product evangelist.") He then went through an insurtech-incumbent merger, when American Family bought Bold Penguin last year. So, he pretty much covers the waterfront. 

Now, his observations:

On Disruption in the First Round of Insurtech

“There was no disruption. The biggest disruption was just allowing other parties to enter the process, the APIs [application programming interfaces] that let banks and other non-insurance partners pass leads into companies.”

On Speeding Up Underwriting

“Let's imagine all the friction being removed for an underwriter. They receive a submission, which goes through an extraction tool. The data then is hit against a third-party database to confirm it, flag issues in real time and present it all to the underwriter immediately.

“I think underwriting will speed up a lot. You won't have these giant email threads, where people are just trying to figure out what's in the submission documents. I will never forget the time somebody forwarded me a cyber submission packet that was basically just an email thread so long I couldn't even access it in Gmail. It was the most confusing thing I've ever seen, just because people had been going back and forth via email for so long.”

On Real-Time Quoting

“It can’t be that far away – 10 years maybe – that you’ll type in the [commercial] risk you’re trying to insure, and you’ll get quotes in real time.”

Currently, he says, the Bold Penguin platform will prompt you with options as you type in what you're trying to find insurance for—say, a roofer. “Let's say I get three options. If I click on one, immediately I see the carriers that will give me a quote. I can then dig in, maybe specifying a roofer for manufacturing buildings or with solar panels on the roof. The eligibility of carriers updates in real time. What we'll get to is seeing the price in real time. Today, you still would have to have an underwriter review everything after a quote is offered, but eventually we're going to get to quoting and binding in real time.”

On the Future of Brokers

“There will definitely be people that try to remove that broker. But we're really far off from a florist, say, understanding which insurance to buy for the business. Until you solve that problem, you're going to need agents. And to solve that problem you’re going to have to unwind hundreds of years of case law and hundreds of years of insurance clauses, then figure out how to explain those in real time to people so they know what they need. It’s really tricky.”

On the Future of MGAs

“There's just more capital looking for alternative insurance distribution models, so somebody who sets up an MGA can find somebody to back them more easily than in the past. APIs make it simpler, too. It's just so much easier to connect a technology platform, like a surety bond distribution platform, to some carrier that has its digital appetite figured out. You can go as niche as you want because it's easier to spin up these MGAs. I just saw someone offer a Metaverse insurance product, right?”

On Lead Generation

“In both personal and commercial lines, lead generation is this really weird, murky place. Everyone complains about bad leads. Everyone seems to be recycling a lot of leads. And Apple just complicated the issue by increasing security and privacy on phones. I don’t know what the solution is, but I think there will be a fundamental shift on lead generation.”

On Advice for the New Round of Insurtechs

“I would completely stay away from the word ‘disruption’ when you're talking to insurance people. Building a new customer experience is not disrupting insurance, and I cringe when I hear people making claims of disruption. Don't tell people that, because they don't want to hear it. If you’re really going to disrupt some part of the industry, just go do it.”

 

Cheers,

Paul

 

Security Requires Change Management

How one carrier (the Hartford) rolled out MFA to its agents. What was measured, what was learned and how this shapes the future of the industry’s data security.

 


Multifactor authentication (MFA) is no longer just a preferred security protocol—it is a must. But what makes for a successful MFA implementation? It can seem a daunting undertaking for most organizations, but with the right plan, you can make it work and benefit from it.

At the Hartford, we accomplished both internal and external use of MFA and had a smooth time of it. Our success was largely attributable to an excellent change management strategy and thoughtful execution. I’d like to share some of what we did to help others with their MFA onboarding process.

The first step was putting a lead in place who was an expert at change management. We decided to start employing MFA within our own company first—a sort of test drive before making the ask of our broker partners. Our change leader assembled a team, which had representatives from every department at the Hartford. They did a deep dive into our structures, networks and capacity before ever promulgating even a draft of a solution. That months-long research effort gave them time to learn where our strengths were and where there might be a learning curve.

In the initial stage, we developed a set of benchmarks—key performance indicators—such as the acceptable number of attempted logins, successful MFA logins, denials of access and help desk calls. After extensive communication to and education of our staff on the importance and use of MFA, we deployed it internally. 

As MFA went live inside the Hartford, we measured against our key performance indicators to identify problems so we could target corrections to smooth the process before external rollout. We took comments and sought feedback. By the time we moved to external rollout, pretty much all the bugs were removed, and we had a good sense of MFA’s effect on logins and help desk demand, among other systemic processes.
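A minimal sketch of that measurement step might look like the following, with each observed metric checked against its benchmark. The benchmark values are hypothetical; the KPI names mirror the ones described above.

```python
# Sketch of checking rollout metrics against KPI benchmarks.
# Benchmark values are hypothetical.

BENCHMARKS = {
    "mfa_login_success_rate": (">=", 0.97),
    "access_denial_rate": ("<=", 0.02),
    "help_desk_calls_per_1000_logins": ("<=", 5.0),
}

def kpi_report(observed: dict) -> dict:
    """Return True/False per KPI; anything False gets a targeted correction."""
    results = {}
    for name, (op, limit) in BENCHMARKS.items():
        value = observed[name]
        results[name] = value >= limit if op == ">=" else value <= limit
    return results

week1 = {
    "mfa_login_success_rate": 0.95,
    "access_denial_rate": 0.03,
    "help_desk_calls_per_1000_logins": 7.2,
}
print(kpi_report(week1))  # all three out of bounds -> fix before external rollout
```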

The next step was rolling MFA out to our broker and other partners. 

External rollout

Once we had the internal MFA running smoothly, we turned to a select group of agency partners who were willing to work hand in glove with us to implement MFA as part of their interface with us.

We spent time educating the brokerage principals and tech staff about what would be required. That included making sure everyone received their own user ID and password, conforming their systems to certain technical requirements and going through a validation process. All of this is fairly straightforward for most IT staff, and our team was prepared to provide training in case it was needed.

We started communicating with our partners about five months before we turned on MFA. We also had a dedicated help desk, and those specialists would reach out to any group that was having a problem and help get them over the hump. Working with this first batch of agencies gave us good insights for how to improve our outreach as we expanded our efforts to more users.

On the user side, most people are already familiar with MFA through their bank or another institution, such as a computer or cell phone company, so the concept isn’t a shock to the system. If the hidden, technical interface is working well, most people do little more than grumble about the extra step. Then it becomes second nature, and those who think about it are grateful for the added layer of security.

We took a tiered approach. We onboarded 5,000 users one week, then 10,000 the next, etc., bringing on about 90,000 users, the whole time confirming our key performance indicators were within our benchmarks. To ease the actual go-live experience, we sent a daily countdown message to users when they signed on to our systems: “30 days until multifactor authentication will be required for sign-on,” etc. When you do that enough, people can’t wait until that message is gone and MFA begins!
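In code, the tiered schedule and the countdown banner might be sketched like this, using the rough wave sizes mentioned above (the exact split of later waves is illustrative).

```python
# Sketch of a tiered onboarding schedule and go-live countdown banner.
# Wave sizes beyond the first two are illustrative.
from datetime import date

WAVES = [5_000, 10_000, 15_000, 20_000, 20_000, 20_000]  # roughly 90,000 users

def countdown_banner(today: date, go_live: date) -> str:
    days = (go_live - today).days
    if days > 0:
        return f"{days} days until multifactor authentication will be required for sign-on"
    return "Multifactor authentication is now required for sign-on"

print(sum(WAVES))                                            # 90000
print(countdown_banner(date(2022, 6, 1), date(2022, 7, 1)))  # "30 days until ..."
```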

One thing that was particularly important was making sure this was an enterprise effort. For example, producers might talk to only their favorite underwriter, so that person would need a way to escalate any concerns from the producer up to the change management team. Providing that pathway was a very successful and smart thing to do and allowed us to prevent having “silent” pockets of discontent that could manifest in a reduction in business. I’m happy to say there were no serious problems at all, but it was good that everyone on our side knew how to pass along agency information if it did come in.

By taking metrics seriously, we were able to compare implementation across our partners: MGA versus retail agent versus payroll company. Our hypothesis was that there wouldn’t be a difference between the segments of our external partners, and that was true. But that might not be the case with every organization that launches MFA.

See also: 4 Technology Trends for 2022-2023

Lessons learned

We found that both internal and external users were very happy to have a long lead time, a lot of transparency into the reasons for MFA, help with the technical aspects of implementation and suggestions for educating staff. We also decided that some of those key performance indicators were worth integrating into our monthly evaluation of how things are continuing with our agency and other partners. 

We think both the KPIs and process-transformation efforts gave us an improved methodology for other change management in the future, so that’s a real bonus from this process. 

We also know without a doubt that MFA will give us added protections from some of the most common cybersecurity breaches. If someone does get hold of a username and password, having that secondary layer of authentication should put a halt to unauthorized access to our systems. All in all, MFA is beneficial for cybersecurity at a time when bots and bad guys are working 24/7. I hope your rollout goes as well as ours.


Jim Rogers


Jim Rogers is assistant vice president for sales and distribution technology strategy at the Hartford and president of the ID Federation board of directors. ID Federation is a nonprofit coalition working with agency management systems to standardize IDs, passwords and MFA across the industry. 

3 Ways DE&I Can Boost Agencies

Data has shown that, compared with individual decision makers, diverse teams make better decisions 87% of the time.


With nearly half of Generation Z coming from racial and ethnic minorities, the workforce of America is entering a new era. If diversity, equity and inclusion (DE&I) haven't been a priority at your agency, it's time to start thinking about it.

Once a year, as an employee, you may receive an email asking you to attend a meeting with a guest speaker or a training video about DE&I in insurance. Once these activities are over, your day carries on, and the topic may not be mentioned again for another year. If this is you, then DE&I is a moment for your company, not a movement, and it's going to be slow to change.

With a study of North America, Latin America and the Asia-Pacific region showing that 51% of companies have not set diversity and hiring goals, it's clear that many global employers don't believe DE&I affects their business. Nonetheless, DE&I significantly affects the insurance industry, touching employees, customers and profits.

Let's look at how you can start making a difference and address DE&I within your insurance agency.

Why You Should Care

As an insurance agent, your main job is to represent your clients, advise them on the coverage they need and help them find the best policies from the right carriers. And who wants to work with an agent who doesn't understand the client's diverse employee and customer base? This isn't just about checking off HR hiring objectives; fundamentally ignoring DE&I limits your business opportunities.

If you want a happy, productive and stable workforce, then DE&I needs to be on your radar: A survey by CNBC showed that nearly 80% of employees wanted to work for a company that values DE&I.

Diverse and inclusive workplaces are not only beneficial for HR departments by reducing the risk of unconscious bias but have also proved to boost worker productivity. Data has shown that, compared with individual decision makers, diverse teams make better decisions 87% of the time. Due to the blend of background, culture and experience, truly balanced and diverse teams can be the competitive advantage your company needs.

Start Measuring

You can't get your ducks in a row without knowing what you're working with. So, the first thing to do when looking at DE&I within your own company is to measure. Businesses need to analyze their representation statistics to discover areas in need of improvement. Start by asking questions like: What employee segments do we want to define (age, race, gender, ethnicity)? How do we want to measure those segments (by department, level or location)? Then you can answer the question: Are we a diverse workplace? Once you have the answers, you can start analyzing why and driving change.
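As a minimal sketch of that measurement step, the snippet below tallies the share of each segment value within each department. The employee records are invented; a real analysis would pull from an HR system and slice by level and location as well.

```python
# Minimal representation report: share of each segment value per department.
# The employee records are invented for illustration.
from collections import Counter

employees = [
    {"department": "Underwriting", "gender": "F", "age_band": "25-34"},
    {"department": "Underwriting", "gender": "M", "age_band": "35-44"},
    {"department": "Claims",       "gender": "F", "age_band": "45-54"},
    {"department": "Claims",       "gender": "M", "age_band": "25-34"},
    {"department": "Claims",       "gender": "M", "age_band": "35-44"},
]

def representation(records, segment, by):
    """Share of each `segment` value within each `by` group."""
    counts = {}
    for r in records:
        counts.setdefault(r[by], Counter())[r[segment]] += 1
    return {
        group: {k: round(v / sum(c.values()), 2) for k, v in c.items()}
        for group, c in counts.items()
    }

print(representation(employees, segment="gender", by="department"))
# {'Underwriting': {'F': 0.5, 'M': 0.5}, 'Claims': {'F': 0.33, 'M': 0.67}}
```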

Underrepresentation often comes down to unconscious bias in the hiring process, which affects every industry--and insurance is no different.

The only way to address our biases is to recognize them, which is particularly important for company leaders and those involved in recruitment. One way to discover your bias is by taking the Implicit Association Test, which gauges automatic preferences by timing how quickly you sort concepts into paired categories—for example, how quickly an individual associates gay people with ideas like good and bad.

Businesses can also provide training and workshops for all team members on a company-wide scale. It's important that teams share their bias with others in a safe space so everyone can learn from other people's views and opinions.

See also: Underwriting Small Business Post-COVID

Employee Resource Groups

Another opportunity to encourage DE&I is engaging with employee resource groups (ERGs). By encouraging employees who share a common characteristic—whether race, ethnicity, gender, religion or lifestyle—to come together, you foster a supportive and inclusive environment.

Providing groups with executive sponsorship allows them to elevate the conversation around the needs and interests of that group and provides them with essential funding. Companies can take inspiration from others, like Liberty Mutual Group, which sponsors seven different ERGs within its business.

ERGs are essential for insurance businesses: They give new employees a safe space to meet like-minded people and serve as a form of informal mentoring and professional development. The resource groups can also be a place to collect feedback about specific issues, which can then be presented to upper management.

For those within the insurance industry looking for ideas and advice--such as how to establish ERGs--The Enlightened Agent podcast is an interesting and educational listen. In each episode of its DE&I series, different professionals discuss how to improve and enhance the insurance industry, DE&I and the businesses that shape the world. By listening to business leaders' stories and guidance, you can implement the best DE&I practices in your company.

Embracing Neurodiversity

More recently, companies have become increasingly aware of an untapped talent pool--neurodivergent job seekers. Neurodiversity refers to people whose neurological functioning differs from the typical, meaning they process information and learn in atypical ways. Many conditions fall under this category, including autism, ADHD and dyslexia.

People with these conditions are massively underrepresented in the job market, and unemployment among neurodivergent adults can run as high as 40%. Unfortunately, employers have long held the view that people with these conditions are not suited to the workplace.

Brains that work differently can actually give companies a competitive advantage. Neurodivergent candidates can be highly efficient at pattern recognition, mathematics and memory tasks. By recognizing and celebrating those who fall under this category, you can start dismantling the stigma associated with these conditions.

One way to begin widening your talent pool to include neurodivergent candidates is through partnerships. For example, Marsh McLennan, an insurance broking and risk management giant, partnered with the nonprofit Ambitious About Autism. Through this three-year agreement, they aim to make more employers autism-confident and provide hands-on insurance internships to young autistic job seekers.

See also: How to Provide Better Coverage for Employees

Being interested and caring about DE&I within your company is not only an excellent ethical decision but also good for business. By 2044, more than half the U.S. population is projected to belong to a minority group, so by embracing and celebrating diversity within your company you are future-proofing it, too.

The definition of insanity, as the saying often attributed to Einstein goes, is doing the same thing over and over and expecting different results. So, how can we revolutionize the insurance industry if DE&I is still ignored and companies keep hiring carbon copies?

It's time for change.


Jason Keck


Jason Keck is the founder and CEO of Broker Buddha, which transforms the application and renewal process to make agencies far more efficient and profitable.

He is a seasoned technology entrepreneur and brings 20 years of experience across digital and mobile platforms to the insurance industry. Before founding Broker Buddha, Keck led business development teams at industry unicorns, including Shazam and Tumblr.

A Harvard graduate with a degree in computer science, Keck also worked at Accenture and Nextel.

2022 Tech Survey Results: How Carriers and MGAs Address New Challenges

A new survey from OneShield looks at some of the challenges the industry faces, and why the solutions may not be as out of reach as they seem.


In this newly released State of Technology Survey report, carriers and MGAs describe the challenges they face in keeping pace with technology and customer demands.

Insights include:

•    the greatest challenges facing insurers in the marketplace
•    critical strategies to meet customer demands
•    technical roadblocks to innovation
•    where insurers will prioritize technical investment
•    the types of tools most likely to be employed

For insurance professionals interested in the state of technology within the P/C insurance sector, this report offers insights into insurer priorities, challenges, and solution strategies for 2022-2023.
 

Sponsored by ITL Partner: OneShield


ITL Partner: OneShield


OneShield provides business solutions for P&C insurers and MGAs of all sizes. 

OneShield's cloud-based and SaaS platforms include enterprise-level policy management, billing, claims, rating, relationship management, product configuration, business intelligence, and smart analytics. 

Designed specifically for personal, commercial, and specialty insurance, our solutions support over 80 lines of business. OneShield's clients, some of the world's leading insurers, benefit from optimized workflows, pre-built content, seamless upgrades, collaborative implementations, and pricing models designed to lower the total cost of ownership. 

Our global footprint includes corporate headquarters in Marlborough, MA, with additional offices throughout India.

For more information, visit www.OneShield.com


Additional Resources

 

What's Driving Innovation for 2023?

Respondents to our 2022 Insurer Tech Survey reported that their biggest challenges include keeping up with innovation, having sufficient IT resources and staffing to implement critical strategies, and infrastructure limitations that keep them from addressing new opportunities. We've just launched our 2023 Insurer Innovation Survey, and it's a great opportunity to share your perspectives and predictions – and gain immediate access to the aggregated responses from your peers as they unfold. Please share your outlook today!

Take Survey Now.

Closing the Gaps: Expanding your technology ecosystem

The right strategic approach to technology ecosystems brings competitive advantages to forward-looking insurers. Learn how to creatively leverage third-party applications to enhance customer and agent experiences, enable automation and predictive risk modeling, and more.

  • The role of the digital platform in creating a unique market advantage
  • How digital leaders integrate ecosystem partners to engage customers, extend distribution and develop new business models
  • How nimble players get to market faster with innovative capabilities and products
  • Mission-critical APIs for success in 2022
  • Security and vetting considerations for potential third-party solutions

Read More.


 

How to Fix Data Deficit on Cyber

The imprecision about cyber vulnerabilities and how to price insurance is unnecessary. Data is readily available -- if you look in the right places.


With cybersecurity insurance, as with cybersecurity, ignorance is not bliss. All parties in the contract, insurer and insured, need as much information as possible to make wise choices about coverage, pricing and payouts and to prevent unpleasant surprises. Unfortunately, they too often have more questions than answers.

Insurers frequently issue cybersecurity insurance policies based on estimates or even guesses rather than material data. They often lack the visibility or metrics that expose the policyholder’s vulnerabilities and risks or the controls the client is using to minimize the chances of a breach. 

As a result, these companies may charge the policyholder too much – or not enough, which could result in an increase later on. And without data and documentation showing evidence of properly applied and enforced cybersecurity controls and monitoring, the policyholder may find itself in a tight spot when filing a claim. 

If the policyholder can’t attest to the level of damage a breach has caused and measure its potential liability – whether personally identifying information was compromised, for instance – the policyholder may find itself waiting for the payout while the insurer investigates and determines what it is liable for. And then, depending on the results, the business may get less compensation than expected. 

The dispute that could result won’t be pleasant for anyone, and the claimant may suffer a premium increase that it can ill afford.

The insurance data deficit

None of these problems is necessary. Data is readily available regarding any organization’s cybersecurity posture and maturity, its areas of exposure and vulnerabilities, its cyber risks and the controls needed to mitigate them and more.

But companies may not know where to look for this information, and insurers often can’t advise them on how to find and use it. 

Having worked in the insurance industry and now in cyber threat intelligence, I’ve discovered four key actions that cyber insurance policyholders can take to correct this data deficit. In the process, companies can get the most value from their coverage for the least cost. Insurers could save resources, as well, as they issue policies with confidence in their customers’ ability to mitigate risks.

The cyber insurance dynamic duo: data and controls

Intelligent risk management, which is the essence of effective cybersecurity, involves two main components:

  • Data that demonstrates an organization’s areas of digital exposure – where hackers might try to get in – as well as the odds that each of these vulnerabilities will be attacked and the consequences if a breach occurs
  • Controls the entity puts in place to shore up its vulnerabilities, and evidence that it continually monitors and strengthens them as its threat surface changes.

By paying close attention to these two components, your organization can rest assured that it’s protected against cyber attacks and positioned to confidently transfer some of its cybersecurity risk to an insurer.

For the insurer’s part, it’s helpful to know that its policyholders know their risks and are reasonably secure and able to remain that way. Having this confidence can help insurers stabilize premium prices and better serve customers while protecting their own bottom lines. 

See also: Is Cyber Insurance on Brink of Collapse?

4 must-haves for a great cyber insurance outcome

To correct the cyber insurance data deficit, policyholders need diligence and documentation in these four areas:

1. Compliance with a cybersecurity framework. Organizations have a veritable alphabet soup from which to choose: CIS CSC, CBEST, FFIEC, ISO, etc. Which framework is best for you depends partly on the requirements of the industry you’re in. Many prefer the framework from NIST, the National Institute of Standards and Technology.

Often used for the protection of U.S. critical infrastructure, the NIST Cybersecurity Framework (NIST CSF) can be helpful for any organization. Using the NIST CSF to measure security control compliance is voluntary unless yours is a federal government agency or an organization doing business with the federal government. On the other hand, not being NIST-compliant could be risky business, indeed.

NIST has already done the hardest part for you: provided a common set of rules and controls that can guide your enterprise to greater security. The NIST CSF is written in clear, concise language and is designed so that even those just beginning to use a framework to guide their cybersecurity program may find it helpful.

And because critical infrastructure is a national security concern, NIST compliance can assure policyholders and insurers that controls are in place to guard against the latest threats. What’s more, the insurer can more readily qualify the business for the proper level of insurance, saving time and, perhaps, money for both.
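
To make that compliance evidence concrete, the sketch below – a minimal, hypothetical illustration, not an official NIST tool – shows one way an organization might record its control coverage against the five NIST CSF core functions (Identify, Protect, Detect, Respond and Recover) in a form an insurer can review. The control names and statuses are assumptions for illustration.

```python
# Minimal sketch: tracking control coverage against the NIST CSF core functions.
# The controls and their statuses below are illustrative assumptions.
from collections import defaultdict

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

# Each entry: (CSF function, control description, implemented?)
controls = [
    ("Identify", "Maintain an asset inventory of systems and data", True),
    ("Protect",  "Enforce multi-factor authentication for remote access", True),
    ("Detect",   "Centralize log collection and alerting", False),
    ("Respond",  "Maintain and test an incident response plan", True),
    ("Recover",  "Test backups and restore procedures quarterly", False),
]

def coverage_by_function(control_list):
    """Summarize implemented vs. total controls per CSF function."""
    summary = defaultdict(lambda: [0, 0])  # function -> [implemented, total]
    for function, _description, implemented in control_list:
        summary[function][1] += 1
        if implemented:
            summary[function][0] += 1
    return summary

if __name__ == "__main__":
    summary = coverage_by_function(controls)
    for function in CSF_FUNCTIONS:
        done, total = summary.get(function, (0, 0))
        print(f"{function:8s}: {done}/{total} controls in place")
```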

2. A thorough risk assessment. An increasing number of organizations have come to understand the risk-cybersecurity connection in recent years. Yet many fall short of conducting a full-scale cybersecurity risk assessment – one that weighs risks not only in their own organizations but up and down their supply chains.

With increasing threats and attacks directed at the digital domains of poorly protected and aging critical infrastructure, the U.S. federal government has seen the light on the need for proof of risk reduction.

The White House’s February 2021 executive order on America’s supply chains made a good start in encouraging better risk measures, directing federal agencies to assess the risks in critical supply chains, including the digital supply chain.

Software supply chain vulnerabilities have made headlines in recent years. In the SolarWinds breach, cybercriminals planted malware in a routine update to the vendor’s network monitoring software, which was then distributed to its customers. The Log4Shell vulnerability in Apache’s widely used Log4j logging library reached countless applications through their software dependencies. Both incidents affected many thousands of businesses.

From these incidents and others, we’ve come to realize that paying close attention to third-party risks is critical to securing an organization’s systems, networks and data.

More recently, the White House in May 2021 issued an “Executive Order on Improving the Nation’s Cybersecurity” that requires providers of software to federal agencies to assess their supply chain vulnerabilities and risks.

These long-overdue mandates stand to affect not only federal contractors but all enterprises. And, although companies may cast a wary eye on the amount of work required, they will certainly benefit from having a more secure supply chain – and from having a better relationship with their cyber insurance company.

3. Quantification of cyber risks. “Risk” can be a nebulous term that means something different to each entity. Enterprise risk assessments tend to prioritize risks with the non-specific “high,” “medium” and “low,” referring to the likelihood of each risk’s culminating in an attack and the severity of the consequences should an attack occur.

But what are the specifics? Where, precisely, does the insured have a presence online – its “digital footprint”? 

How widespread are its vulnerabilities? Where are these weak spots located? How likely are attackers to find and exploit each of them? 

How resilient is the insured – how able to carry on business as usual – in the event of an attack? How much would an attack cost the enterprise? How would it pay these costs?

These questions might seem daunting because there are no easy answers. Yet insurers ponder them all the time for other forms of insurance. 

When writing an auto insurance policy, an agent will consider the make and model of the car, for instance: an expensive sports car is more susceptible to theft than an older utility vehicle and so might command a higher premium. If the owner lives or works in a big city, the likelihood of an accident goes up, and so might premiums. And so on.

Cyber risk quantification is still a nascent field and may seem intimidating. But a quality threat intelligence solution can help: It can not only uncover an enterprise’s vulnerabilities overall but also help entities prioritize which gaps to address. 
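
As a concrete illustration, the minimal sketch below shows one common quantification approach: expressing each risk as an expected annual loss (likelihood multiplied by estimated impact) rather than “high,” “medium” or “low,” then sorting to decide which gaps to address first. The risks, probabilities and dollar figures are illustrative assumptions, not benchmarks.

```python
# Minimal sketch: quantify risks as expected annual loss and prioritize them.
# All figures below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CyberRisk:
    name: str
    annual_likelihood: float   # estimated probability of occurrence per year
    impact_usd: float          # estimated cost if the event occurs

    @property
    def expected_annual_loss(self) -> float:
        return self.annual_likelihood * self.impact_usd

risks = [
    CyberRisk("Ransomware via unpatched VPN appliance", 0.15, 2_500_000),
    CyberRisk("Credential stuffing against customer portal", 0.40, 300_000),
    CyberRisk("Third-party software supply chain compromise", 0.05, 4_000_000),
]

# Prioritize the gaps with the largest expected annual loss first.
for risk in sorted(risks, key=lambda r: r.expected_annual_loss, reverse=True):
    print(f"{risk.name}: ~${risk.expected_annual_loss:,.0f} expected annual loss")
```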

By alerting policyholders to online chatter, threat intelligence can help them see which business sectors are more at risk and which are less so (and where their organization stands), and whether cybercriminals are targeting a particular business or software product, including their own.

If your business is a retailer and Black Friday – the day after U.S. Thanksgiving and the biggest shopping day of the year – is approaching, you’ll want to double down on your monitoring of dark web forums and any chatter that may refer to your business.

Any area of your business or its suppliers could be a target – point-of-sale systems, for instance. If your threat intelligence finds that 100 dark-web forums have posts about plans to target these systems, you’ll want to tighten your security to ensure attackers can’t use them to get into your business systems or steal data from your customers. 
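
A minimal, hypothetical sketch of that kind of escalation logic follows: if threat intelligence reports that dark-web chatter about a monitored asset exceeds a chosen threshold ahead of a peak period, the asset is flagged for tightened controls. The asset names, counts and threshold are assumptions for illustration.

```python
# Minimal sketch: flag monitored assets whose dark-web chatter exceeds a threshold.
# Asset names, counts and the threshold are illustrative assumptions.
chatter_counts = {
    "point-of-sale systems": 100,   # forums with posts mentioning this asset
    "customer loyalty portal": 12,
    "supplier VPN gateway": 3,
}

ESCALATION_THRESHOLD = 25  # illustrative cutoff

def assets_to_escalate(counts: dict, threshold: int) -> list:
    """Return monitored assets whose chatter count meets or exceeds the threshold."""
    return [asset for asset, forums in counts.items() if forums >= threshold]

for asset in assets_to_escalate(chatter_counts, ESCALATION_THRESHOLD):
    print(f"Escalate monitoring and controls for: {asset}")
```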

And believe it or not, telling your cyber insurer about threats and showing how you’ve dealt with them can actually be reassuring. The insurer is more likely to feel confident about covering your organization if it knows you’re vigilant against potential attacks and can prove it.

4. A good and measurable security awareness policy. How effective is your security awareness program? 

Do you know whether employees, business partners, third-party suppliers and others who are a part of your organization fully understand the need for security, as well as what your security policies and guidelines are, and how to follow them?

Again, policy effectiveness can be tricky to measure – but the proof is in the pudding. Keeping track of cyber events and incidents and how they are handled can tell you how well you’re getting your message through to your people – who are at once the front line and the greatest vulnerability of any company.
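
One hedged illustration of making that measurable: the short sketch below tracks, per quarter, how many incidents were caused by user error versus how many were reported by employees before damage was done – a rising report ratio suggests the awareness message is landing. The figures are illustrative assumptions.

```python
# Minimal sketch: trend user-caused vs. user-reported incidents per quarter.
# The numbers are illustrative assumptions.
quarterly_incidents = [
    # (quarter, incidents caused by user error, incidents reported by users)
    ("2022-Q1", 14, 5),
    ("2022-Q2", 11, 9),
    ("2022-Q3", 7, 16),
]

for quarter, caused, reported in quarterly_incidents:
    report_ratio = reported / (caused + reported)
    print(f"{quarter}: {caused} user-caused, {reported} user-reported "
          f"({report_ratio:.0%} caught by vigilant staff)")
```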

See also: An Often-Overlooked Business Interruption Risk

How to gather the information you need

For cyber insurers, there’s no such thing as too much information. 

Yet, without that information, you can find yourself hamstrung when trying to set fair policies and premiums, forced to make guesses that aren’t well-educated.

Your best bet may be to include my four "must-haves" in policy requirements for cyber insurance applicants, or plan to do the due diligence yourself for each company you insure. As I’ve noted, this model already exists with other types of insurance. Home insurance policies and premiums depend on a number of quantifiable factors, including, perhaps, the age of the home, the materials of which it’s made and where it’s located. Auto insurance, as we’ve seen, can vary in price depending on the type of vehicle, where and how the driver will use it and the driver’s own driving history.

Threat intelligence (TI) is one very effective tool for gathering the data you need to make informed cyber insurance decisions. Today’s best TI software scans three areas of the internet:

  • The public internet, to find where companies are vulnerable to attack
  • The deep web, which comprises email accounts, private messaging and other non-public digital arenas
  • The dark web, the shadowy underbelly of the internet where cybercriminals often discuss targets and tactics as well as sell illicit goods and hacking software.

Each of these areas adds a dimension to your threat picture, for a view that’s truly three-dimensional. This type of threat intelligence lets you see whether your business or that of your client is a cybercrime target. It can show where vulnerabilities and security gaps might be, as well as how much risk they carry, so your company or client can strengthen them. It can reveal which types of data your company or client lost in a breach for more accurate assessment of the damage and how to respond.
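
As a rough illustration of that combined picture, the sketch below – with hypothetical field names and sample findings, not the output of any particular threat intelligence product – groups findings from the three layers and orders them by severity.

```python
# Minimal sketch: combine findings from the three internet layers into one view.
# Field names, severity scale and sample findings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    source_layer: str    # "public internet", "deep web" or "dark web"
    description: str
    severity: int        # 1 (low) to 5 (critical), illustrative scale

findings = [
    Finding("public internet", "Exposed admin login page on a forgotten subdomain", 3),
    Finding("deep web", "Employee credentials circulating in a private channel", 4),
    Finding("dark web", "Forum post offering access to the company's VPN", 5),
]

# Group the combined threat picture by layer, most severe items first.
for layer in ("public internet", "deep web", "dark web"):
    layer_findings = sorted((f for f in findings if f.source_layer == layer),
                            key=lambda f: f.severity, reverse=True)
    for f in layer_findings:
        print(f"[{layer}] severity {f.severity}: {f.description}")
```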

Having this 3-D view helps companies fulfill each of the four must-haves in my list. 

  • Auditors will be able to use the data to determine your compliance with security frameworks. 
  • Risk staff can incorporate the data for more thorough and accurate risk assessments of the enterprise and its supply chain. 
  • The information can also help quantify the risks the policyholder faces for more accurate cyber coverage and premium prices. 
  • And being able to see where attackers are trying to enter and how, and whether they’re successful, can provide insight into the effectiveness of your security awareness policies and program.

Giving diligence its due

Insurers doing due diligence on cyber insurance applicants and policyholders need to make intelligence gathering a priority. The same is true for enterprises.

In the end, the results should be a more cyber-secure world and a stronger cyber-insurance industry. Getting there will take not only a lot of work but also mindset shifts on everyone’s part. The results will be worth it, however: bolstered confidence throughout the industry. Isn’t that the best insurance of all?


Christopher Strand

Christopher Strand is the chief risk and compliance officer at Cybersixgill.

He has spent the last 25 years developing business models and cutting-edge market opportunities within a broad range of IT security businesses. At Cybersixgill, he is responsible for leading the global security risk and compliance business unit, which helps companies and security executives bridge the gap between cybersecurity and regulatory cyber-compliance.

Previously, Strand served as chief compliance officer at IntSights Cyber Intelligence, where he established the first intelligence-based risk and compliance assessment program. Prior to that, Strand was one of the leaders at Carbon Black (formerly Bit9), where he drove the build-out of its cyber-compliance and security division through to the company’s IPO and acquisition by VMware.

Strand is trained as a security auditor, is a PCIP and participates in the development of cyber regulations globally. He is an active contributor and participant with ISACA, ISSA, ISC2 and the PCI SSC, frequently speaking and publishing with a variety of media outlets to advocate for the evolution and alignment of compliance and security frameworks.

The Need for Transparency in Underwriting

Open the black box and combine analytics with underwriter expertise to evaluate the computer’s conclusions and where the information comes from.

Computer keyboard

The commercial underwriting game has changed. Data analytics, artificial intelligence and machine learning provide access to more information from a variety of sources, including social media and publicly available databases. But while many technology solutions have been quick to provide answers to underwriters, they’ve been less focused on showing how those answers are reached and what data is used to reach them. “The answers are in the algorithm” has been a common explanation of how information is sourced, weighted and applied to answer insurance questions.

But what happens when you open that black box and combine analytics with underwriter expertise to evaluate the computer’s conclusions and where the information comes from? The answer: You get more accurate, efficient underwriting, better customer service and more business. 

What does transparency look like?

Transparency doesn’t mean sharing the secret sauce or giving everyone access to intellectual property. It does mean pulling back the curtain so those using the solution can easily see where the information is coming from and make decisions with that information in mind. 

Imagine this scenario. A restaurant is looking for insurance. In the application, the restaurant said it does not do deliveries. To speed up the process, the underwriter working on the file uses an AI-enabled solution to comb the internet for information and automatically pull in answers for the underwriting questions. The solution says the restaurant does, in fact, do deliveries. Because the underwriter is strapped for time, they aren’t able to do the manual work to figure out why there is a discrepancy. The underwriter accepts what the solution provided and denies the coverage because that carrier does not insure delivery restaurants. This leads to a long back and forth with the agent and customer to figure out why the coverage was denied, as the customer is adamant that it does not deliver. Not only does the carrier lose a potential client, but the dispute also diminishes the carrier’s relationship with the agent.

Now consider the same scenario. The only difference is that the underwriter is using a technology solution that’s transparent. The underwriter can see that there is a discrepancy between what the agent submitted and how the solution answered the question, and can go directly to the sources where the delivery information was pulled to see that delivery is via a third party, such as Uber Eats. Because the restaurant doesn’t do deliveries itself, its insurance category doesn’t change, and the carrier can write the risk. The underwriter is able to approve the policy without time-consuming work and a long delay between the agent and the customer.
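
The design idea can be illustrated with a small sketch: every automated conclusion carries the sources it was drawn from, so the underwriter can resolve discrepancies like the delivery question without manual searching. The class, fields and sample URLs below are hypothetical, not any vendor’s API.

```python
# Minimal sketch: an answer object that exposes its evidence sources.
# The structure and sample URLs are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class UnderwritingAnswer:
    question: str
    answer: str
    sources: list = field(default_factory=list)  # where the answer came from

delivery = UnderwritingAnswer(
    question="Does the restaurant offer delivery?",
    answer="Yes - via third-party services only",
    sources=[
        "https://example-restaurant.com/menu (no delivery mentioned)",
        "https://ubereats.example.com/store/example-restaurant (third-party listing)",
    ],
)

# Because the sources are exposed, the underwriter can see the "delivery"
# signal comes from a third-party platform, not the restaurant's own operation.
print(delivery.answer)
for source in delivery.sources:
    print("  evidence:", source)
```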

See also: Eliminating AI Bias in Insurance

Machines don’t replace humans

The value of human expertise is fundamental to the success of the sector. So why are some solutions bypassing professionals’ insurance acumen? Commercial insurance is complicated. There is no one-size-fits-all approach as businesses each have their own particular sets of risks. While AI and data analytics platforms drive underwriter efficiency, it is equally important to provide all of the information they need to easily identify and resolve unique circumstances and make sure customers get the right coverage. 

Advocating for transparency in analytics

Carriers should advocate for transparency. It can save their underwriters significant time and help fuel business. Here are three reasons why data transparency is critical to the insurance buying process:

It builds customer trust. “That is what our underwriting program determined,” is not going to suffice if a customer is wondering why they were denied coverage or their premium ended up being significantly higher than what was quoted. Customers understand businesses use algorithms to speed processes, but they also know the algorithms can be inaccurate. Being able to tell customers exactly where the information comes from and how it is used can help underwriters give clients clear answers to their questions. 

The human-computer combination enables faster interactions. Carriers get the best of both worlds: computer speed coupled with underwriter expertise. Underwriters can make decisions faster. If there is a discrepancy, they can easily resolve it by checking the sources for themselves, eliminating the need for manual search. Finally, underwriters have confidence that the information they are using is accurate. In a non-transparent platform, if an underwriter determines the solution answered a question incorrectly, it could lead them to wonder what else the solution might have gotten wrong. 

It keeps you ahead of the data compliance curve. Consider California’s Consumer Privacy Act or Connecticut’s Personal Data Privacy and Online Monitoring Act. More and more states are enacting data privacy legislation, and the federal government is working on legislation of its own. Data privacy issues and unintended bias in data analysis are growing concerns in the sector. Are solutions using personal information about a company’s leaders or employees that could infringe on their privacy rights? Are algorithms delivering different results for different demographics of people? Carriers that can show exactly what information was used to underwrite a business can readily address regulatory concerns. If an issue is raised about the fairness of a price, for example, the carrier can pinpoint the text used to determine the premium and show a regulator that there was no bias or overreach into employees’ personal information.

Data analytics and AI are increasingly becoming table stakes in insurance underwriting. Now the conversation needs to move to transparency. Easily accessing the data sources that solutions use to determine their answers puts the final decision-making into the hands of insurance professionals – where it belongs. Combining technology with human expertise enables a more efficient and accurate underwriting process that better serves customers and ultimately grows the business.


Prakash Vasant

Prakash Vasant is co-founder and CEO of NeuralMetrics, which provides fast, transparent, accurate and actionable commercial underwriting answers and intelligence for carriers and agencies. The platform leverages AI and natural language processing to analyze unstructured and public data to automate and improve the underwriting process.

He is a serial entrepreneur, launching and scaling successful ventures in finance and technology and leading global teams. Prior to NeuralMetrics, Vasant led a global IT consultancy as well as served in senior positions with an inter-bank currency dealer and a corporate foreign exchange advisory consultancy.