Tag Archives: adverse selection

Disruption of Rate-Modeling Process

How emerging technologies may transform insurance rate modeling

Insurance rate modeling for mass-market consumer products such as P&C, health and life relies heavily on macro risk factors, the “law of large numbers” and building pools of risk. Broadly speaking, outside of specialized lines, relatively little customer-specific data is used in developing rates. Incentives, such as “safe behavior” discounts, are used primarily to encourage good behavior and to help ensure that low-risk prospects do not feel their premiums misrepresent their risk. A practical reason for limiting the process to mostly high-level analysis is that large volumes of data are hard both to collect and to analyze at a discrete level. But emerging technologies are starting to remove some of these limitations, potentially creating ways to optimize risk portfolios in consumer-oriented insurance products.

I have written several articles now talking about the potential for the Internet of Things (IoT) in loss prevention and claims facilitation. While much of my focus has been on technologies related to smart homes, arguably more progress has been made in auto telematics and wearables. Data on driving behaviors and personal biometrics of an extraordinary number of people are now being tracked in real time. These data sets may be used to do more than determine the fastest route to work or calculate the remaining target steps you need to take in a day – the data may be a treasure trove of environmental and behavioral information for insurers. Similarly, smart home devices such as connected smoke alarms and leak sensors, along with home security systems, wireless door locks, etc. are beginning to paint a picture of the risk profile in the home at a level never seen before.

But the technology advancements do not stop at the increase in data availability; much of the emerging opportunity has to do with new computing models and “the cloud.” Not long ago, the resources needed to model rates at an individual level outweighed the value. But we are now in a world where additional computing resources can be launched with the click of a button, and disparate databases can easily be joined for comparison. In other words, the discrete data now exists, and the computing power needed to analyze it at an individual level is finally within reach.

See also: How Tech Is Eating the Insurance World  

Tiptoeing in

Recognizing that technology may enable improvement on both sides of the risk pool by potentially better identifying both low- and high-risk candidates, insurers are beginning to evaluate options to model risk on a more discrete level. This enhanced lens on data may be one of the most interesting opportunities in the insurance market to date. The availability of this data, and the associated computing power to process it, is arguably one of the core pillars of the insurtech revolution – but that discussion is for another article. In the meantime, we are seeing early tests toward enhanced data sets in four key markets: health, life, auto and home.

1) Health and Life – Early tests around wearables conducted by major health and life players seemed to be more about assessing consumer comfort with insurers getting a peek into their lifestyles. For example, fitness trackers have been given away as affinity products to members of a plan. Initially, there was broad skepticism that consumers would have interest, recognizing that insurers were testing the waters around one day having access to more detailed lifestyle data. However, early sentiment proved positive, and individual diagnostic data is now playing an expanding role in premium calculations. Automated collection of this data is not hard to imagine.

2) Auto – Many auto insurers are exploring real-time driving data analysis along with innovative safe-driver rates through OBD data collection – with some starting to require it for participation in certain programs. Consumers, eager to lower their insurance costs, seem more than willing to share how fast they drive or how hard they turn when less expensive rates are in play. (A rough sketch of how such driving data might feed a rate follows this list.)

3) Home – It’s easy to see how early wins in health, life and auto may translate into the homeowners market. Already, new smart home rates are entering the market, and in these cases smart home products may “self-verify” their presence, removing doubt about whether a customer truly has safety devices installed in the home. As various IoT devices in the home begin to communicate with one another, the insurer gains a wealth of new data that can be used to adjust risk down to a specific premise.
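
As a rough illustration of the auto example above, the sketch below shows how raw telematics counts (speeding and hard-braking events relative to miles driven) might be turned into a simple discount or surcharge factor. Every threshold, weight and dollar figure here is an invented assumption for illustration, not any insurer’s actual rating model.

```python
# Hypothetical sketch: turning raw OBD/telematics events into a premium factor.
# All thresholds, weights and the base premium are illustrative assumptions.

def driving_score(miles, speeding_events, hard_brakes):
    """Return a 0 (riskiest) to 100 (safest) score from events per 100 miles."""
    if miles <= 0:
        return None  # not enough driving data to score
    per_100 = 100.0 / miles
    penalty = 8 * speeding_events * per_100 + 5 * hard_brakes * per_100
    return max(0.0, 100.0 - penalty)

def premium_factor(score, max_discount=0.20, max_surcharge=0.25):
    """Map the score to a multiplicative rate factor centered on 1.0."""
    if score is None:
        return 1.0  # no telematics data: fall back to the manual rate
    if score >= 50:
        return 1.0 - max_discount * (score - 50) / 50   # up to a 20% discount
    return 1.0 + max_surcharge * (50 - score) / 50       # up to a 25% surcharge

base_premium = 1200.00  # assumed annual manual-rate premium
score = driving_score(miles=850, speeding_events=3, hard_brakes=6)
print(f"score={score:.1f}, adjusted premium=${base_premium * premium_factor(score):,.2f}")
```

In this toy example, a driver with few harsh events earns a modest discount; a real usage-based program would calibrate such factors actuarially and file them with regulators.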

A Virtuous Circle?

In today’s world of rating, there is an imbalance of information that puts insurers at a disadvantage with insureds. Insureds must represent the value of their property, the current state of the property, the cause of loss when it happens, etc. Generally forced to assume that all statements are true, insurers must price uncertainty into the risk. But moving toward greater data transparency may very well be a win-win for both the insurer and the insured. Low-risk customers may be offered rates more in line with their risk profile. High-risk customers may receive higher premiums, but they may also have clear visibility into the factors affecting their rates and potential corrective actions. Insurers may have less volatility in their portfolio with a better understanding of where the losses may occur. Perhaps this increased data availability will result in lower rates for insureds at maintained or even improved margins for insurers.

But how does the overall market respond with more symmetrical information and greater transparency? More importantly, how do consumers respond when they realize the insurer now knows more specific details about them? What if the rating bar moved from basic personal information, like credit score and claims history, to allowing consumers to opt in for very granular inputs such as: how many steps you took today; whether you sped to work; whether you activated your alarm system before leaving your home? Putting aside the regulatory restrictions, the privacy concerns and the general creepiness of this concept, would consumers be willing to give insurers this very personal data in return for big discounts? If “yes,” would it further ensure good behavior of those that did opt in? Could a “positive self-selection” of sorts start to occur?

In consideration of these potential impacts, there are three economic phenomena that insurers model into rates that may be affected:

1) Adverse selection – People who most need insurance are most likely to buy it, and people less likely to have a loss will opt out – e.g., older folks may opt for more health insurance, or safer drivers may choose less coverage than their daredevil counterparts. The bias toward high-risk consumers buying coverage while low-risk consumers abstain results in higher loss ratios and raises the premiums of those who do participate (a simple numeric illustration follows this list). But if rates were lowered by removing the risk padding, would lower-risk customers be motivated to participate? Would the risk/reward ratio reach a point where self-insurers feel the better bet is to participate in the marketplace?

2) Morale hazard – Insurers bear the risk that insureds, knowing they have insurance, will be careless about protecting their belongings. Why lock your doors if insurance would cover a theft? But when behaviors can be monitored, do consumers act differently? Would “safe” people open up data on their personal lives in return for discounts? Perhaps let the insurer know how many nights a week the alarm is armed or the doors are locked in exchange for a lowest-rate option?

3) Moral hazard – This phenomenon occurs when insureds take on riskier behavior once coverage is obtained. In other words, a driver who increases coverage then takes greater driving risks, rationalizing the change in behavior because he or she is “paying for coverage.” Again, it’s worth contemplating whether behaviors would change if behavioral data were exposed.
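
To make the adverse selection dynamic in item 1 above concrete, here is a minimal simulation using invented figures: a pool of low- and high-risk buyers, a single pooled premium set at average expected cost plus a load, and low-risk buyers dropping out once the premium exceeds their own expected loss by a set tolerance. None of these numbers come from real market data.

```python
# Minimal adverse-selection spiral, with invented figures.
# Premium = pool-average expected cost + 10% load; low-risk buyers drop out
# once the premium exceeds their expected loss by more than 25%.

low_risk, high_risk = 1000, 1000        # buyers of each type in the pool
LOW_COST, HIGH_COST = 400.0, 1600.0     # assumed expected annual loss per buyer
LOAD, TOLERANCE = 1.10, 1.25

for year in range(1, 6):
    pool = low_risk + high_risk
    avg_cost = (low_risk * LOW_COST + high_risk * HIGH_COST) / pool
    premium = avg_cost * LOAD
    print(f"Year {year}: pool={pool:,}, premium=${premium:,.0f}")
    if premium > LOW_COST * TOLERANCE:  # low-risk buyers choose to self-insure
        low_risk = int(low_risk * 0.5)  # half of the remaining low-risk leave
```

Each round, the departure of low-risk buyers skews the pool riskier and pushes the premium higher still; the “virtuous circle” discussed below is simply this loop run in reverse.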

See also: Embrace Tech Before It Replaces You  

Arguably, through increased transparency, a virtuous circle may be created where better information leads to lower rates. Lower rates draw lower-risk candidates into the market; as more lower-risk candidates participate, losses are lessened, which further drives down rates. Additionally, the lowest-risk candidates are the most likely to participate in high-transparency markets, compounding the loss reduction and further driving down rates. Even better, bad actors who know they may not be able to change their behaviors may opt out.

I recognize I am ignoring huge hurdles for this type of transparency: regulatory constraints, privacy issues, consumer interest, etc., but I do feel strongly that early entrants into these types of products may see very interesting results. Basically, better information becomes the great equalizer…

Conclusion

New, high-resolution data sets, along with the computing power needed to make them useful, are finally here. While this added information is not a silver bullet for perfect rate modeling, it certainly offers insurers an opportunity to refine their analysis and reduce the guesswork. The effort to operationalize these new data sets may be significant, and, as noted above, there are real consumer and regulatory concerns around using such highly personal data, but the potential is compelling to consider. At the least, now is the time to start considering where these data sets would be useful as the industry contemplates a move toward highly individualized risk opportunities.

Gravity Is Real — You Can’t Ignore It!

For 10 years, I was an instructor of risk and insurance at LSU in Baton Rouge. I’d occasionally be invited to testify before legislative committees as an insurance expert. Often, some of the pending legislation was designed to solve real problems that were not fixable with insurance. In these cases, my testimony was simple. I’d explain:

“Ladies and gentlemen, today, this legislative body has the ability to outlaw the effects of gravity on all state-owned lands. If this legislation is approved, a citizen can jump off the observation deck on the 27th floor and will die EVEN THOUGH IT IS AGAINST THE LAW. Gravity is unforgiving like that.”

The first three examples below are real and, unfortunately, not sustainable in a long-term insurance model, because we can’t ignore adverse selection any more than we can ignore gravity. The fourth item is a “scientist” moving away from the facts – and becoming (in my opinion) a social engineer acting on feelings.

Consider the brief notes below as an introduction to each issue, not a complete discussion.

1. The Affordable Care Act – I’ll remove the emotion of illness and fairness from this discussion and just look at the numbers. From a Jan. 13, 2017, Wall Street Journal article, see the following statistics:

  • The most expensive 5% of patients use 49% of health spending.
  • The most expensive 20% of patients use 82% of health spending.
  • The healthiest 50% of patients use only 3% of health spending.

Ours is a house divided. 50% of the market is perfect for an insurance model — the other 50% is not, because insurance works when there is a “chance of loss,” not when losses are certain. In a loss-certain model, the No. 1 need is funding — more and more money.

See also: U.S. Healthcare: No Simple Insurtech Fix

2. NFIP (National Flood Insurance Program) – From the Acadiana Advocate (Jan. 26, 2018) see the following headlines: “Hopes for flood insurance deal dim – Another short-term extension expected”

The future of NFIP is threatened by adverse selection. A disproportional number of high-risk buyers populate the pool, and an insufficient number of safe buyers (low-risk properties) exist to assure affordability and thus sustainability.

3. Auto insurance (issues of tort) – In the late 1970s in Louisiana, mandatory auto liability insurance became the law of the land. We can debate the wisdom or appropriateness of this, but it is the law. Today, ours is a house divided – those looking to sue and those fearful of being sued.

Often, our industry invites (and sometimes deserves) lawsuits by being inefficient, ineffective or unreasonable in claims handling. In other cases, lawyers search for incidents and accidents that can do more than indemnify a claimant for a loss, creating wealth, or at least “over-indemnification,” through the courtroom. Our industry is becoming a tort roulette wheel.

On a 140-mile trip from New Iberia to Baton Rouge, I counted 33 billboards for a specific attorney. There were many more for many others. Is this a cost the market is willing and able to pay? How many millions (billions) of dollars are taken out of the risk pool annually for over-litigation? Are we, the premium payers, willing to pay that cost?

4. Fairness in lieu of actuarial science – At its simplest, the insurance process includes four elements. Do these effectively, and you have a green and sustainable business model:

  • Identify the risk to be insured
  • Define the coverages
  • Establish a price (premium)
  • Pay the claims

On Saturday, Jan. 29, 2018, I was driving down a flooded Center Street in New Iberia, concerned about the aforementioned flood article and the viability of the NFIP, when I heard a brief portion of a TED talk by Cathy O’Neil titled “The Era of Blind Faith in Big Data Must End.” O’Neil, a data scientist with a Ph.D., talked about data being accurate but not fair.

Actuarial science demands objective data, but our society is starting to demand “fair.” Can these co-exist? Should bad drivers pay more than good drivers? Should health conditions be considered in underwriting life and health policies?

See also: How Advisers Can Save Healthcare  

I believe insurance is a risk-sharing process requiring underwriting, but it is rapidly moving to a “social welfare” platform. The market will get what it wants or tolerates, but as shown above our traditional insurance model may be sacrificed in the process.

What does this mean in your world? Is it sustainable? What are we as an industry and a society going to do? Address the problems now or wait until these systems collapse or go bankrupt?

“A government that robs Peter to pay Paul can always count on the support of Paul.” — George Bernard Shaw

“We have met the enemy, and he is us” — Pogo comic strip

Lemonade: Chronicle for 2017

In late 2016, we were as nervous as could be. We were about to launch a challenge to a $3 trillion industry, and it was anyone’s guess how we’d be received.

Within hours, feedback from users and influencers allayed our worst fears.

But as the hours turned to days, and days to weeks, questions remained. A full year’s worth of data now offers some answers, and what follows are the highlights, and lowlights, of 2017.

1. “Nobody will trust a company called ‘Lemonade’!”

A major early question was whether a newborn company, with a juvenile name, could engender the necessary trust. Everything was riding on our contrarian theory: that Lemonade’s newness and uniqueness would make it more trustworthy, not less.

You see, traditional insurers often equate trustworthiness with financial strength, which they project by erecting monumental buildings that dominate the skyline.

Skyscrapers weren’t within our budget, but in any event we believed such extravagance sends the wrong signal. People worry their insurer lacks the will to pay, not the means. So we established Lemonade as a public benefit corporation, with a view to signaling something very different. We hoped today’s consumers would find our approach refreshing and trustworthy.

The data suggest that they have.

Since Jan. 1, 2017, Lemonade has insured more than 100,000 homes, with our members entrusting us to insure them against more than $15 billion of losses.

See also: Lemonade’s Latest Chronicle  

Our total sales for 2017 topped $10 million, with ~5% of our sales materializing in the first quarter, and more than 50% in the fourth. This means our sales are on a strong and exponential growth curve.

On launch day, we thought of our team as pioneers and true believers. But after our first year, we know it is our community of more than 100,000 who deserve those accolades. It is they who entrusted billions to a brand new insurer, and it is that trust that is powering the change.

Which brings us to the second thing we now know. We know our customers.

2. “Being the cheapest attracts customers — but the wrong customers”

The boogeyman in insurance is adverse selection. As an insurer, you set your price based on what a customer should cost you on average. But if, instead of attracting average customers, you attract the kind who switch frequently, or claim excessively, you’re selling at a loss, and your days are numbered. Adverse selection is a particular threat to price leaders.

And we were determined to be a price leader.

But while we designed our business for value, we also designed it for values – and it was important to us that our customers appreciate both. Value alone selects adversely, but values select advantageously.

We breathed a sigh of relief when customers tweeted about Lemonade’s low prices a lot, but about its B-Corp and Giveback even more. The tweeting was an encouraging early data point.

As more data came in during the course of the year, our assessment of the adverse selection threat became more rigorous. Over many decades, the insurance industry has learned that people’s education and occupation are highly predictive of the kind of risk they represent. If Lemonade’s customers were below average by these measures, we’d have a problem, no matter what our Twitter feed said.

Good news: They are not.

The stats on Lemonade customers (who, by the way, are 50:50 male and female) suggest our members are more than 100% over-indexed for both graduate degrees and really high-paying jobs. All this notwithstanding the fact that 75% of our members are under the age of 35!

The upshot: Lemonade is attracting the next generation of outstanding insurance customers.

3. “Making claims easy will lead to a flood of claims”

It’s an open secret in the insurance industry that a painful claims process discourages claims. There are only so many times you can hear that “your call is important to us and will be answered in the order in which it was received” before you say “to hell with it” and give up on your claim.

Instant claims? That could unleash a torrent of frivolous claims.

Truth be told, things were hairy for a while. Early in 2017, a couple of large claims arrived in rapid succession. We only had a few customers at that time, and as a proportion of our revenue (known as a “loss ratio”) these few claims were daunting. Statistics taught us to expect this kind of lumpiness in the early days, but we still slept fitfully until our business grew and our loss ratio began to normalize. We were in a much healthier place by year’s end (we report our 2017 loss ratio to regulators next month), and the frequency of claims is in line with our modeling.

Beyond the noisiness that is a byproduct of small numbers, our system seemed to have improved as we fed it more data. For example, our loss ratio among policies sold in 2016 is more than 2X that of policies sold in 2017. This suggests that our underwriting was pretty shoddy in our early days. Definitely a lowlight.

Since then, we’ve taught our systems to be far more careful when underwriting policies, and our bot Maya declined to quote more than $17 million of business in 2017. This has markedly improved the underlying health of our business – but there’s still a ways to go. Early mistakes will continue to drag down our reported loss ratio for a while.

Our knight in shining armor? That’d have to be our claims bot, Jim. When we announced his ability to review, approve and pay a claim in seconds, we surprised a few. Happy to report that, during 2017, AI Jim grew his capacity to pay claims 40X.

Our algorithms are getting better at flagging attempts at fraud, and we reported several of these to the authorities. Yet overall the data shows that honesty is rampant among our members, and what behavioral economists dub reciprocity is alive and well: About 5% of our customers contact us, after their claim is paid, to say their stuff turned up and they want to return the money. Our team has centuries of combined experience in insurance, but this was a first for them all!

A quick look at the instant claims suggests our members spend a lot of time on phones and bikes. But this year had all kinds of losses: big ones like fires and smaller ones like stolen headphones.

We are proud to say that we were (and are!) there for our community in times of need.

Positive reviews of Lemonade’s instant claims

Stopping to smell the roses

2017 wasn’t all roses. We saw shockingly high loss ratios in the first half of the year, some vicious responses to our stand on guns and knock-off attempts by some of the Goliaths of the industry.

At the same time, we saw tremendous adoption by our customers, exciting advances in our tech, licenses from 25 states and a Giveback that amounted to 10% of our revenues.

We’re extremely grateful to our team, our customers and our regulators for making 2017 all that it could be. No doubt 2018 won’t be all roses, either, but we will stop to smell them whenever we can.

‘Close Enough’ Isn’t Good Enough

Insurers stake their businesses on their ability to accurately price risk when writing policies. For some, faith in their pricing is a point of pride. Take Progressive. The auto insurer is so confident in the accuracy of its pricing that it facilitates comparison shopping for potential customers—making the bet it can afford to lose a policy that another insurer has underpriced, effectively passing off riskier customers to someone else’s business.

There are a number of data points that go into calculating the premium of a typical home or auto insurance policy: the claim history or driving record of the insured; whether there is a security system like a smoke or burglar alarm installed; the make, model and year of the car or construction of the home. Another contributing factor, of course, is location, whether it’s due to an area’s vehicle density or crime statistics or distance of homes from a coastline. Insurers pay close attention to location for these reasons, but the current industry-standard methods for determining a location—whether by zip code or street segment data—often substitute an estimated location for the actual one. In many cases, the gap between the estimated and actual location is small enough to be insignificant, but where it’s not, there’s room for error—and that error can be costly.

Studies conducted by Perr&Knight for Pitney Bowes looked into the gap between the generally used estimated location and a more accurate method for insurers, to find out what impact the difference had on policy premium pricing. The studies found that around 5% of homeowner policies and a portion of auto policies—as many as 10% when looking at zip-code level data—could be priced incorrectly because of imprecise location data. Crucially, the research discovered that the range of incorrect pricing—in both under- and overpriced premiums—could vary significantly. And that opens insurers up to adverse selection, in which they lose less-risky business to better-priced competitors and attract riskier policies with their own underpricing.

Essentially, this report discusses why a “close enough is good enough” approach to location in premium pricing overlooks the importance of accuracy—and opens insurers to underpricing risk and adverse selection.

The first part of this paper discusses the business case for hyper-accurate location data in insurance, before going into more detail on the Perr&Knight research and the implications of its findings, as well as considerations when improving location data. It concludes with a few key takeaways for insurers going forward. We hope you find it constructive and a good starting point for your own discussions.

The Business Case for Better Location Data

Precise location data helps insurers realize increased profits by minimizing risk in underwriting, thereby reducing underpricing in policies. These factors work together to improve the overall health of the insurer’s portfolio.

“The basic, common sense principle is that it’s really hard to determine the risk on a property you’re insuring if you don’t know where that is,” says Mike Hofert, managing director of insurance solutions at Pitney Bowes. “Really, the key question is, how precisely do you need to know where it is? If you’re within a few miles, is that close enough?”

While most of the time, Hofert says, the answer might be yes—especially for homes in major hurricane, landslide or wildfire zones, because those homes all have a similar location-based risk profile—it’s not always the case. Where it’s not, imprecise location data can have costly consequences. “There are instances where being off by a little bit geographically turns into a big dollar impact,” he says.

See also: Competing in an Age of Data Symmetry  

Currently, industry-standard location data for homeowner policies typically rely on interpolated street data. That means streets are split into segments of varying length, and homes within a segment are priced at the same risk. However, explains Jay Gentry, insurance practice director at Pitney Bowes, the more precise method is to use latitude and longitude measured at the center of the parcel, where the house is. That can be a difference of a few feet from the segment, or it can be a difference of 500 feet, a mile or more. “It just depends on how good the [segment] data is,” Gentry says.

And that flows into pricing, because when underwriters can more accurately assess the risk of a location—whether it’s where a home is located or where a car is garaged—policies can be priced according to the risk that location actually represents.
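
To make the segment-versus-parcel distinction concrete, here is an illustrative sketch that measures the straight-line gap between an interpolated street-segment geocode and a parcel-centroid geocode, flagging addresses where the gap is large enough that a territory or hazard assignment might differ. The coordinates and the 150-meter review threshold are invented assumptions, not values from the Perr&Knight study.

```python
# Illustrative sketch: gap between an interpolated street-segment geocode and a
# parcel-centroid geocode. Coordinates and the 150 m threshold are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    R = 6_371_000  # mean Earth radius in meters
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * R * asin(sqrt(a))

addresses = [  # (policy id, interpolated lat/lon, parcel-centroid lat/lon) -- made up
    ("HO-100", (30.0045, -91.8190), (30.0046, -91.8192)),
    ("HO-101", (30.0120, -91.8300), (30.0165, -91.8301)),
]

THRESHOLD_M = 150  # assumed gap at which territory or hazard bands could change
for policy_id, (ilat, ilon), (plat, plon) in addresses:
    gap = haversine_m(ilat, ilon, plat, plon)
    status = "REVIEW" if gap > THRESHOLD_M else "ok"
    print(f"{policy_id}: gap {gap:,.0f} m -> {status}")
```

In this toy data, the first address moves only a few meters while the second moves roughly half a kilometer, exactly the kind of case where a “close enough” geocode could land a home in the wrong rating territory.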

It’s tempting to look at the portion of underpriced policies and assume that they’re zeroed out by the overpriced policies an insurer is carrying, but Gentry says that’s the wrong way to look at it—it’s not a “zero sum” game. “If you really start peeling back the layers on that, the issue is that—over a period of time—it rots out the validity of the business,” he says. “If you have an over- and underpriced scenario, the chances are that you’re going to write a lot more underpriced business.”

A key point here is reducing underpricing: when the underlying data leads to policies priced at a lower rate than they should be, not only is the insurer exposed to paying out on a policy for which it has not received adequate premium, but underpriced policies may also come to constitute a larger and larger portion of the overall book. This is essentially adverse selection.

Michael Reilly, managing director at Accenture, explains that if the underlying pricing assumptions are off, then a certain percentage of new policies will be mispriced, whether at too high or too low a rate. “The ones that are overpriced, I’m not going to get,” he says, explaining that the overpriced submissions will find an insurer that more accurately prices at a lower rate. “The ones that are underpriced, I’m going to continue to get and so, over time, I am continuing to make my book worse,” he says. “Because I’m against competitors who know how that [policy] should be priced correctly, my book will start to erode.”

And, if that policy is seriously underpriced, losses could easily outweigh all else. Gentry recalls the example of an insurer covering a restaurant destroyed in the Tennessee wildfires in 2016, which it had underpriced due to an inaccurate understanding of that location’s susceptibility to wildfire. “The entire block was wiped out by the wildfire, and [the insurer] had a $9 million claim that they will never recoup the loss on, based upon the premiums.”

The Value of Precision

Perr&Knight is an actuarial consulting and insurance operations solutions firm, assisting insurers with a range of activities including systems and data reporting, product development and regulatory compliance. It also commonly carries out research in the insurance space, and Pitney Bowes contracted it to conduct a comparison of home and auto policy pricing with industry-standard location data and its Master Location Data set. We spoke with principal and consulting actuary Dee Dee Mays to understand how the research was conducted and what it found. The following conversation has been edited for clarity and length:

How was each study carried out and what kinds of things were you looking to find?

On the homeowners’ side, we looked at the geo-coding application versus the master location data application. And on the personal auto side, we looked at three older versions that are called INT, Zip4 and Zip5, and we compared those results with the master location data result.

In both cases, we selected one insurance company in one state—a large writer—and had Pitney Bowes provide us with all of the locations in the state. For homeowners, they provided us with a database of single-family, detached home addresses and which territory each geo-coding application would put the address in. They provided us with that database and then we calculated what the premiums would be based on those results and how different they would be, given the different territory that was defined.

For both cases, we picked a typical policy, and we used that one policy to say, “Okay, if that policy was written for all these different houses, or for a vehicle with all these different addresses, how much would the premium differ for that one policy under the various systems?”

And what did you find?

What we found [for homeowners] was that 5.7% had a change in territory. So, about 94% had no change under the two systems. It’s coming down to the 5% that do change.

I think that what is more telling is the range of changes. The premium could, under the master location data, either go up 87%, or it could go down 46%. You can see that there’s a big possibility for a big change in premiums, and I would say that the key is, if your premium is not priced correctly, if your price is too high compared with what an accurate location would give you, you are probably not going to write that risk. [If] someone else was able to write it with a more accurate location and charge a lower premium, the policyholder would say, “Well, I want to go with this lower premium.”

See also: Location, Location, Location – It Matters in Insurance, Too

So, you’re not going to get the premium that’s too high, but if you’re using inaccurate location and you come up with a lower premium than an insurer that was using accurate location, you are more likely to write that policyholder.

The studies were conducted based on policies for homeowners in Florida and vehicle owners in Ohio; so what kind of conclusions can we draw about policies in other states?

I think it really depends on what the individual insurance company is using to price its policies. One [example] is that it’s now more common in states like California, Arizona, even Nevada, for companies to have wildfire surcharges—and they determine that based on the location of the property. So it’s definitely applicable in another state like that, because any time you’re using location to determine where the property is and you have rating factors based on the location, you have the potential that more-accurate data will give you a better price for the risk that you’re taking.

Putting a Plan in Place

Michael Reilly works at Accenture with the insurance industry and advises on underwriting regarding pricing efficiencies; he also works with Pitney Bowes to educate insurers about location data and its potential to affect accuracy of premium pricing. We talked to him about Perr&Knight’s findings and the impact that more precise location data can have on pricing. The following conversation has been edited for clarity and length:

Given the finding that more than 5% of policies can be priced incorrectly due to location, what’s the potential business impact for insurers?

It’s a very powerful element in the industry when your pricing is more accurate, when you know that you’ve priced appropriately for the risk that you have. And when there’s this leakage that’s in here, you’ve got to recognize that the leakage isn’t just affecting the 5% to 6% of policies. That leakage, where they’re underpriced, has to be made up from an actuarial discipline. So that underwriting leakage is actually spread as a few more dollars on every other policy that’s in the account. That jacks up all their pricing just a little bit, and it makes them a little bit less competitive. If their pricing is more accurate, that improves the overall quality of their book and improves their ability to offer better pricing throughout their book.
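
To put rough numbers on the leakage-spreading effect Reilly describes, here is a quick back-of-the-envelope calculation; the book size, mispriced share and average shortfall are invented for illustration and are not figures from the study.

```python
# Back-of-the-envelope leakage spread, with invented inputs.
book_size = 100_000          # policies in force (assumed)
underpriced_share = 0.055    # share of policies priced too low (assumed)
avg_shortfall = 400.0        # assumed average annual premium shortfall per such policy

total_leakage = book_size * underpriced_share * avg_shortfall
per_policy_load = total_leakage / book_size
print(f"Total annual leakage: ${total_leakage:,.0f}")
print(f"Implicit load spread onto every policy: ${per_policy_load:,.2f}")
```

Under these assumptions, the shortfall works out to roughly $22 per policy across the whole book, the “few more dollars on every other policy” that quietly erodes competitiveness.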

What are some of the reasons insurers have been slow to act on improving location data?

I think it’s coming from multiple elements. With anything like this, it’s not always a simple thing. One thing is, there are carriers that don’t realize there is an opportunity for better location [data] and how much that better location [could] actually contribute to their pricing. The second—and part of the reason there’s a lack-of-awareness issue—is that the lack of awareness is twofold, because it’s also a lack of awareness by the business. Typically, data purchases are handled either by procurement or by IT, and the business doesn’t think about the imprecisions in its data. The business just trusts that the data it gets from its geolocation vendor is good and moves on with life.

The other piece about this is that replacing a geospatial location is not [a matter of] taking one vendor [out] and plugging in a new one, right? We do have all these policies that are on the books, and I’ve got to figure out how to handle that pricing disruption so I don’t lose customers that are underpriced. I want to manage them through it. I need to look at how I’m pricing and check my filing: do I have to refile because I have a change in rate structure, or does my filing cover the fact that I replaced it with a more accurate system? So, I need to look at a couple of different things in order to get to the right price.

And then, quite frankly, once they open the covers of this, it also starts to raise other questions of, “Oh, wait a second.” If this data element can be better, which other data elements can be improved, or what new data elements can be considered? Could the fire protection score be changed, or average drive speed be used? That’s why we’re starting to talk to carriers and say we might as well look for the other areas of opportunity as well, because we probably have more leakage than just this. This is the tip. It’s very easily identifiable, very easily measurable, but it’s probably not the only source of leakage within your current pricing.

See also: 10 Trends on Big Data, Advanced Analytics  

What we’re trying to help [insurers] do is say, look, if you’re going to purchase this new data, let’s make sure that we have a plan on how we’re going to get in and start to achieve the value relatively quickly. In most cases, if it’s a decent-sized carrier, we know they’re issuing X number of wrong quotes per day because of not having the right location information. So how do we fix this as fast as possible, so we’re not continuing to make the problem worse?

And when you say realizing value quickly, what would be a typical timeline?

There are a couple of elements that will come into play. If someone has to do a refiling, the refiling itself will take a period of time. Assuming they don’t have to do a refiling—and not in all cases will they need to—and depending upon their technology, if they can immediately switch geolocations for new business post-renewals, then you can do that in a very, very short window. At least start to make sure that all new quotes are priced correctly.

Then the question comes in as to how do you want to handle renewals? Whether you want to spread the pricing increase over one year or two years or along those lines? That usually takes a little bit more time to implement within a system, but probably not a significantly long period of time—only a couple of months and then a year to run through your entire book to fully realize the value. Now, if you have to do a filing, all that could be delayed by X number of months.

Key Considerations

Given that location has a material impact on premium pricing, the onus is on insurers to have the most accurate location data available. Those that do will have a competitive advantage over those that don’t. Keep in mind the following considerations:

  • “Close enough” is not always good enough. Even though location is close enough most of the time, imprecision can have big costs when it masks proximity to hazards.
  • The portion of policies affected may be small, but it can have big cost impacts. The range of under- and overpricing varied widely, with some premium pricing off by more than $2,000. And, as Michael Reilly points out, the impact of underwriting leakage is actuarially spread across the entire portfolio, making premiums incrementally less competitive.
  • Underpricing is not “zeroed out” by overpricing. In fact, underpricing opens insurers to adverse selection, in which overpriced policies are lost to more accurately priced competitors, and underpriced policies make up a greater proportion of the business.
  • Time to value can be quick – and new ratings filings are not always needed.


3 Warning Signs of Adverse Selection

The top 25 insurers hold 70% of the market share in workers’ compensation, and, as the adoption of data and predictive analytics grows in the insurance industry, so does the divide between insurers with a competitive advantage and those without it. One of the largest outcomes of this analytics revolution is the increasing threat of adverse selection, which occurs when a competitor undercuts the incumbent’s pricing on the best risks and avoids writing poor-performing risks at inadequate prices.

Every commercial lines carrier faces it, whether it knows it or not. A relative few are actively using adverse selection offensively to carve out new market opportunities from less sophisticated opponents. An equally small group knows it is the unwilling victim of adverse selection, with competitors steadily siphoning off its best long-term risks and leaving behind poor-performing accounts.

It’s the much larger middle group that’s in real trouble — those that are having their lunch quietly stolen each and every day, without even realizing it.

Three Warning Signs of Adverse Selection
Adverse selection is a particularly dangerous threat because it is deadly to a portfolio yet only recognizable after the damage has been done. However, there are specific warning signs to look out for that indicate your company is vulnerable:

  1. Loss Ratios and Loss Costs Climb – When portfolio loss ratios are climbing, it is easy to blame market conditions and the competition’s “irrational pricing.” If you or your colleagues are talking about the crazy pricing from the competition, it could be a sign that your competitor has better information to assess the same risks. For example, in 2009, Travelers Insurance, known to be utilizing predictive analytics for pricing, had a combined ratio of 89% while the P&C industry as a whole had a combined ratio of 101%. (A minimal loss-ratio monitoring sketch follows this list.)
  2. Rates Go Up, and Volume Declines – As loss ratios increase along with losses per earned exposure, the actuarial case emerges: Manual rates are inadequate to cover expected future costs. In this situation, tension grows among the chief decision makers. Raising rates will put policy retention and volume at risk, but failing to raise rates will cut deeply into portfolio profitability. Often in the early stages of this warning sign, insurers opt to raise rates, which makes both acquisition and retention tougher. After another policy cycle, there is often a lurking surprise: The actuary finds that the rate increase was insufficient to cover the higher projected future losses. At this point, adversely selected insurers raise rates again (assuming their competitors are doing the same). The cycle repeats, and adverse selection has taken hold.
  3. Reserves Become Inadequate – When actuaries report signs of mild reserve inadequacy, the claims department often argues that reserving practices haven’t changed but that loss frequency and severity have increased. This leads to major decreases in return on assets (ROA) and forces insurers to downsize and focus on a niche specialization to survive, with little hope of future growth. The fundamental problem leading to this situation is that the insurer cannot identify and price risk with the accuracy its competitors can.
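
As referenced in item 1, a minimal way to watch for the first warning sign is simply to track the loss ratio by period and flag a sustained climb. The quarterly figures below are invented for illustration.

```python
# Illustrative loss-ratio trend check with invented quarterly figures.
# Loss ratio = incurred losses / earned premium; a sustained climb is warning sign 1.

quarters = [  # (quarter, earned premium, incurred losses) -- hypothetical
    ("2016Q1", 10_000_000, 6_200_000),
    ("2016Q2", 10_400_000, 6_800_000),
    ("2016Q3", 10_600_000, 7_400_000),
    ("2016Q4", 10_700_000, 8_100_000),
]

ratios = [(q, losses / premium) for q, premium, losses in quarters]
for q, lr in ratios:
    print(f"{q}: loss ratio {lr:.1%}")

# Flag if the loss ratio has risen in every successive quarter.
if all(ratios[i][1] > ratios[i - 1][1] for i in range(1, len(ratios))):
    print("Warning: loss ratio climbing steadily -- investigate possible adverse selection.")
```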

Predictive Analytics Evens the Playing Field
The easiest way to prevent your business from being adversely selected is to start with the foundation of your risk management — the underwriting. Traditional insurance companies rely only on their own data to price risks, but more analytically driven companies are using a diversified set of data to prevent sample bias.

For small to mid-sized businesses that can’t afford to build out their internal data assets, there are third-party sources and solutions that can provide underwriters with the insight to make quicker and smarter pricing decisions. Having access to large quantities of granular data allows insurers to assess risk more accurately and win the right business for the best price while avoiding bad business.

Additionally, insurers are using predictive analytics to expand their scope of influence in insurance. With market share consolidation on the rise, insurers in niche workers’ compensation markets face even more pressure not only to protect their current business but also to gain the confidence to underwrite risks in new markets and expand their book of business. According to a recent Accenture survey, 72% of insurers are struggling to maintain underwriting and pricing discipline. The trouble will only increase as insurers attempt to expand into new territories without the wealth of data needed to write these new risks appropriately. The market will divide into companies that use predictive models to price risks more accurately and those that do not.

At the very foundation of any adversely selected insurer is the inability to price new and renewal business accurately. Overhauling your entire enterprise overnight to be data-driven and equipped to utilize advanced analytics is an unreasonable goal. However, beginning with a specific segment of your business is not only reasonable but will help you fight adverse selection and lower your loss ratio.

This article first appeared on wci360.com.