Tough Questions for Agencies

As John F. Kennedy said, “There are risks and costs to action. But they are far less than the long-range risks of comfortable inaction.”

It was the ARM Partners Conference in New Orleans on April 18, 2012. I was to speak on change. The attendees, many with bloodshot eyes, were slowly filling the room. The program was the first of the morning. Slow and bloodshot are part of the culture of early a.m. in The Big Easy. I placed a trash can in front of the group and a bottle of baby aspirin on the podium. I explained that “my intention today is to create chest pains, because chest pains change behavior. If the chest pains get too serious, take a baby aspirin and place it under your tongue. If I upset your already queasy stomach, you can throw up in the garbage can.” Nervous laughter followed.

An early slide included two quotes. The first: “Fat, dumb, and happy, commercial banks are being quickly replaced as financial intermediaries.” (Time magazine, June 28, 1993, Bernard Baumohl). Agencies, not just bankers, needed that warning. The second quote was from Peter Drucker: “Whom the gods wish to destroy, they send 40 years of success.” That one was because recurring revenue from renewals makes many agents too comfortable. As John F. Kennedy said, “There are risks and costs to action. But they are far less than the long-range risks of comfortable inaction.”

See also: Are You a Manager or a Leader?

How would your agency look if your marketing and sales were audited to see how well you were taking advantage of your opportunities? Is your organization about performance, sales, marketing, customer intimacy OR the daily transactions and the comfort of your staff and yourself? Auditors are tough: One with the Centers for Disease Control and Prevention in Atlanta said, “When the war is over, the auditor steps onto the battlefield and bayonets the survivors.” Are your agency and your team bruised and bloodied from the battles of yesterday, or up and running forward into the future? Will the marketplace, the ultimate arbiter of success, bayonet you or reward you? Are you the past or the future?

Max DePree says, “The first role of the leader is to define reality.” The following questions may help you begin to define your starting point for tomorrow:
  1. Do you and your team share an understanding of, and commitment to, the vision, values, mission and objectives established for your future? Will each of you and all of you be accountable for your performance and results? Are these your X commandments or X suggestions? Are these right for the world as it is and as it will be?
  2. Is the marketplace you serve or hope to serve in decline, level or in ascendancy? If your answer is in decline or “flatlining,” can you find new products to offer your existing clients? Can you offer your existing products to new clients or, even better, can you offer new products (services) to new clients?
  3. Is your team compatible with the market niches you serve? If you are blessed with some really experienced and wise baby boomers, will they be right for the Gen Xers and Gen Ys who are your tomorrow? Will your English-speaking producers be right for a Laotian population? Will your clients shop producers based on their knowledge or their cultural/gender compatibility?
  4. How will you sell in a non-verbal world? Is your delivery process (sales and service) of choice the preference of your clients and prospective clients? Are they comfortable with what and how you do business? Are you comfortable with what and how they want the relationship to be? CAN YOU ADAPT TO THEIR FUTURE?
  5. What products, important today, might not be available tomorrow for you to sell? Is the National Flood Insurance Program sustainable, for instance, or will its vulnerability to adverse selection ultimately cause it to collapse? Will auto liability coverage be needed with self-driving cars? Will Gen Ys prefer private ownership of cars, or Uber, or public transportation? Will they have the appetite for home ownership that we had? Will your community survive? Will coastal properties be readily available, or will global warming have moved them all off the coast?
  6. What new opportunities might be available to you that are not in your "briefcase" today?
  7. Will the advances in technology allow you to do more with your clients and prospects more efficiently/effectively? In a virtual world, might 7.5% commission be adequate where today you are blessed with 12%? Who will dictate commission levels in the future – you or your clients? Will carriers determine your commissions on what you need or what the market is willing to pay? Could you sell effectively with full disclosure of commission or quotes net of commission?
  8. What will the world of retail (malls and Main Street) be like tomorrow? Will all the action be on the banks of the Amazon?
  9. Will the government finally move to a single payer healthcare system? Will your local doctors now satisfy their needs through their network versus as individual business owners? Will they be entrepreneurs or employees? Will they be in the business of business and the business of medicine, or will they specialize in only medicine?
  10. In the future must you be “too big to fail,” or will you be too small to succeed?
I don’t know the answers. I don’t even know the questions that are appropriate for tomorrow. Your future doesn't depend on me. It depends on you. What do you know? What should you know? What will you do? Can you be profitable regardless of what the market is willing to pay?

See also: 5 Transformational Changes for Clients

About 20 years ago, I was speaking at an agency conference and talked with one of the attendees. He was over 75, very traditional, successful, conservative and very comfortable in his ways. I asked if his exit from his agency by death or retirement would increase or decrease the value of his agency. His response was immediate: “Boy, you done gone from preaching to meddling.”

I now offer you the same question: Are you and the agency you own or work with ready, willing and able to move from yesterday and today into tomorrow? REALLY??? It’s your future.

Mike Manes

Mike Manes was branded by Jack Burke as a “Cajun Philosopher.” He self-defines as a storyteller – “a guy with some brain tissue and much more scar tissue.” His organizational and life mantra is Carpe Mañana.

How AI Can Vanquish Bias

This article by Lemonade co-founder and CEO Daniel Schreiber tackles a profound issue for insurance and offers an innovative solution. The article suggests a smart way to watch for bias hidden in algorithms and to correct for it. In the process, Daniel provides an opening toward a holy grail: being able to price risk accurately for each individual.

The article is well worth your time. We're delighted to be able to share it with you and hope you'll share it, too. The change will require support not just from incumbents and insurtechs but also from regulators, whose structures, as Daniel notes, are reasonably friendly in Europe but would require more adaptation in the U.S.

I won't describe in any detail what Daniel calls his "uniform loss ratio" test, which makes sure that AI-based pricing for individuals produces defensible results for every group when losses are measured against pricing at the group level. But I want to build on his proposed test and explore the implications for how we'll all need to adapt to a world of much more individualized pricing of risk.

First, consider the technical requirements that must be met. Specifically, the data requirements will necessitate a continuous re-examination of privacy issues. The industry is already facing legislation designed to prevent an insurer's access to specific, individual data. A few in the public policy sector have taken this to an extreme by introducing legislation that would deny consumers even the option to voluntarily share data with their insurer for their own benefit. 

Second, the more data that is aggregated by any organization, the more it becomes a target for bad actors. While all insurers ferociously protect their customers' data, the convergence of the required new computational capabilities and vast array of data raises the bar on cyber security significantly. 

Third, basing premiums on an individual's risk profile will intensify the spotlight on operational expenses. As insurers zero in on an individual's risk, that individual will have more transparency about the process and will tend to sign on with whatever insurer can cover his or her risk at lowest cost.

Fourth, how will customers react? The move to individualized pricing creates huge opportunities for innovation, but consumers need to participate in the development. Would we not want consumers to have a choice between traditional, segmented pricing and the new, individual pricing?

The benefits of individualized pricing are clear. If we can be sure to avoid bias, we can take advantage of the full array of capabilities of artificial intelligence. And the "uniform loss ratio" test can get rid of the "ghosts in the machine": biases that are unintentional but that are currently unrecognized and unavoidable given the limitations of our data and computational capabilities. We can then democratize access to services and products and accelerate the move away from ratings and recovery and toward preventing risks.

The journey from here to there:

  • Will require a substantial collection of innovations;
  • Will increase the clarity and urgency of certain issues;
  • Rightly will drive a stake through the heart of discrimination;
  • Represents an abundance of opportunity;
  • Is, let’s face it, inevitable.

Might as well get moving, right?

Regards,

Guy Fraker
Chief Innovation Officer


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

How AI Can Vanquish Bias

A "uniform loss ratio" test can eliminate bias in underwriting and open the way for truly individualized, AI-driven assessments of risk.

Insurance is the business of assessing risks and pricing policies to match. As no two people are entirely alike, that means treating different people differently. But how to segment people without discriminating unfairly?

Thankfully, no insurer will ever use membership in a "protected class" (race, gender, religion...) as a pricing factor. It's illegal, unethical and unprofitable. But, while that sounds like the end of the matter, it’s not.

Take your garden-variety credit score. Credit scores are derived from objective data that don’t include race and are highly predictive of insurance losses. What’s not to like? Indeed, most regulators allow the use of credit-based insurance scores, and in the U.S. these can affect your premiums by up to 288%. But it turns out there is something not to like: Credit scores are also highly predictive of skin color, acting in effect as a proxy for race. For this reason, California, Massachusetts and Maryland don’t allow insurance pricing based on credit scores.

Reasonable people may disagree on whether credit scores discriminate fairly or unfairly—and we can have that debate because we can all get our heads around the question at hand. Credit scores are a three-digit number, derived from a static formula that weighs five self-explanatory factors.

But in the era of big data and artificial intelligence, all that could change. AI crushes humans at chess, for example, because it uses algorithms that no human could create, and none fully understand. The AI encodes its own fabulously intricate instructions, using billions of bits of data to train its machine learning engine. Every time it plays (and it plays millions of times a day), the machine learns, and the algorithm morphs. What happens when those capabilities are harnessed for assessing risk and pricing insurance?

Many fear that such "black box" systems will make matters worse, producing the kind of proxies for race that credit scores do but without giving us the ability to scrutinize and regulate them. If five factors mimic race unwittingly, some say, imagine how much worse it will be in the era of big data!

But, while it's easy to be alarmist, machine learning and big data are more likely to solve the credit score problem than to compound it. You see, problems that arise while using five factors aren’t multiplied by millions of bits of data—the problems are divided by them. To understand why, let's think about the process of using data to segment—or "discriminate"—as evolving in three phases.
Phase 1:
In Phase 1 all people are treated as though they are identical. Everyone represents the same risk and is therefore charged the same premium (per unit of coverage). This was commonplace in insurance until the 18th century. Phase 1 avoids discriminating based on race, ethnicity, gender, religion or anything else for that matter, but that doesn't make it fair, practical or even legal.

One problem with Phase 1 is that people who are more thoughtful and careful are made to subsidize those who are more thoughtless and careless. Externalizing the costs of risky behavior doesn’t make for good policy and isn't fair to those who are stuck with the bill. Besides, people who are better-than-average risks will seek lower prices elsewhere, leaving the insurer with average premiums but riskier-than-average customers (a problem known as "adverse selection"). That doesn't work.

Finally, best intentions notwithstanding, Phase 1 fits the legal textbook definition of "unfair discrimination." The law mandates that, subject to "practical limitations," a price is "unfairly discriminatory" if it "fails to reflect with reasonable accuracy the differences in expected losses." In other words, within the confines of what's practical, insurers must charge each person a rate that’s proportionate to the person's risk. Which brings us to Phase 2.
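
That dynamic is easy to simulate. Here is a minimal sketch (all numbers invented, with risk expressed on the same abstract 1-100 scale used in the examples that follow): once everyone pays the population-average rate, the better-than-average risks exit, and the average risk of the remaining pool immediately exceeds the premium collected.

```python
import random

# A minimal sketch of adverse selection under Phase 1 uniform pricing.
# All figures are hypothetical, for illustration only.
random.seed(0)
population = [random.uniform(0, 100) for _ in range(100_000)]  # true risk scores
avg_risk = sum(population) / len(population)
premium = avg_risk  # everyone is charged the population-average rate (~50)

# Better-than-average risks can find cheaper coverage elsewhere and leave.
remaining = [r for r in population if r >= premium]
new_avg = sum(remaining) / len(remaining)

print(f"uniform premium:     {premium:.1f}")
print(f"avg risk after exit: {new_avg:.1f}")  # ~75: premiums no longer cover losses
```

Each round of exits pushes the break-even premium higher still, which is why Phase 1 pricing unravels.
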
Phase 2:
Phase 2 sees the population divided into subgroups according to their risk profile. This process is data-driven and impartial, yet, as the data are relatively basic, the groupings are relatively crude. Phase 2—broadly speaking—reflects the state of the industry today, and it's far from ideal. Sorting with limited data generates relatively few, large groups—and two big problems.

The first is that the groups may serve as proxies that affect protected classes. Take gender. Imagine, if you will, that women are—on average—better risks than men (say the average risk score for a woman is 40, on a 1-100 scale, and is 60 for men). We'd still expect many women to be worse-than-average risks, and many men to be better than average. So while crude groupings may be statistically sound, Phase 2 might penalize low-risk men by tarring all men with the same brush.

The second problem is that—even if the groups don’t represent protected classes—responsible members of the group are still made to pay more (per unit of risk) than their less responsible compatriots. That’s what happens when you impose a uniform rate on a nonuniform group. As we saw, this is the textbook definition of unfair discrimination, which we tolerate as a necessary evil, born of practical limitations. But the practical limitations of yesteryear are crumbling, and there's a four-letter word for a "necessary evil" that is no longer necessary... Which brings us to Phase 3.
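
A short sketch makes the first problem concrete, using the hypothetical averages above (women around 40, men around 60 on the 1-100 scale; the distributions and spread are invented for illustration): charging every man the male group rate overcharges the many men who are individually excellent risks.

```python
import random

# Toy illustration of Phase 2 group pricing (hypothetical numbers).
# Scores are clamped to the 1-100 risk scale used in the text.
random.seed(1)
men = [min(max(random.gauss(60, 20), 1), 100) for _ in range(50_000)]

group_rate_men = sum(men) / len(men)  # every man pays roughly 60

# Men who are actually better risks than the average woman (score 40)...
low_risk_men = [r for r in men if r < 40]
share = len(low_risk_men) / len(men)
avg_overcharge = group_rate_men - sum(low_risk_men) / len(low_risk_men)

print(f"male group rate:                     {group_rate_men:.0f}")
print(f"men better than the female average:  {share:.0%}")   # roughly 16%
print(f"their average overcharge (risk pts): {avg_overcharge:.0f}")
```
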
Phase 3:
Phase 3 continues where Phase 2 ends: breaking monolithic groups into subgroups. Phase 3 does this on a massive scale, using orders of magnitude more data, which machine learning crunches to produce very complex multivariate risk scores. The upshot is that today's coarse groupings are relentlessly shrunk, until—ultimately—each person is a group of one. A grouping that in Phase 2 might be a proxy for men, and scored as a 60, is now seen as a series of individuals, some with a risk score of 90, others of 30 and so forth. This series still averages a score of 60—but, while that average may be applied to all men in Phase 2, it's applied to none of them in Phase 3.

In Phase 3, large groups crumble under the weight of the data and the crushing power of the machine. Insurance remains the business of pooling premiums to pay claims, but now each person contributes to the pool in direct proportion to the risk the person represents—rather than the risk represented by a large group of somewhat similar people. By charging every person the same, per unit of risk, we sidestep the inequity, illegality and moral hazard of charging the careful to pay for the careless, and of grouping people in ways that serve as a proxy for race, gender or religion. It's like we said: Problems that arise while using five factors aren’t multiplied by millions of bits of data—the problems are divided by them.

Insurance Can Tame AI

It's encouraging to know that Phase 3 has the potential to make insurance fairer, but how can we audit the algorithm to ensure it actually lives up to this promise? There's been some progress toward "explainability" in machine learning, but, without true transparency into that black box, how are we to assess the impartiality of its outputs? By their outcomes. But we must tread gingerly and check our intuitions at the door. It's tempting to say that an algorithm that charges women more than men, or black people more than white people, or Jews more than gentiles is discriminating unfairly. That's the obvious conclusion, the traditional one, and—in Phase 3—it's likely to be the wrong one.

Let's say that I am Jewish (I am) and that part of my tradition involves lighting a bunch of candles throughout the year (it does). In our home, we light candles every Friday night and every holiday eve, and we'll burn through about 200 candles over the eight nights of Hanukkah. It would not be surprising if I, and others like me, represented a higher risk of fire than the national average. So, if the AI charges Jews, on average, more than non-Jews for fire insurance, is that unfairly discriminatory?

It depends. It would definitely be a problem if being Jewish, per se, resulted in higher premiums whether or not you’re the candle-lighting kind of Jew. Not all Jews are avid candle lighters, and an algorithm that treats all Jews like the "average Jew" would be despicable. That, though, is a Phase 2 problem. A Phase 3 algorithm that identifies people’s proclivity for candle lighting, and charges them more for the risk that this penchant actually represents, is entirely fair. The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish. It's hard to overstate the importance of this distinction. All cows have four legs, but not all things with four legs are cows.
The upshot is that the mere fact that an algorithm charges Jews—or women, or black people—more on average does not render it unfairly discriminatory. Phase 3 doesn't do averages. In common with Dr. Martin Luther King, we dream of living in a world where we are judged by the content of our character. We want to be assessed as individuals, not by reference to our racial, gender or religious markers. If the AI is treating us all this way, as humans, then it is being fair. If I'm charged more for my candle-lighting habit, that's as it should be, even if the behavior I’m being charged for is disproportionately common among Jews. The AI is responding to my fondness for candles (which is a real risk factor), not to my tribal affiliation (which is not).

So if differential pricing isn't proof of unfair pricing, what is? What outcome is the telltale sign of unfair discrimination in Phase 3? Differential loss ratios. The "pure loss ratio" is the ratio of the dollars paid out in claims by the insurance company to the dollars it collects in premiums. If an insurance company charges all customers a rate proportionate to the risk they pose, this ratio should be constant across their customer base. We'd expect to see fluctuations among individuals, sure, but once we aggregate people into sizable groupings—say by gender, ethnicity or religion—the law of large numbers should kick in, and we should see a consistent loss ratio across such cohorts. If that's the case, that would suggest that even if certain groups—on average—are paying more, these higher rates are fair, because they represent commensurately higher claim payouts. A system is fair—by law—if each of us is paying in direct proportion to the risk we represent. This is what the proposed Uniform Loss Ratio (ULR) test tests.

It puts insurance in the enviable position of being able to keep AI honest with a simple, objective and easily administered test. It is possible, of course, for an insurance company to charge a fair premium but then have a bias when it comes to paying claims. The beauty of the ULR test is that such a bias would be readily exposed. Simply put, if certain groups have a lower loss ratio than the population at large, that would signal that they are being treated unfairly. Their rates are too high, relative to the payout they are receiving.

ULR helps us overcome another major concern with AI. Even though machines do not have inherent biases, they can inherit biases. Imagine that the machine finds that people who are arrested are also more likely to be robbed. I have no idea whether this is the case, but it wouldn't be a shocking discovery. Prior run-ins with the police would, in this hypothetical, become a legitimate factor in assessing property-insurance premiums. So far, so objective. The problem arises if some of the arresting officers are themselves biased, leading—for example—to an elevated rate of black people being arrested for no good reason. If that were the case, the rating algorithm would inherit the humans' racial bias: A person wouldn't pay more insurance premiums for being black, per se, but the person would pay more for being arrested—and the likelihood of that happening would be heightened for black people. While my example is hypothetical, the problem is very real. Worried about AI-inherited biases, many people are understandably sounding the retreat. The better response, though, is to sound the advance.
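
To make the mechanics concrete, here is a minimal sketch of the ULR test as described, plus the simple rate correction it enables. All policy figures are invented, and a real implementation would need cohorts large enough for the law of large numbers to apply, along with actuarial credibility adjustments.

```python
from collections import defaultdict

def loss_ratios(policies):
    """policies: iterable of (cohort, premium, claims_paid) tuples."""
    premiums, claims = defaultdict(float), defaultdict(float)
    for cohort, premium, paid in policies:
        premiums[cohort] += premium
        claims[cohort] += paid
    overall = sum(claims.values()) / sum(premiums.values())
    return overall, {c: claims[c] / premiums[c] for c in premiums}

def ulr_test(policies, tolerance=0.05):
    """Flag cohorts whose loss ratio deviates from the book-wide ratio."""
    overall, ratios = loss_ratios(policies)
    return overall, {c: r for c, r in ratios.items() if abs(r - overall) > tolerance}

# Invented book of business: cohort B pays premiums far out of proportion
# to its claims, so its loss ratio sits well below the overall ratio.
book = [("A", 1000, 650), ("A", 1200, 800), ("B", 1500, 600), ("B", 1400, 500)]
overall, flagged = ulr_test(book)
print(f"overall loss ratio: {overall:.2f}")      # 0.50
print(f"cohorts failing the test: {flagged}")    # A is high (underpriced), B low (overpriced)

# One self-correcting step: scale each flagged cohort's rates toward a
# uniform loss ratio (B's rates come down; A's go up).
adjustment = {c: r / overall for c, r in flagged.items()}
print(f"rate multipliers to restore uniformity: {adjustment}")
```

The same feedback loop is what lets a system shed inherited biases, as discussed next: a factor that overprices a subgroup shows up as a depressed loss ratio and gets weighted back down.
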
You see, machines can overcome the biases that contaminate their training data if they can continuously calibrate their algorithms against unbiased data. In insurance, ULR provides such a true north. Applying the ULR test, the AI would quickly determine that having been arrested isn’t equally predictive of claims across the population. As data accumulate, the "been arrested" group would subdivide, because the AI would detect that for certain people being arrested is less predictive of future claims than it is for others. The algorithm would self-correct, adjusting the weighting of this datum to compensate for human bias.

(When a system is accused of bias, the go-to defense runs something like: "But we don't even collect information on gender, race, religion or sexual preference." Such indignation is doubly misplaced. For one, as we've seen, systems can be prejudiced without direct knowledge of these factors. For another, the best way for ULR-calibrated systems to neutralize bias is to actually know these factors.)

Bottom line: Problems that arise while using five factors aren't multiplied by millions of bits of data—the problems are divided by them.

The Machines Are Coming. Look Busy.

Phase 3 doesn't exist yet, but it's a future we should embrace and prepare for. That requires insurance companies to redesign their customer journey to be entirely digital and reconstitute their systems and processes on an AI substrate. In many jurisdictions, how insurance pricing is regulated also must be rethought. Adopting the ULR test would be a big step forward. In Europe, the regulatory framework could become Phase-3-ready with minor tweaks. In the U.S., filing rates in a simple and static multiplication chart for human review doesn't scale as we move from Phase 2 to 3. At a minimum, regulators should allow these lookup tables to include a column for a black box "risk factor." The ULR test would ensure these never cause more harm than good, while this additional pricing factor would enable emerging technologies to benefit insurers and insureds alike.

Nice to Meet You

When we meet someone for the first time, we tend to lump them with others with whom they share surface similarities. It's human nature, and it can be unfair. Once we learn more about that individual, superficial judgments should give way to a merits-based assessment. It's a welcome progression, and it's powered by intelligence and data. What intelligence and data have done for humanity throughout our history, artificial intelligence and big data can start to do for the insurance industry. This is not only increasingly possible as a matter of technology, it is also desirable as a matter of policy. Furthermore, as the change will represent a huge competitive advantage, it is also largely inevitable. Those who fail to embrace the precision underwriting and pricing of Phase 3 will ultimately be adversely selected out of business.

Insurance is the business of assessing risks and pricing policies to match. As no two people are entirely alike, that means treating different people differently. For the first time in history, we’re on the cusp of being able to do precisely that.

Daniel Schreiber

Daniel Schreiber is CEO and co-founder at Lemonade, a licensed insurance carrier offering homeowners and renters insurance powered by artificial intelligence and behavioral economics. By replacing brokers and bureaucracy with bots and machine learning, Lemonade promises zero paperwork and instant everything.

3 Big Challenges on the Way to Nirvana

To fulfill insurtech's promise, insurers must get their heads around cognitive computing, big data and data exchange standards.

We hear almost daily how insurtech is disrupting the once-staid insurance industry. The main ingredients are big data, artificial intelligence, social media, chatbots, the Internet of Things and wearables. The industry is responding to changing markets, technology, legislation and new insurance regulation. I believe insurtech is more collaborative than disruptive. There are many ways insurance technology can streamline and improve current processes through digital transformation.

Cognitive computing, a technology designed to mimic human intelligence, will have an immense impact. The 2016 IBM Institute for Business Value survey revealed that 90% of outperforming insurers believe cognitive technologies will have a big effect on their revenue models. The ability of cognitive technologies, including artificial intelligence, to handle structured and unstructured data in meaningful ways will create entirely new business processes and operations. Already, chatbots like Alegeus’s “Emma,” a virtual assistant that can answer questions about FSAs, HSAs and HRAs, and USAA’s “Nina” are at work helping policyholders. These technologies aim to promote, not hamper, progress, but strategies for assimilating these new “employees” into operations will be essential to their success.

Managing the flood of data is another major challenge. Using all sorts of data in new, creative ways underlies insurtech. Big data is enormous and growing in bulk every day. Wearables, for instance, are providing health insurers with valuable data. Insurers will need to adopt best practices to use data for quoting individual and group policies, setting premiums, reducing fraud and targeting key markets.

See also: Has a New Insurtech Theme Emerged?

Innovative ways to use data are already transforming the way carriers are doing business. One example is how blocks of group insurance business are rated. Normally, census data for each employee group must be imported by the insurer to rate and quote, but that’s changing. Now, groups of clients can be blocked together based on shared business factors and then rated and quoted by the experience of the group for more accurate and flexible rating.

Cognitive computing can also make big data manageable. Ensuring IT goals link back to business strategy will help keep projects focused. But simply getting started is probably the most important thing. With cognitive computing, systems require time to build their capacity to handle scenarios and situations. In essence, systems will have to evolve through learning to a level of intelligence that will support more complex business functions.

Establishing effective data exchange standards also remains a big challenge. Data exchange standards should encompass data aggregation, format and translation and frequency of delivery. Without standards, chaos can develop, and costs can ratchet up. Although there has been traction in the property and casualty industry with ACORD standards, data exchange standards for group insurance have not become universal.

See also: Insurtech’s Approach to the Gig Economy

The future is bright for insurers that place value on innovating with digital technologies and define best practices around their use. It’s no longer a matter of when insurance carriers will begin to use cognitive computing, big data and data standards, but how.

Risks, Opportunities in the Next Wave

Climate change, the rise of new ecosystems and operating models and more inclusive insurance are looming.

As insurance executives look out for the industry’s next wave, they will see a paradox of great risk and opportunity. The most serious threats — societal megatrends, disruptive technology advancements and intensifying competition from both new and traditional players — also hold the greatest potential for growth and transformation.

As the strategic evolution of the industry accelerates, the most effective response for insurers is to harness the power of change and thoughtfully design their futures. They must develop their vision for the future and adjust their strategic and tactical plans to realize that vision.

Certainly, these recommendations apply to three of the top issues the industry faces — climate change, the rise of new ecosystems and operating models and more inclusive insurance. These are just a few of the trends and scenarios we explore in our recently released report titled NextWave Insurance: personal lines and small commercial.

Climate change: Climate change is arguably the biggest challenge facing humanity today. For insurers, it also presents an array of new uncertainties that make pricing risk harder than ever. The potential impact of climate change on the insurance sector is staggeringly large. Just consider these numbers:

  • $219 billion: combined global insurance losses from natural disasters, 2017–18 (Swiss Re)
  • 90%: proportion of natural disaster costs that can be attributed to weather-related events in an average year (Munich Re)
  • Five times: total economic losses caused by hurricanes in 2017, relative to the average of the previous 16 years (Aon Benfield)

As storms grow more severe, insurers have a clear opportunity to offer increased protection to families, businesses and communities. Only 30% of catastrophic losses were covered by insurance between 2009 and 2018, according to Aon Benfield. It also estimates that there is a $180 billion global protection gap for weather-related risks.

Of course, insurers must be able to accurately model and price the risk of climate change if they are to collect more premium dollars. They must also understand the potentially detrimental impact of pricing customers out of the market and increasing the underserved community.

As societies around the world come to terms with the implications of global climate change, it’s clear that the insurance industry has a leading role to play in managing risk and offering protection. The earlier that firms grapple with and understand these complex climate-related risks, the more likely they are to derive value from them. Instead of waiting for perfect information, firms should take a flexible approach to this fast-moving topic and embed climate-related considerations into their decision-making.

The rise of ecosystems: Today’s insurance marketplace is hypercompetitive, with extremely tight margins, slow (if any) growth and high operating costs. The industry’s current economics are unsustainable, which means insurers need to rethink their business models.

See also: The Insurance Lead Ecosystem  

Ecosystems, which entail multiple companies partnering to offer specialized, but complementary, services in mutually beneficial ways, are one way for them to enhance the value of their offerings. Ecosystems can take many forms — strategic partnerships, alliances, mergers and acquisitions and joint ventures. The cloud, artificial intelligence and new data sources are key to enabling the development of ecosystems and other new business models.

Early adopters and forward-looking insurers can capture market share by defining their role in the ecosystem relative to other types of entities (e.g., sharing platforms, social media, insurtechs, data providers, customer associations and business services). By connecting with insurtechs, leaders can rapidly add innovative technologies and enhance business processes and customer experiences. Ecosystems and other new operating models will spark innovation and change multiple parts of the business.

Direct, digital and embedded sales will become dominant channels for growth, and ecosystems can help position insurers to capture their fair share of revenue. Subscription models will make insurance more deeply woven into consumers’ everyday lives, clarifying the value insurers deliver.

Ecosystems are one example of how insurers will change both what they deliver and how they deliver it. And the industry appears ready to adopt these models; a full 76% of insurance executives view partnerships and ecosystems as determinants of a future competitive advantage, according to Swiss Re. Small and mid-tier carriers that lack focus and differentiation may find it hard to make the required investments in people and technology, while achieving their financial targets.

More inclusive insurance: Insurers are well-positioned to help protect the many underinsured consumers and businesses around the world. They must find ways to engage younger consumers — so-called “generation rent” — sooner. As these consumers wait longer to purchase vehicles (which they may never do), buy homes, get married and have children, their first interactions with insurers happen later in life.

See also: Opportunities and Risks in the IoT  

Insurers must innovate with technology to engage and support the underinsured and other underserved markets. It’s worth noting how insurers in emerging markets exhibited great creativity in using mobile phones to provide microinsurance, asset-based coverages and embedded insurance purchases in their efforts to connect to the underinsured. These approaches are likely to succeed with the underserved and underinsured segments in mature markets, too. As carriers use greater amounts of information and advanced analytics, they need to be sensitive to pricing customers out of the market.

Seizing opportunity while navigating risk

The fundamental question to ask is: Will growth opportunities outweigh the threats in the next wave of insurance? Insurers’ actions and investments in the next five to 10 years will determine if they maximize the upside of these opportunities or struggle with the downside.

The views expressed by the presenters are their own and not necessarily those of Ernst & Young LLP or other members of the global EY organization.


Ed Majkowski

Ed Majkowski is EY’s insurance sector leader for the Americas and is responsible for EY’s consulting businesses, markets and clients in this region.

How to Link Heart Health to Insurance

Life and critical illness products protect policyholders from financial loss but until now have done little to safeguard customers’ health.

Cardiorespiratory fitness (CRF) is a measure of the body’s ability to supply oxygen to muscles, including the heart, during sustained levels of exercise. Whether or not you believe the hype that just sitting around poses a significant health risk, the truth is that most people could do with exercising more. An inverse association between CRF and mortality is well-established. A recent study of the long-term mortality of physically active adults found the benefit of increased CRF is independent of age, sex, race/ethnicity and comorbidities.

Exercise provides numerous health benefits, including reduction in coronary artery disease, hypertension, diabetes, stroke and cancer. The same study confirms that the greatest chance for survival is associated with the highest aerobic fitness, debunking the notion that exercise benefits plateau quickly or even result in harm. So there really is no excuse for running a bath instead of a mile, or indeed for avoiding any exercise you fancy that raises your heart rate above its resting level.

This is good news. But as everyone’s level of cardio-fitness is different, the correct dose of exercise needed to confer any real benefit is less obvious. That gap in accessible knowledge is why it’s also good news that there has been such progress with Personalized Activity Intelligence (PAI), a health score built on CRF. PAI helps add years of healthy life through personalized activity engagement and has been scientifically proven to reduce the risk of cardiovascular disease and early death. PAI provides individual guidance on the most beneficial exercise dose by measuring heartbeat data and translating it to a PAI score.

See also: New Efficiencies in Life Insurance

PAI takes account of resting and maximum heart rates, adjusted for exercise intensity and collected over a seven-day rolling period to encourage consistent exercise behavior. Any activity that increases the heart rate above a threshold and into the CRF training zone may generate points, meaning people of all fitness levels can score points from activities they enjoy, whether that’s kayaking down rapids, mowing the lawn or running after the grandkids.

Physical activity can be measured simplistically but without much insight into the physical workload achieved. PAI, however, measures heart rate and uses an algorithm that calibrates to an individual’s heart effort and is helpful in creating personalized programs for sustained physical activity. PAI has shown the positive impact that sustained physical activity has on heart health and represents a more effective and realistic approach than setting daily step or exercise targets. The guidance indicates when the intensity of exercise does not contribute to increased levels of CRF, or when fitter people with higher heart rate reserve (the difference between resting heart rate and maximum heart rate, which is used to calculate the optimal cardiorespiratory fitness level in aerobic exercise) need to challenge themselves more.

Life and critical illness products do an amazing job protecting policyholders from financial loss but until now have provided little practical help in safeguarding customers’ health. We believe PAI has the potential to motivate behavioral change, helping policyholders to become more physically active and stick to it, while reducing their risk of disease and premature death. As insurance seeks to shift its emphasis from protection to prevention, this winning formula is possibly some of the best news yet.
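
As a rough illustration of the scoring idea described above, a heart-rate-based rolling score might look like the sketch below. The threshold, point values and formula here are invented for illustration; the actual PAI algorithm is proprietary to its developers.

```python
from collections import deque

def daily_points(minute_hr, resting_hr, max_hr, threshold=0.60):
    """Score one day of per-minute heart-rate samples: only effort above
    a personal intensity threshold (a fraction of heart rate reserve)
    earns points, so any activity type can count."""
    reserve = max_hr - resting_hr  # heart rate reserve
    points = 0.0
    for hr in minute_hr:
        intensity = (hr - resting_hr) / reserve
        if intensity > threshold:
            points += intensity - threshold
    return points

class RollingScore:
    """Sums daily points over a rolling seven-day window, mirroring the
    seven-day period PAI uses to encourage consistent behavior."""
    def __init__(self, days=7):
        self.window = deque(maxlen=days)

    def add_day(self, points):
        self.window.append(points)
        return sum(self.window)

# Example: 30 minutes at ~140 bpm then 30 easy minutes, for a person
# with resting HR 60 and max HR 180 (reserve = 120).
tracker = RollingScore()
day = [140] * 30 + [75] * 30
print(f"rolling score: {tracker.add_day(daily_points(day, 60, 180)):.1f}")
```
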
See also: Intersection of Tech and Holistic Health

Hear more about the science behind PAI from inventor Ulrik Wisløff, professor at NTNU and head of the Cardiac Exercise Research Group, in this short interview. To find out more, contact Ross Campbell.

Why Not to Make Opening Statements

Typically, opening statements are so inflammatory that a meeting aimed at resolution starts with animosity. Sometimes, one side walks out.

Times have changed. In the past, mediators would open a mediation by asking for opening statements from lawyers for each party. The problem, though, was that these were typically so inflammatory that a meeting that was supposed to be about resolution started with animosity. Sometimes, one side walked out right then, before the real mediation even started. That’s why I have never invited opening statements at the start of a mediation.

Lawyers no longer want opening statements, either. I have even had lawyers ask that there be no opening joint session with all parties present. Rather, they wanted to work with me only in caucus, one side at a time meeting with the mediator and every communication kept confidential. The lawyers wanted to avoid the hostility that previously permeated the parties’ dealings.

Unless there is strong objection, I start mediations in a joint session. I introduce myself and go over logistics: important stuff such as where the bathrooms are and how we will handle meal breaks.

See also: How Mediation Should Progress

I also assure everyone that nothing bad can happen. The parties control the outcome, and there can be no result they did not agree to. Everything that happens in mediation is confidential and cannot be used against anyone in a different civil forum. To emphasize that rule, every person present signs a confidentiality agreement while we are still in the opening joint session. Then we typically break up into caucus. The only person who has made an opening statement is me, the mediator.

Teddy Snyder

Teddy Snyder mediates workers' compensation cases throughout California through WCMediator.com. An attorney since 1977, she has concentrated on claim settlement for more than 19 years. Her motto is, "Stop fooling around and just settle the case."

Second Step to a New, Successful Program

The right approach to collecting and analyzing market data will help avoid the potential pitfalls in launching an insurance program.

Editor’s Note: This is the third in a series of posts in which CJ Lotter, a 15-year industry veteran, shares lessons learned in the form of guidance to MGAs on the steps required to build a successful program. The first two articles are here and here. 

In our last post, we tackled the first key ingredient required for program creation: distress. In a perfect world, insurers would spot distress, respond with a product and sell it to huge success. This is not a perfect world. It takes time and diligence to carve out a new program. In this post, the third of our series, we’ll explain how to collect and analyze the data that will help you avoid the typical pitfalls and create a successful and profitable program. 

Carving Out a Profitable Program 

If a program should target a distressed or underserved class, how do you find one? You could conduct market research to identify an underserved industry and hire the underwriting expertise to go after it. But a better approach would be to start with the expertise you already have in-house. 

Underwriters with experience in a specific risk class are your best source of new program ideas. For example, let’s say you have been writing general commercial auto and transportation business for a long time. Your underwriters have gained experience over the years on the nuances of this market. Your book analysis shows that you write a high number of tow truck operators quite successfully. Additionally, one of your underwriters is intimately familiar with tow truck businesses and can tell a good risk from a bad one. With some quick research, you find there are a limited number of carriers in this market, with limited coverages. You also know, from our last post, that tow truck operations check the box for a distressed class.

 You now believe you have enough of an underwriting advantage to offer a specialized program for this market, but you need to validate the opportunity with good research. 

See also: The Evil Genius of a Wellness Program   

Gathering Data 

Good data is the backbone of a good program, and no expense should be spared to find the most accurate version of the truth. The more data collected in the early stages, the more likely you are to have a successful program or avoid a bad one. In our example, you want as much data on tow truck operators as possible. That means raw numbers and reports on the industry. 

Work with companies like Dun & Bradstreet to source the market data you need. Drill down to at least four digits on SIC codes to find a target class as specific as possible. Filter and sort your findings by the attributes that matter most to your company. For example, if you are targeting large tow truck companies in New York, sort by number of employees, revenue and location. 
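
As a sketch of what that filtering might look like in practice (the file name, column names and cutoffs below are hypothetical; SIC 7549 is the four-digit code covering automotive services such as towing):

```python
import pandas as pd

# A minimal sketch of narrowing a purchased market-data extract (e.g.,
# from Dun & Bradstreet) down to a target class. Column names and the
# CSV file are placeholders for whatever your data vendor supplies.
df = pd.read_csv("dnb_extract.csv", dtype={"sic": str})

target = df[
    (df["sic"].str.startswith("7549"))   # drill down to a 4-digit SIC code
    & (df["state"] == "NY")              # attributes that matter to you:
    & (df["employees"] >= 20)            # large operators...
    & (df["revenue"] >= 2_000_000)       # ...in New York
].sort_values(["revenue", "employees"], ascending=False)

print(f"target universe: {len(target)} companies")
print(target[["name", "city", "employees", "revenue"]].head(10))
```
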

Preparing the Data 

Your initial data should provide an accurate sizing of the market for your potential program. Now you can pursue some qualitative and quantitative research to further evaluate the opportunity. Here are three to consider:

  1. Map the Data. Quadrant analysis is one of the most effective ways to visually analyze a diverse set of data. Map the companies from your target list onto a two-by-two grid to reveal patterns that will help narrow your focus. In our example, you may start with a grid that measures company revenue on one axis and claims on the other. You may also want to place the companies on a U.S. map to visualize where your target market is most concentrated.
  2. Form a Focus Group. From your list of target tow truck companies, build a focus group to validate the program opportunity. Call on 20 or so companies and ask them to discuss their needs and relevant details about their current insurance coverage. Supplement your focus groups with surveys to your full list. Keep survey questions to a minimum to maximize response rate, focusing on the most critical information, such as policy size.
  3. Narrow Your Target. Whittle your data down to a specific target niche. Say there are 10,000 potential customers in the tow truck market. Your research shows the average commercial auto policy is $10,000. If your goal is $1 million in commercial auto in the first year, you will need to write 100 policies. That’s 1% of your target universe. Is that feasible? If your sold-to-submitted ratio is 1:3, you will need to engage with 300 businesses. Do you have a marketing program that will generate that kind of volume? (The short script after this list makes the arithmetic explicit.)
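
Here is that feasibility arithmetic as a minimal script, using the hypothetical figures from the example:

```python
# A quick feasibility check for the target-narrowing arithmetic in step 3.
market_size = 10_000          # potential tow-truck customers
avg_policy = 10_000           # average commercial auto policy ($)
premium_goal = 1_000_000      # first-year premium goal ($)
sold_to_submitted = 1 / 3     # one policy sold per three submissions

policies_needed = premium_goal / avg_policy             # 100 policies
market_share = policies_needed / market_size            # 1% of the universe
prospects_needed = policies_needed / sold_to_submitted  # 300 businesses

print(f"policies needed:  {policies_needed:.0f}")
print(f"market share:     {market_share:.1%}")
print(f"prospects needed: {prospects_needed:.0f}")
```
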

Long-Term Planning 

Once you determine your initial opportunity, forecast the first three to five years. Make sure your chosen market is large enough to sustain rapid growth in the early years. The rule of thumb is to double your premiums in the first three years and drive to $10 million in premium as soon as possible. Just one limits loss can sink your program if you don’t have enough critical mass. 
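
As a quick back-of-the-envelope check on that rule of thumb (starting from the hypothetical $1 million first-year goal in the running example):

```python
# Growth implied by the rule of thumb above: double premium every
# three years, and see how long $1M takes to reach $10M at that pace.
start = 1_000_000
double_in = 3                        # years to double
cagr = 2 ** (1 / double_in) - 1      # ~26% annual growth

premium, year = start, 0
while premium < 10_000_000:
    premium *= 1 + cagr
    year += 1

print(f"implied growth rate: {cagr:.0%} per year")
print(f"years to $10M at that pace: {year}")   # about 10 years
```

Doubling every three years alone takes about a decade to reach $10 million from a $1 million start, which is why driving to critical mass "as soon as possible" implies an even faster early ramp.
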

See also: Innovation: ‘Where Do We Start?’   

Tying It All Together 

Thorough market research on the minute details is critical to evaluating a new program’s viability. Gut feelings are nice, but numbers give you real proof. Invest in research and analysis, and you will greatly improve your chances of a successful program. 

Excerpted with permission from Instec. A complete collection of Instec’s insurance industry insights can be found here.


CJ Lotter

CJ Lotter is the director of engagement management at Instec. He spent nine years as chief research and business development officer at the U.S. programs division of Willis Towers Watson.

Understanding the Big Picture in Work Comp

Trends over the past few years have been pointing to declining rates in most states, but the retention market is a completely different animal.

In workers’ compensation, the trends over the past few years have been pointing to declining rates in most states. At the 2019 NCCI Annual Issues Symposium (NCCI-AIS), NCCI indicated the 2018 industry private carrier calendar year combined ratio was a record low of 83%. I heard comments from many carriers attending the NCCI-AIS that they were very surprised by this figure, as it did not reflect what they were seeing on their book of business.

To fully understand the workers’ compensation marketplace, it is very important to understand what information is included in NCCI and independent bureau analyses, in addition to the different ways they look at data. It is also important to understand the drivers that ultimately affect the costs of workers’ compensation.

The calendar year combined ratio is not necessarily the most reliable or accurate measure of rate adequacy or the profitability of a book of business. Instead, the calendar year combined ratio is essentially an accounting measure that may be materially affected by things like carrier reserve strengthening or releasing of reserves for all prior accident years. A carrier could be writing unprofitable business yet still show a calendar year combined ratio below 100% if it is releasing prior-year reserves. A better measure to understand industry profitability is the accident year combined ratio. For 2018, NCCI indicated that this figure for private carriers was 89%.

It is also important to understand what bureau data may NOT include. In general, it does not include any data from self-insured employers. That exclusion omits most data from municipalities, states and school districts. It also misses a significant amount of data from other industries, such as higher education, retail and healthcare. Bureau data may also exclude information from deductible policies (read those footnotes). It is estimated that the “retention” marketplace, defined as employers that retain risk through self-insurance or high deductibles, covers close to half of the payroll in the U.S. If the database does not include information from the retention market, it is missing a very big piece of the overall picture.

You also need to check to see if the data set includes just “private carrier” information. If it does, it is likely excluding data from state funds, which tend to operate at much higher combined ratios. Also, keep in mind that there is no single source for workers’ compensation industry data. There are 15 states that have independent bureaus or are monopolistic. These states are not included in NCCI’s analysis. Three independent bureau states (CA, NY and WI) have more workers’ compensation payroll than the combined NCCI states.

To further illustrate this point, the National Association of Insurance Commissioners (NAIC) indicated that the 2018 accident year combined ratio was 97%. In theory, the NAIC data set includes information from the bureaus around the nation, so it is likely closer to the actual industry figure in the guaranteed cost marketplace for private carriers. This explains why many carriers may not fully embrace the 83% combined ratio figure that was cited at the NCCI-AIS conference. NCCI data is accurate for what NCCI analyzes. But, because NCCI only sees a piece of the entire picture, its data may not be a true reflection of what is really going on in the entire workers’ compensation landscape, and it may not reflect what individual carriers are seeing on their book of business. It is a piece of the puzzle, but not the complete picture.
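
To make the calendar year vs. accident year distinction above concrete, here is a stylized example with invented figures showing how a prior-year reserve release can push a calendar year combined ratio below 100% even when the current accident year is unprofitable:

```python
# Stylized example (invented numbers) of a reserve release flattering
# the calendar year combined ratio for a book that is actually
# unprofitable on the current accident year.
premium = 100.0
current_ay_losses = 95.0     # this accident year's ultimate losses
expenses = 15.0
reserve_release = 20.0       # prior-year reserves released this year

accident_year_cr = (current_ay_losses + expenses) / premium
calendar_year_cr = (current_ay_losses - reserve_release + expenses) / premium

print(f"accident year combined ratio: {accident_year_cr:.0%}")  # 110% -> unprofitable
print(f"calendar year combined ratio: {calendar_year_cr:.0%}")  # 90% -> looks fine
```
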
See also: The State of Workers’ Compensation

According to data reported at the 2019 NCCI-AIS, over the last 20 years, the cumulative change in indemnity claim cost severity was 100%. This was about 20% higher than wage inflation. The cumulative change in medical lost time claim cost severity was 150%, which was 89% higher than medical inflation. During the same time period, carriers’ loss adjustment expenses (LAE) also increased steadily. LAE includes the costs of claims handling, including payroll, benefits and facility costs, as well as claim-specific expenses such as litigation costs. Data from the other bureaus shows similar trends, although California claim costs did drop after some significant reform legislation.

Given the upward trend in costs over the last 20 years, why have we seen a decrease in rates the last few years? The answer is simple: frequency. NCCI data shows that, during the last 20 years, the average annual decrease in frequency was 3.9%. That is a significant decrease in the number of claims, due to factors such as automation of certain tasks and an increased emphasis on safety and loss prevention. During the last few years, the decreases in frequency more than offset the increases in the average workers’ compensation claim costs, leading to declining rates in the guaranteed cost marketplace.

The impact of frequency is a very important distinction between the performance of the guaranteed cost market and the retention marketplace. Thousands of small employers in the guaranteed cost market will have no claims. However, in the retention market, all large employers will have claims. Ultimately, it is claims severity (costs), not frequency, that determines the rates and profitability of the retention marketplace.

There has been very little study of the retention marketplace, especially of the larger claims, as those cases tend to be outside the analysis of the bureaus. In September, the New York Compensation Insurance Rating Bureau (NYCIRB) published a study on loss development patterns for claims with incurred losses over $250,000. According to this study, “Large claims can take several years to emerge above the $250,000 threshold. Typically, only a small share of large claims are recognized as such at first report, and that share will grow considerably over the subsequent three or four reports.” The NYCIRB study noted that large claims only accounted for 4% of the claim count, but over 50% of the ultimate claim incurred losses. The study also illustrated how these larger claims tend to develop over time.

The guaranteed cost industry standard is to use seven to 10 years of data to determine an experience rating. Thus, those carriers generally stop looking at loss data past 10 years post-accident. In the retention marketplace, things are very different. According to one large national retention market insurance company, at 10 years post-accident, only 70% of the claims that will ultimately breach the retention will have been reported to the carrier. This is because the most severe catastrophic claims that will exceed the retention are reported quickly, usually in the first 12 months. But the majority of remaining claims that will eventually breach the deductible/self-insured retention are not catastrophic injury claims at all, but instead are slow-developing claims that take years to reach required reporting thresholds. Because of this slow development of retention claims, at 10 years post-accident, actual claim case incurred is approximately 40% of the expected ultimate claim costs.
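
Those two figures imply a striking amount of unrecognized development, as this quick arithmetic (using only the percentages quoted above) shows:

```python
# Retention-tail arithmetic from the figures quoted above: at 10 years
# post-accident, ~70% of retention-breaching claims are reported and
# case incurred is ~40% of expected ultimate cost.
reported_at_10y = 0.70
incurred_at_10y = 0.40

development_factor = 1 / incurred_at_10y   # incurred must still grow 2.5x
unreported_share = 1 - reported_at_10y     # 30% of breaching claims unseen

print(f"implied development factor at 10 years: {development_factor:.1f}x")
print(f"breaching claims still unreported:      {unreported_share:.0%}")
```
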
Because of this slow development of retention claims, at 10 years post-accident, actual claim case incurred is approximately 40% of the expected ultimate claim costs. So, when the first-dollar marketplace stops looking at the data, the retention marketplace is still actively seeing new claims, and significant additional incurred development is still expected.

As an example, consider a 30-year-old claim involving a now 62-year-old worker that recently required a $1 million incurred increase. The claim had been reserved appropriately based on then-known information, but the exposure worsened: the injured worker’s condition now requires 24/7 attendant care, and 24/7 institutional attendant care can run $300,000 or more a year. The bureaus and the guaranteed cost marketplace never see development like that, because it occurs long after they stop monitoring such claims. This is only one example of the extremely long claims tail in the retention marketplace. Because of this long tail, carriers in the retention market are more exposed to rising claim costs, as they handle and pay such long-duration claims for 60 years or more.

There has been much publicity around the “shock losses” being seen in the general liability, auto and property marketplaces, with carriers seeing claims creep higher than ever. Factors such as excessive jury awards and runaway wildfires are forcing carriers to redefine their worst-case exposures in these lines. The significant cost increases currently being seen in liability and property coverage are also being seen on catastrophic workers’ compensation injuries. Although catastrophic injury claims are only a tiny percentage of the total claim count, they represent a significant percentage of total workers’ compensation claim costs. Catastrophic injuries include spinal cord injuries, brain injuries, severe burns, major amputations and other severe traumatic injuries.

There are several reasons for these rapidly increasing costs. First, unlike with group health insurance or Medicare, there is no policy limit and there are no excluded treatments in workers’ compensation. The carrier is responsible for any treatment deemed reasonable to “cure or relieve” the injury, without limitation. On catastrophic workers’ compensation injuries, it is common for the carrier to pay for things like attendant care, prosthetics, home and auto modifications, skin grafts, new housing, transportation and even experimental treatments.

Standards of care for seriously injured individuals are constantly evolving. What was the norm five to 10 years ago is not the standard today, and in five years the standard will be different again. Think of all the medical innovation in the news regarding spinal cord injury recovery; the medical technology is evolving at a pace never seen before.

Consider Christopher Reeve, the actor who suffered a spinal cord injury in 1995 that left him a quadriplegic. He was 43 years old at the time of the accident. Reeve received the best care money could buy from experts around the world, yet he lived less than 10 years after the accident. Fifteen years after his death, medical science has advanced to the point that a quadriplegic can live a near-normal life expectancy, because physicians can now prevent the complications that historically shortened lifespans. The sketch below puts rough numbers on what that added longevity means for claim costs.
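Here is a back-of-the-envelope sketch of that effect. The $300,000 annual attendant care figure comes from the example above; the survival assumptions are hypothetical round numbers, not actuarial table values, and no discounting or medical inflation is applied.

```python
# Back-of-the-envelope: how longer survival drives catastrophic claim costs.
# The $300,000/year attendant care figure is cited in the article; the
# survival assumptions are hypothetical round numbers, not actuarial values.

annual_attendant_care = 300_000  # 24/7 institutional attendant care, per year

# Hypothetical survival scenarios for a worker injured at age 43.
scenarios = {
    "survives 9 more years (Reeve-era outcome)": 9,
    "survives to age 78 (near-normal life expectancy)": 35,
}

for label, years in scenarios.items():
    lifetime_cost = annual_attendant_care * years
    print(f"{label}: ${lifetime_cost:,.0f}")

# Output:
#   survives 9 more years (Reeve-era outcome): $2,700,000
#   survives to age 78 (near-normal life expectancy): $10,500,000
```

Even before discounting or medical inflation, extending survival from under a decade to a near-normal lifespan more than triples the attendant care exposure on a single claim.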
Accident survivability is another factor affecting the increasing costs of catastrophic injury cases. Due to advances in emergency medicine, both at the scene of accidents and at Level 1 trauma centers, many patients who once would have died shortly after their injuries now live. According to the American College of Surgeons, from 2004 to 2016 the fatality rate for the most severe traumas declined by over 18%. Every one of those survivals likely results in millions of dollars in medical care. For example, I saw a severe burn claim that 10 years ago would likely have resulted in death within days. That person survived for three months and, during that time, received over $10 million in medical treatment.

See also: 25 Axioms Of Medical Care In The Workers Compensation System

These rapid advances in treatment for catastrophic injuries are saving lives and significantly increasing the function and life expectancies of seriously injured patients. But they have also produced costs the workers’ compensation industry has never seen before. When I started handling claims 29 years ago, $5 million individual claims were rare. Today, the industry has seen numerous individual claims with incurred exposures over $5 million, and losses in excess of $10 million and even higher are becoming more frequent. These claims are likely to become even more costly as increasingly expensive medical advances arrive.

To understand the big picture in workers’ compensation, take a close look at the data you are relying on, and pay careful attention to what that data includes and what it does not. It is also important to distinguish between the guaranteed cost market and the retention market. Because the retention market has a very different development tail, its rate trends differ sharply from those in the guaranteed cost market. Claim frequency trends in the guaranteed cost market are fairly predictable and significantly influence rates. In the retention marketplace, however, rates are driven by severity, which is reaching levels never seen before in a world of rapid medical advances.