
Growing Case for Parametric Coverage

Sadly, the insurance-focused news outlets are starting to overflow with references to who is suing whom over certain types of coverage related to the COVID-19 pandemic. There is a growing regulatory and legislative outcry for the insurance industry to pay out in instances where there is no specified coverage or where coverage is actually excluded. Neither business nor personal lines customers fully understand where they are (and are not) covered. It is a pretty dismal picture, and it is going to take a long time to sort all this out. In the meantime, a growing trend provides a glimmer of hope in all this chaos – parametric insurance.

Parametric insurance covers a specific event and triggers a claim payment based on metrics from a recognized source, such as the Richter scale for earthquakes or the number of hours a plane is delayed. While parametric insurance isn’t new – it has been available in emerging nations over the years – usage has been limited and sporadic. During 2019, there were some launches of more mainstream products, such as Swiss Re’s Quake Assist product and Sompo’s flood product. This month, however, there have been at least four notable launches or expansions (a simple sketch of the trigger mechanic follows the list):

  • AXA Climate – AXA partnered with Dutch satellite technology firm VanderSat to derive triggers linked to soil moisture levels, enabling drought-related parametric insurance. The same soil-reading technology can also detect excess moisture, triggering payment in either direction.
  • Global Parametrics/Arbol – Global Parametrics, a parametric and index-based disaster risk transfer company, teamed up with Arbol, a technology-driven marketplace that uses blockchain and smart contracts to provide weather risk insurance coverage to smallholder coffee farmers in Costa Rica.
  • Parsyl – Parsyl Insurance launched a suite of connected cargo insurance solutions for perishable goods, called ColdCover. Parsyl’s quality-monitoring and risk management platform leverages smart sensors and data analytics to manage the supply chain as well as loss control. The featured product within the company’s new suite is called ColdCover Parametric, which includes customized quality triggers and payout levels.
  • Understory – Understory initially launched its Hail Safe product for auto dealerships this past November but rolled it out to a significant number of additional states in April. The product coverage is triggered through the use of Understory’s proprietary hail sensor. Understory partnered with international weather risk manager MSI GuaranteedWeather to bring the product to market.
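Stripped to their core, all four products work the same way: an independently measured index crosses a contractually agreed threshold, and a predefined amount is paid without a traditional loss adjustment. The Python sketch below is a minimal illustration of that mechanic; the index, thresholds and payout amounts are hypothetical and are not drawn from any of the products above.

# Minimal sketch of a parametric trigger: the payout is determined solely by a
# measured index crossing predefined thresholds, not by an adjusted loss amount.
# The index and payout schedule below are hypothetical illustrations.

from typing import List, Tuple

# Payout schedule keyed by minimum index value (here: peak hail size in mm) -> payout in dollars.
HAIL_SIZE_SCHEDULE: List[Tuple[float, float]] = [
    (25.0, 10_000.0),   # hail >= 25 mm  -> $10,000
    (40.0, 50_000.0),   # hail >= 40 mm  -> $50,000
    (60.0, 150_000.0),  # hail >= 60 mm  -> $150,000
]

def parametric_payout(measured_index: float,
                      schedule: List[Tuple[float, float]]) -> float:
    """Return the payout owed for a measured index value.

    The largest threshold the measurement meets or exceeds determines the
    payout; below the lowest threshold, no payment is triggered.
    """
    payout = 0.0
    for threshold, amount in sorted(schedule):
        if measured_index >= threshold:
            payout = amount
    return payout

# A 45 mm reading from the agreed data source triggers the $50,000 band.
print(parametric_payout(45.0, HAIL_SIZE_SCHEDULE))  # 50000.0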

These examples are stated simply for brevity. But the scenarios are not that simple. For example, the Global Parametrics and Arbol example also includes an ecosystem of related parties in the transaction. And Parsyl provides services and an extensive risk management system so that cargo and fleet owners can manage exposures. From an education perspective, it is worth getting further details on all four scenarios. However, for purposes of this blog, the particularly hopeful note is that all this has happened in one month – the cycle of innovation and response is speeding up.

See also: Keeping Businesses Going in a Crisis  

Insurers and technology providers are coming together to find opportunities to create products that have specificity in terms of coverage and payment amounts. This is a very good thing! Insurers need to continue to seek opportunities to innovate in this area. Clearly, not all product lines are appropriate for parametric policies. However, in more instances than not, bringing sensors, aerial imagery, weather data and science to insurance products across all product segments can only help create transparency both in coverage creation and in loss settlement. This needs to be a goal for all insurers.

To Be or Not to Be Insurtech

It is probably a bit presumptuous to liken the insurtech startup movement to Hamlet’s famous “To be or not to be” soliloquy – it is, after all, one of the best-known passages in Shakespeare. However, the similarity is in the questions asked, and such questions have probably been asked prior to many defining moments. And just as Hamlet pondered many questions, many questions revolve around the state of the insurtech movement. At this juncture, some five years into this movement, the one question that should by now have gone by the board is – Is it real? You can debate whether we are at the beginning of the insurtech cycle or at the end. However, there are several strong points in favor of the conclusion that it is real.

See also: Convergence in Action in Insurtech  

SMA has been following the insurtech startup trends since 2013. Currently, we track approximately 1,200 insurtechs. It is definitely a fluid number. Some startups go out of business, and others come in to fill the void at a regular pace. In the 2013-2015 timeframe, the insurtech startup landscape was a tsunami of activity – it was difficult to get one’s arms around what was happening. In the latter half of 2017, some strong realities emerged. SMA’s recently released research findings have revealed several major insurtech trends or themes that are specific to insurance and have meaningful implications for the industry. In response to the “is this real” question, three of the 10 themes anchor the insurtech movement firmly in reality.

  • Insurtech has spread to all tiers and lines of business – Originally, most of the activity was in personal lines and health. Now, of the P&C contingent, which SMA data indicates is 39% of all the activity, a little over half is personal lines; 35% is commercial lines; 13% is workers’ comp. Historically, technology providers have targeted particular tiers for their sales efforts. The startup community targets insurance business problems without a specific tier focus. What this means is that insurers of all sizes are able to adopt insurtech-provided technology. SMA partnering data shows that there are insurtechs with customers ranging from top 10 insurers down to single-state insurers. The bottom line: The fact that insurtech is not focused on the top echelon of global players but rather on business problems across the insurance ecosystem lends itself to the “it’s real” theme.
  • Live implementations are increasing – Not surprisingly, in the beginning of the startup movement, most of the activity was around fundraising and proofs of concept. In 2017, and continuing at an accelerating pace in 2018, insurer “go lives” are happening. Some insurtechs have 10, 12 or more insurer logos on their websites. These are not investor listings; they are the names of insurers that are rolling out capabilities in the marketplace. In particular, drone usage, smart home/connected property and connected vehicle initiatives are common and growing. The “it’s real” indicator is that insurers are not going to roll out technology that affects their customers just for the fun of it – customers are not guinea pigs. Insurers are seeing the value in insurtech offerings and are executing.
  • Insurtechs are partnering – While there is nothing wrong with a technology provider staying in its space, a long-standing trend within the insurance industry has been partnering for greater value. This has not escaped the attention of a number of insurtechs. For example, Bold Penguin and Ask Kodiak have partnered, as have Elafris and Hippo, and Betterview and Understory. Mature technology providers also see the value of startup partnering; for example, Willis Towers Watson and Roost, and Verisk Analytics and Driveway. Majesco partners with a network of insurtechs. The “it’s real” factor is that insurtechs are not simply attempting to see what they can do just for today – but, rather, what they can do for the long haul, to become strategic contributors to the insurers they work with.

While there are still questions about the insurtech movement, one of them should not be – Is it real? Business value is being generated by many startups – and no insurer is going to walk away from that. New channels and service opportunities are emerging that are generating interest and execution. New products are sprouting up at a regular pace. Not every startup and every idea is going to be a winner, but many will be. And some already are. Bottom line? Both Hamlet and Shakespeare would be proud of the insurance industry for seeing the possibilities and not just the questions.

See also: 4 Key Qualities to Leverage Insurtech  

Insurtech Is Ignoring 2/3 of Opportunity

Fifty-six cents of every premium dollar is indemnity (loss costs). A further 12 cents is needed to assess, value and pay those losses. Given that two-thirds of the insurance industry economics are tied up in losses, it would be logical that much of the innovation we are now witnessing should focus on driving down loss costs and loss adjustment expense — as opposed to the apparent insurtech focus on distribution (and, to a lesser extent, underwriting).
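Making the arithmetic behind the “two-thirds” figure explicit:

$0.56 (indemnity) + $0.12 (loss assessment and adjustment) = $0.68 of every premium dollar, or roughly two-thirds.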

This is beginning to happen.

What do you have to believe for loss costs and adjustment expenses to be a prime area of innovation and disruption? You have to believe that the process (and, thus, the costs) to assess, value and pay losses is inefficient. You have to believe that you can eliminate the portion of loss costs associated with fraud (by some estimates, as much as 20%). You have to believe that there is a correct amount for a loss or injury that is lower than the outcomes achieved today, particularly once a legal process is started. You have to believe that economic improvements can happen even as customer experience improves. And you have to believe that loss costs and adjustment expenses can decline in a world in which sensor technology starts to dramatically reduce frequency of losses and manufacturers embed insurance and maintenance into their “smart” products.

See also: ‘Digital’ Needs a Personal Touch  

Having spent years as an operating executive in the industry, I happen to believe all of the above, and I am excited by the claims innovation that is just now becoming visible and pulling all of the potential levers.

We are seeing an impact on nearly all aspects of the claims resolution value chain. Take a low-complexity property loss. Technologies such as webchat, video calls, online claims reporting and customer picture upload are all changing the customer experience. While these technologies aren’t having a huge impact on loss adjustment or loss costs, they are having a profound impact on how claims are subsequently processed and handled.

One such example, as many have heard, is how Lemonade uses its claims bot for intake, triage and then claims handling for renters insurance. Lemonade’s self-reported average claim is roughly $1,200 (low value), and only 27% of claims are handled in the moment by the bot, with the rest passed to a human for subsequent assessment. Still, Lemonade certainly provides a window to the future. Lemonade is clearly attacking the loss-adjustment expense for those claims where it believes an actual loss has occurred and for which it can quickly determine the replacement value.

More broadly, Lemonade is a window into how many are starting to use AI, machine learning and advanced analytics in claims in the First Notice of Loss (FNOL)/triage process — determining complexity, assessing fraud, determining potential for subrogation and guiding the customer to the most efficient and effective treatment.
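None of these companies publishes its models, but the triage step being automated can be sketched at a high level: score an incoming claim for complexity, fraud indicators and subrogation potential, then route it to straight-through processing, a desk adjuster or a specialist. The Python below is a hypothetical illustration of that routing logic; the fields, scores and thresholds are invented and do not represent any vendor’s implementation.

# Hypothetical FNOL triage sketch: score an incoming claim and route it.
# The fields, scores and thresholds are illustrative, not any vendor's model.

from dataclasses import dataclass

@dataclass
class Fnol:
    loss_type: str              # e.g. "water", "hail", "theft"
    estimated_value: float      # claimant's estimate in dollars
    fraud_score: float          # 0.0 - 1.0, from an upstream analytics model
    third_party_involved: bool  # possible subrogation target

def triage(claim: Fnol) -> str:
    """Route a first notice of loss to a handling path."""
    if claim.fraud_score > 0.7:
        return "special investigation"      # questionable claim, different treatment
    if claim.third_party_involved:
        return "subrogation review"         # recovery potential
    if claim.estimated_value < 2_500 and claim.fraud_score < 0.2:
        return "straight-through payment"   # expedite valid, low-value claims
    return "desk adjuster"                  # everything else gets a human

print(triage(Fnol("water", 1_200.0, 0.05, False)))  # straight-through payment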

While Lemonade is the example many talk about, AI companies such as infinilytics and Carpe Data are delivering solutions focused specifically on identifying valid claims that can be expedited and on identifying those claims that are more questionable and require a different type of treatment. These types of solutions are beginning to deliver improvement in both property and casualty. New data service providers — such as Understory, which provides single-location precision weather reports — can identify a potential claim before the insurer is even notified, which can reduce loss costs through early intervention or provide reference data for potentially fraudulent claims.

Equally interesting is the amount of innovation and development appearing in the core loss-adjusting process. Historically, a property claim — regardless of complexity — would be assessed via a field adjuster who evaluates and estimates the loss. Deploying technical people in the field can be very effective, but it is obviously costly, and there is some variability in quality.

In a very short time, very interesting new models have emerged that reimagine the way insurers handle claims.

Snapsheet provides an outsourced, white-labeled solution for its insurance company clients. The service enables a claimant to take pictures of physical damage, which is then “desk adjusted” to make a final determination of the value of the claim, followed by a rapid and efficient payment.

WeGoLook, majority-owned by claims services company Crawford & Co., uses a sophisticated crowd-sourced and mobile technology solution to respond rapidly to loss events with a “Looker” (agent) who can perform a guided process of field investigation and enable a downstream desk-adjusting process, as well.

Tractable provides artificial intelligence that takes images of damaged autos and estimates value (effectively a step toward automatic adjudication). Tractable — like Snapsheet and WeGoLook — has made great strides. Aegis, a European motor insurer, is rolling out Tractable following a successful pilot. In each of these instances, the process is much improved for customers — whether through self-service for those who choose it (Snapsheet), rapid response to the event (WeGoLook) or dramatically reduced cycle time (Tractable). All provide material improvements in customer experience.

See also: Waves of Change in Digital Expectations  

Obviously, each of these models is attacking the loss adjustment expense — whether through a more consistently controlled process of adjusting at a desk, using AI to better assess parts replacement vs. repair or improving subrogation, among other potential levers.

Today, all of these solutions are rather independent of each other and generally address a low-complexity property loss (mostly in the auto segment), but the possible combination of these and other solutions (and how they are used depending on type and complexity of claims) could begin to amplify the impact of technology innovation in claims.

Industry’s Biggest Data Blind Spot

For the past 10 years, the insurance industry has been handcuffed by the weather data that’s been available to it – primarily satellite and radar. Although important, these tools leave insurers with a blind spot because they lack visibility into what is happening on the ground. Because of these shortcomings, insurance companies are facing unprecedented litigation and increases in premiums. To solve the problem, we must first review the current situation as well as what solutions have been proposed to resolve this data blind spot.

Why Satellite and Radar Aren’t Enough

While satellites and radar tell us a lot about the weather and are needed to forecast broad patterns, they leave large blind spots when it comes to gathering information about exactly what is happening on the ground. Current solutions only estimate what’s happening in the clouds and then predict an expected zone of impact, which can be very different from the actual zone of impact. As many know from experience, it is common for storms to have pockets of more intense damage, known as hyper-local storms.

See also: Why Exactly Does Big Data Matter?  

The Rise of the Storm-Chasing Contractor

In recent years, the industry has also been beleaguered by a new obstacle: the storm-chasing contractor. These companies target areas that have been hit by a storm, advertising on Craigslist and the like. They also exploit insurers’ blind spots by canvassing the area and convincing homeowners there was damage, regardless of whether damage actually occurred. This practice can leave the homeowner with hefty (and unnecessary) bills, hurt the entire industry and lead to higher litigation costs.

Attempts to Solve the Data Blind Spot

Many companies have proposed solutions that aim to solve the insurance industry’s data blind spot. Could a possible solution lie in building better algorithms using existing data? Realistically, if the only change is to the current models or algorithms running on existing data, there is no real improvement, because the data the algorithms use still has gaps. The algorithms will continue to produce flawed output and will be no better at producing actionable results. The answer must lie in a marked improvement in the foundational data.

If better data is required to solve this blind spot, one might think that a crowd-sourced data source would be the best alternative. On the surface, this may appear to be a good option because it collects millions of measurements that are otherwise unavailable. The reality is that big data is only valuable when you can build true value out of the entire data set; while cell phones provide millions of measurements, even after cleaning, the data remains too inaccurate for crowd-sourced weather data to serve as a reliable dataset.

The alternative crowd-sourced weather networks that use consumer weather stations to collect data also lead to huge problems in data quality. These weather stations lack any sort of placement control. They can be installed next to a tree, by air conditioning units or on the side of a house – all of which cause inaccurate readings that lead to more flawed output. And although these types of weather stations are able to collect data on rain and wind, none are able to collect data on hail – which causes millions of dollars in insurance claims each year.

The Case for an Empirical Weather Network

To resolve the insurance industry’s blind spot, the solution must contain highly accurate weather data that can be translated into actionable items. IoT has changed what is possible, and, with today’s technology, insurers should be able to know exactly where severe weather has occurred and the severity of damage at any given location. The answer lies in establishing a more cost-effective weather station, one that is controlled and not crowd-sourced. By establishing an extensive network of weather stations with controlled environments, the data accuracy can be improved tremendously. With improved data accuracy, algorithms can be reviewed and enhanced so insurers can garner actionable data to improve their storm response and recovery strategies.

Creating an extensive network of controlled weather stations is a major step toward fixing the insurance industry’s data blind spot, but there is one additional piece of data that is required. It is imperative that these weather stations measure everything, including one of the most problematic and costly weather events – hail. Without gathering hail data, the data gathered by the controlled weather stations would still be incomplete. No algorithm can make up for missing this entire category of data.

See also: 4 Benefits From Data Centralization

While technology has improved tremendously over the past 10 years, many insurers continue to use the traditional data that has always been available to them. Now is the time for insurers to embrace a new standard for weather data to gain insights that eliminate their blind spot, improve their business and provide better customer experiences.

Understory has deployed micro-networks of weather stations that produce the deep insights and accuracy that insurers need to be competitive today. Understory’s data tracks everything from rain to wind to temperature and even hail. Our weather stations go well beyond tracking the size of the hail; they also factor in hail momentum, impact angle and size distribution over a roof. This data powers actionable insights.
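As a purely hypothetical illustration of how readings like these might be rolled up into something actionable, the Python sketch below converts per-impact measurements (stone size, impact speed and angle) into an energy-based storm summary for a single location. This is not Understory’s actual algorithm; the physics is a simple kinetic-energy approximation and the field names are invented.

# Hypothetical roll-up of hail sensor readings into a simple damage indicator.
# This illustrates the kinds of inputs described above (stone size, momentum,
# impact angle); it is not Understory's actual algorithm.

import math
from typing import List, NamedTuple

class HailImpact(NamedTuple):
    diameter_mm: float   # stone size
    speed_m_s: float     # impact speed measured by the sensor
    angle_deg: float     # impact angle from vertical (0 = straight down)

ICE_DENSITY = 917.0  # kg/m^3

def impact_energy_joules(hit: HailImpact) -> float:
    """Kinetic energy delivered normal to the surface by one hailstone."""
    radius_m = hit.diameter_mm / 2_000.0
    mass_kg = ICE_DENSITY * (4.0 / 3.0) * math.pi * radius_m ** 3
    normal_speed = hit.speed_m_s * math.cos(math.radians(hit.angle_deg))
    return 0.5 * mass_kg * normal_speed ** 2

def storm_summary(hits: List[HailImpact]) -> dict:
    """Summarize a storm at one location: max stone size and total impact energy."""
    return {
        "max_diameter_mm": max(h.diameter_mm for h in hits),
        "total_energy_j": sum(impact_energy_joules(h) for h in hits),
        "impact_count": len(hits),
    }

storm = [HailImpact(25.0, 20.0, 15.0), HailImpact(40.0, 25.0, 30.0)]
print(storm_summary(storm))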

Quick Takes From Insuretech Connect

Last week, I was excited to attend the first Insuretech Connect conference, which brought together entrepreneurs, VCs and industry insiders to focus on the innovative (and some say disruptive) developments within the industry. I wanted to get a closer view of the emerging technology and begin to hear a clearer message about how these developments are connected with the core issues facing the industry, such as these (among others):

  • The industry in total very rarely delivers cost-of-capital returns.
  • The products are complex and structured in ways that make them not easily consumable by customers.
  • Carriers are averse to new risks, given the lack of credible loss information for pricing.
  • A third of P&C premium is absorbed in the cost of sales and delivery, an unsustainable figure.

With the event behind us, here are my top takeaways:

1. There are fantastic stories beginning to emerge about the engagement of millennials (notoriously uninterested in insurance products) that over time could be hugely instructive for the broader industry.

Both Trov and Lemonade are genuinely different, with an experience that is more akin to a social media exchange with your friends as opposed to the arduous image (and sometimes reality) of most insurance buying, servicing and claims interactions. Both appear to have genuinely rethought the product being delivered.

In the case of Lemonade, the company has removed the implicit contention between insurer and customer with an affinity-oriented dimension: Excess premiums not used to pay claims go to a charity of the customer’s choice. These factors alone (I will cover more below) fundamentally reposition the insurance provider in the mind of the consumer.

Trov is delivering on-demand, single-item, micro-duration coverage – a genuinely innovative product concept. The takeaway here is that true innovation in customer experience is unlikely if there isn’t innovation in the product. Trov also provides its users with an app that has real value to the consumer independent of the insurance cover — effectively, the app is a super-easy-to-use personal asset register.

The “value in use” delivered in this app is a launch point for an entirely different type of engagement. Metromile is doing the same thing with its free smart driving app, which helps you find where you parked your car, supports diagnostics and maintenance and assists with trip planning. The Metromile app has tremendous value to its users independent of the usage-based insurance Metromile provides.

So the real question for the industry is whether Lemonade and Trov are just great ingenuity to deliver renters and single-item coverage to a segment that is meaningfully under-penetrated and uninterested in insurance, or whether these fundamental innovations will be harnessed and applied by others not just elsewhere in personal lines but in commercial and specialty lines, as well.

2. Unsurprisingly, the conference was dominated by many who are endeavoring to attack the distribution part of the value chain by changing customer experience and the cost to deliver those experiences. Many of the entrepreneurs are borrowing pages from the countless other categories that have gone through dramatic changes in distribution (financial services, travel, etc.).

It is early days, but I look forward to following companies such as Embroker, which is legitimately trying to re-create the entire customer-broker experience (focused on the more complex middle-market commercial risks), with technology as a critical enabler.

One far narrower example is Terrene Labs, which is a really interesting play on big data that potentially flips the application-for-insurance process for commercial insurance on its head. Effectively, the company is developing technology that combs the public domain to create a near-completed (and far-higher-quality) insurance application based on only a handful of questions. I highlight this venture, led by the ex-CIO of Great American, because he is seeking to improve the customer experience in small commercial while simultaneously slashing the front-end agency cost of entering application data into carriers’ online systems.
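The mechanics described above (ask a handful of questions, then assemble the rest of the application from public sources) can be sketched as a simple enrichment pipeline. In the hypothetical Python below, the lookup functions are stand-ins for whatever registry and property data sources a real implementation would use, and none of the field names reflect Terrene Labs’ actual schema.

# Hypothetical sketch of pre-filling a small-commercial application from a few
# questions plus public-domain lookups. The lookup functions are stand-ins for
# whatever data sources a real implementation would use.

from typing import Dict

def lookup_business_registry(name: str, state: str) -> Dict[str, str]:
    """Stand-in for a secretary-of-state / business registry lookup."""
    return {"legal_name": name, "entity_type": "LLC", "years_in_business": "7"}

def lookup_property_data(address: str) -> Dict[str, str]:
    """Stand-in for public property / tax-assessor data."""
    return {"construction": "masonry", "year_built": "1998", "square_feet": "4200"}

def prefill_application(answers: Dict[str, str]) -> Dict[str, str]:
    """Build a near-complete application from a handful of applicant answers."""
    app = dict(answers)  # the few questions actually asked
    app.update(lookup_business_registry(answers["business_name"], answers["state"]))
    app.update(lookup_property_data(answers["address"]))
    return app

answers = {"business_name": "Main Street Bakery", "state": "OH",
           "address": "123 Main St, Columbus, OH"}
print(prefill_application(answers))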

I suspect that the much-anticipated launch of Attune, the initiative backed by Hamilton-Two Sigma-AIG, will feature this sort of change in experience. I anticipate that next year’s developments in distribution will be far more robust and measurable.

3. While there is an intensifying discussion about the Internet of Things (IoT) and the exponentially increasing data that can be accessed to evaluate risk — including sensor technology that can convert risk taking into a continuously monitored, pay-as-you-go model (even in liability classes) — most of this is futurist stuff. The exceptions are usage-based insurance (UBI) in auto, some modest developments in smart home and the increasingly smart machinery monitoring found in a variety of commercial applications.

Yet one company really stood out in its ambitions. The company, Understory, has been installing micro weather stations (wireless, solar-powered, etc.) to get a far more granular view of rain, hail, wind, etc. than the National Weather Service can provide. During a panel discussion, the CEO noted that the company can put 60 of these micro weather stations in a city for the cost of a single large radar system (around $200,000).
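Taking the CEO’s figure at face value, that works out to roughly $200,000 / 60 ≈ $3,300 per micro weather station, versus $200,000 for a single radar installation.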

It is difficult to cite the specific loss to the industry from straight-line wind and hail (it runs in the tens of billions of dollars in the U.S. alone each year), and hail loss is notoriously difficult to assess, given the sometimes long tail before damage is discovered and, in certain cases, the high fraud rate and the difficulty of empirically verifying whether a hailstorm that occurred during a specific period of insurance coverage caused the damage.

But the sort of innovation occurring at Understory was one of the few efforts focused on a core aspect where the risk takers can improve performance and meaningfully reduce loss costs. This is not to say that the many excellent developments around machine learning and predictive analytics applied to underwriting and claims are not similarly attacking these sorts of costs; it is just that Understory is unusual in offering a tangible quantum improvement in data that can drive improvement in loss costs.

Look out for the next wave of “Understories” and for more tangible results from the variety of vendors pushing the machine learning/big data angle for both claims and underwriting.

4. I finish with my “not so impressed” takeaway. The most obvious thing missing at the conference was a good economic understanding of the insurance industry by many of the entrepreneurs selling their wares. In some cases, including among panelists, they were flatly wrong in their assertions, and some showed little regard for the facts.

Even Daniel Schreiber, the CEO of Lemonade (whom I found to be thoroughly entertaining, insightful and articulate about many things, including behavioral economics), responded to a query from the interviewer/moderator about independent research suggesting that the pricing of Lemonade’s product is a fraction of competitors’. Schreiber suggested that the industry’s 25% cost for distribution (I interpreted this as total commission) and 40% total operating costs, compared with the “20% management fee Lemonade charges its customers,” are key contributors to the difference in costs.

Underlying Schreiber’s comments was an obvious point that the cost of today’s insurance product to the customer is far too high and that innovation has to drive down costs for the insurer and prices for the consumer. At least Schreiber took on the issue in a thoughtful way.

Unfortunately, though, the 25% and 40% numbers are simply wrong. I go back to the factual economics of our industry. The INDUSTRY IN TOTAL DOES NOT EARN ITS COST OF CAPITAL, so the industry in total is not getting paid for the risk it is taking. In 2015, 31% of premium (not 40%) went to sales and service. In personal lines, the numbers are far lower. As a reference point, Progressive’s total expense ratio is just under 20%, and Travelers’ homeowners expense ratio hovers around 28% (with a large part in commissions, given its retail distribution model).

I am not suggesting that the industry is not ripe for some disruption, but those who are seeking to disrupt (or even enable) it need to understand the macroeconomics and then follow the money (kind of what Understory is doing).

Back to Lemonade. I can imagine that the company has built its infrastructure in such a way that the investors will get an appropriate return from the 20% management fee. I can further imagine that the model may self-select a better class of renters than the wider population and that maybe the fundamental proposition reduces fraud-driven loss costs, so a far lower price could be justified. Yet only a few of those at the conference started with a good foundation of industry and value chain economics, an understanding of the unique regulatory and product attributes that will remain for the foreseeable future, and where and how underwriting and loss performance can be improved.

As these issues come into focus, I suspect that the innovations will begin to fulfill the expectations that are building in the insurtech space.