
Why Is Data on U.S. Property So Poor?

How a building is constructed and maintained and where it is located all have a massive impact on its potential to be damaged or destroyed. That knowledge is as old as insurance itself.

So why do so many underwriters still suffer from lack of decent data about the buildings they insure?

And when better data does get collected for U.S. properties, why does it seem to get lost as it crosses the Atlantic?

London is an important marketplace for insuring U.S. risks. It provides over 10% of the capacity for specialty risks — those that are hard, or impossible, to place in their home market through admitted carriers. Reinsurers of admitted carriers, insurers of homeowners and small businesses in the excess and surplus markets and facultative reinsurers of large corporate risks all need property data.

The emergence and growth of a new type of property insurer in the U.S., such as Hippo and Swyfft, has been driven by the expectation of access to excellent data. They are geared up to perform fast analyses. They believe they can make accurate assessments and offer cheaper premiums. The level of funding for ambitious startups shows that investors are prepared to write large checks, tolerate years of losses and wait patiently in the expectation that their companies will displace less agile incumbents. If this works, it's not just the traditional markets in the U.S. that will be under threat. The important backstop of the London market is also vulnerable. So what can established companies do to counter these new arrivals?

Neither too hot nor too cold

The challenge for any insurer is how to get the information it needs to accurately assess a risk, without scaring off the customer by asking too many questions. The new arrivals are bypassing the costly and often inaccurate approach of asking for data directly from their insureds, and instead are tapping into new sources of data. Some do this well, others less so. We’re already seeing this across many consumer applications. They lower the sales barrier by suggesting what you need, rather than asking you what you want. Netflix knows the films you like to watch, Amazon recommends the books you should read, and soon you’ll be told the insurance you need for your home.

Health insurers such as Vitality are dramatically improving the relationship with their clients, and reducing loss costs, by rewarding people for sharing their exercise habits. Property insurers that make well-informed, granular decisions on how and what they are underwriting will grow their book of business and do so profitably. Those that do not will be undercharging for riskier business. Not a viable long-term strategy.

Fixing the missing data problem would be a good place to start.

We recently brought together 28 people from London Market insurers to talk about the challenges they have with getting decent quality data from their U.S. counterparts. We were joined by a handful of the leading companies providing data and platforms to the U.S. and U.K. markets. Before the meeting, we’d conducted a brief survey to check in on the trends. A number of themes emerged, but the two questions we kept coming back to were: 1) Why is the data that is turning up in London so poor, and 2) what can be done about it?

This is not just a problem for London. If U.S. coverholders, carriers or brokers are unable to provide quality data to London, they will increasingly find their insurance and reinsurance getting more expensive, if they can get it at all. Regulators around the world are demanding higher standards of data collection. The shift toward insurers selling direct to consumer is gathering momentum. Those that add frictional costs and inefficiencies will be squeezed out. This is not new. Rapid systemic changes have been happening since the start of the industrial revolution. In 1830, the first passenger rail service in the world opened between Liverpool and Manchester in the northwest of England. Within three months, over half of the 26 stagecoaches operating on that route had gone out of business.

See also: Cognitive Computing: Taming Big Data  

Is the data improving?

Seventy percent of those surveyed believed that the data they are receiving from their U.S. partners has improved little, if at all, in the last five years. Yet the availability of information on properties had improved dramatically in the preceding 15 years. Why? Because of the widespread adoption of catastrophe models in that period. Models are created from large amounts of hazard and insurance loss data. Analyses of insured properties provide actionable insights and common views of risks beyond what can be achieved with conventional actuarial techniques. These analytics have become the currency of risk, shared across the market between insurers, brokers and reinsurers. The adoption of catastrophe models accelerated after Hurricane Andrew in 1992. Regulators and rating agencies demanded better ways to measure low-frequency, high-severity events. Insurers quickly realized that the models, and the reinsurers that used the models, penalized poor-quality data by charging higher prices.

By the turn of the century, information on street address and construction type, two of the most significant determinants of a building’s vulnerability to wind and shake, was being provided for both residential and commercial properties being insured for catastrophic perils in the U.S. and Europe. With just two major model vendors, RMS and AIR Worldwide, the industry only had to deal with two formats. Exchanging data by email, FTP transfer or CD became the norm.

Then little else changed for most of the 21st century. Information about a building's fire resistance still comes only from surveys, and then only for high-value buildings; it is usually buried deep in paper files. Valuation data on the cost of the rebuild, another major factor in determining the potential scale of loss and what is paid to the claimant, is left to the discretion of the insured. It's often inaccurate and biased toward low values.

If data and analytics are at the heart of insurtech, why does access to data appear to have stalled in the property market?

How does the quality of data compare?

We dug a bit deeper with our group to discover what types of problems they are seeing. In some locations, such as those close to the coast, information on construction has improved in the last decade, but elsewhere things are moving more slowly.

Because of the dominance of the two catastrophe modeling companies, data formats are acceptable for the standard, homogeneous property portfolios being reinsured. For non-admitted business entering the excess and surplus market, or for high-value, complex locations, there are still no widely adopted standards for insured properties coming into the London market, despite the efforts of industry bodies such as ACORD.

Data is still frequently re-keyed multiple times into different systems. Spreadsheets continue to be the preferred medium of exchange, and there is no consistency between coverholders. It is often more convenient for intermediaries to aggregate and simplify what may have once been detailed data as it moves between the multiple parties involved. At other times, agents simply don’t want to share their client’s information. Street addresses become zip codes, detailed construction descriptions default to simple descriptors such as “masonry.”

Such data chaos may be about to change. The huge inefficiency of multiple parties cleaning up and formatting the same data has been recognized for years. The London Market Group (LMG), a powerful, well-supported body representing Lloyd's and the London company market, has committed substantial funds to build a new Target Operating Model (TOM) for London. This year, the LMG commissioned London company Charles Taylor to provide a central service to standardize and centralize the cleaning up of the delegated authority data that moves across the market. Much of it is property data. Once the project is complete, around 60 Lloyd's managing agents, 250 brokers and over 3,500 global coverholders are expected to finally have access to data in a standard format. This should eliminate the problem of multiple companies doing the same tasks to clean and re-enter data, but it still does nothing to fill in the gaps where critical information is missing.

Valuation data is still the problem

Information on property rebuilding cost that comes into London is considered “terrible” by 25% of those we spoke to and “poor quality” by 50%.

Todd Rissel, the CEO of e2Value, was co-hosting our event. His company is the third-largest provider of valuation data in the U.S. Today, over 400 companies are using e2Value information to help their policy holders get accurate assessments of the replacement costs after a loss. Todd started the company 20 years ago, having begun his career as a building surveyor for Chubb.

The lack of quality valuation data coming into London doesn’t surprise Todd. He’s proud of his company’s 98% success in accurately predicting rebuilding costs, but only a few states, such as California, impose standards on the valuation methods that are being used. Even where high-quality information is available, the motivation may not be there to use it. People choose their property insurance mostly on price. It’s not unknown for some insurers to recommend the lowest replacement value, not the most accurate, to reduce the premium, and the discrepancy gets worse over time.

Have the losses of 2017 changed how data is being reported?

Major catastrophes have a habit of exposing the properties where data is of poor quality or wrong. Companies insuring such properties tend to suffer disproportionately higher losses. No companies failed after the storms and wildfires of 2017, but more than one senior industry executive has felt the heat for unexpectedly high losses.

Typically, after an event, the market “hardens” (rates get more expensive), and insurers and reinsurers are able to demand higher-quality data. 2017 saw the biggest insurance losses for a decade in the U.S. from storms and wildfire — but rates haven’t moved.

Insurers and reinsurers have little influence in improving the data they receive.

Over two-thirds of people felt that their coverholders, and in some cases insurers, don’t see the need to collect the necessary data. Even if they do understand the importance and value of the data, they are often unable to enter it into their underwriting systems and pipe it digitally direct to London. Straight-through processing, and the transfer of information from the agent’s desk to the underwriter in London with no manual intervention, is starting to happen, but only the largest or most enlightened coverholders are willing or able to integrate with the systems their carriers are using.

We were joined at our event by Jake Hampton, CEO of Virtual MGA. Jake has been successful in hooking up a handful of companies in London with agents in the U.S. This is creating a far stronger and faster means to define underwriting rules, share data and assess key information such as valuation data. Users of Virtual MGA are able to review the e2Value data to get a second opinion on information submitted by the agent. If there is a discrepancy between the third-party data that e2Value (or others) provide and what the agent provides, the underwriter can either change the replacement value or accept what the agent has provided. A further benefit of the dynamic relationship between agent and underwriter is the removal of the pain of monthly reconciliation. Creating separate, updated records of what has been written in the month, known as "bordereaux," is no longer necessary. These can be automatically generated from the system.
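To make the idea concrete, here is a minimal sketch of that kind of valuation check in Python. The field names, record layout and the 15% tolerance are assumptions for illustration only, not Virtual MGA's or e2Value's actual interface.

```python
# Hypothetical sketch: flag agent-submitted replacement values that diverge
# from a third-party valuation. Field names and the 15% tolerance are
# illustrative assumptions, not any vendor's actual interface.

def review_valuation(agent_value: float, third_party_value: float,
                     tolerance: float = 0.15) -> dict:
    """Compare the agent's replacement value with a third-party estimate."""
    gap = abs(agent_value - third_party_value) / third_party_value
    return {
        "agent_value": agent_value,
        "third_party_value": third_party_value,
        "relative_gap": round(gap, 3),
        # Outside tolerance: the underwriter decides whether to override
        # the replacement value or accept the agent's figure.
        "needs_review": gap > tolerance,
    }

if __name__ == "__main__":
    print(review_valuation(agent_value=310_000, third_party_value=425_000))
```

In practice, the tolerance would reflect the underwriter's own appetite and how much confidence they place in the third-party estimate for that property type.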

Even though e2Value is generating very high success rates for the accuracy of its valuation data, there are times when the underwriter may want to double-check the information with the original insured. In the past, this required a lengthy back and forth discussion over email between the agent and the insured.

JMI Reports is one of the leading providers of surveys in the U.S. Tim McKendry, CEO of JMI, has partnered with e2Value to create an app that provides near-real-time answers to an underwriter's questions. If there is a query, the homeowner can be contacted directly by the insurer and asked to photograph key details in the home to clarify construction details. This goes directly to the agent and underwriter, enabling fast, accurate assessment of the rebuild value.

What about insurtech?

We’ve been hearing a lot in the last few years about how satellites and drones can improve the resolution of data that is available to insurers. But just how good is this data? If insurers in London are struggling to get data direct from their clients, can they, too, access independent sources of data directly? And does the price charged for this data reflect the value an insurer in London can get from it?

Recent entrants, such as Cape Analytics, have also attracted significant amounts of funding. They are expanding the areas of the U.S. where they provide property information derived from satellite images. EagleView has been providing photographs taken from its own aircraft for almost 20 years. CEO Rishi Daga announced earlier this year that its photographs now have 16 times the resolution of the best previously available. If you want to know which of your clients has a Weber barbeque in the backyard, EagleView can tell you.

Forbes McKenzie, from McKenzie Insurance Services, knows the London market well. He has been providing satellite data to Lloyd’s of London to assist in claims assessment for a couple of years. Forbes started his career in military intelligence. “The value of information is not just about how accurate it is, but how quickly it can get to the end user,” Forbes says.

See also: How Insurtech Helps Build Trust  

The challenges with data don’t just exist externally. For many insurance companies, the left hand of claims is often disconnected from the right hand of underwriting. Companies find it hard to reconcile the losses they have had with what they are being asked to insure. It’s the curse of inconsistent formats. Claims data lives in one system, underwriting data in another. It’s technically feasible to perform analyses to link the information through common factors such as the address of the location, but it’s rarely cost-effective or practical to do this across a whole book of business.
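As a rough illustration of what that linking involves, here is a minimal sketch that joins claims to underwriting records on a crudely normalized address. The field names are assumptions; real matching at book-of-business scale would need proper address parsing, geocoding and fuzzy matching, which is exactly where the cost comes from.

```python
# Hypothetical sketch: link claims records to underwriting records on a
# normalized address key. Real-world matching would use address parsing,
# geocoding and fuzzy matching; this shows only the basic join.

import re
from collections import defaultdict

def normalize(address: str) -> str:
    """Crude normalization: lowercase, strip punctuation, collapse spaces."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", address.lower())).strip()

def link_by_address(underwriting_rows, claims_rows):
    """Return underwriting rows with any claims that share the same address."""
    claims_by_address = defaultdict(list)
    for claim in claims_rows:
        claims_by_address[normalize(claim["address"])].append(claim)
    return [
        {**risk, "claims": claims_by_address.get(normalize(risk["address"]), [])}
        for risk in underwriting_rows
    ]

if __name__ == "__main__":
    risks = [{"policy_id": "P-1", "address": "12 Main St., Austin, TX"}]
    claims = [{"claim_id": "C-9", "address": "12 Main St Austin TX", "paid": 48_000}]
    print(link_by_address(risks, claims))
```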

One of the barriers for underwriters in London in accessing better data is that companies that supply the data, both new and old, don’t always understand how the London market works. Most underwriters are taking small shares of large volumes of individual properties. Each location is a tiny fraction of the total exposure and an even smaller fraction of the incoming premium. Buying data at a cost per location, similar to what a U.S. domestic insurer is doing, is not economically viable.

Price must equal value

Recently, the chief digital officer of a London syndicate traveled to InsureTech Connect in Las Vegas to meet the companies offering exposure data. He is running a proof of concept against a set of standard criteria, looking for new ways to identify and price U.S. properties. He's already seeing a wide range of approaches to charging. U.K.-based data providers, or U.S. vendors with local knowledge of how the information is being used, tend to be more accommodating to the needs of the London insurers. There is a large potential market for enhanced U.S. property data in London, but the cost needs to reflect the value.

Todd Rissel may have started his career as a surveyor and now be running a long-established company, but he is not shy about working with the emerging companies and doesn’t see them as competition. He has partnerships with data providers such as drone company Betterview to complement and enhance the e2Value data. It is by creating distribution partnerships with some of the newest MGAs and insurers, including market leaders such as Slice and technology providers like Virtual MGA, that e2Value is able to deliver its valuation data to over a third of the companies writing U.S. business.

Looking ahead

It is widely recognized that the London market needs to find ways to meaningfully reduce the cost of doing business. The multiple organizations through which insurance passes, whether brokers, third-party administrators or others, increase the friction and hence the cost. Nonetheless, once the risks do find their way to the underwriters, there is a strong desire to find a way to place the business. Short decision chains and a market traditionally characterized by underwriting innovation and entrepreneurial leaders mean that London should continue to have a future as the market for specialty property insurance. It's also a market that prefers to "buy" rather than "build." London insurers are often among the first to try new technology. The market welcomes partnerships. The coming generation of underwriters understands the value of data and analytics.

The London market cannot, however, survive in a vacuum. Recent history has shown that companies willing to write property risks with poor data get hit by some nasty, occasionally fatal surprises after major losses. With the increasing focus by the regulator and Lloyd's own requirements, casual approaches to risk management are no longer tolerated. Startups with large war chests from both the U.S. and Asia see an opportunity to displace London.

Despite fears that data quality is not what it needs to be, our representatives from the London market are positive about the future. Many of them are looking for ways to create stronger links with coverholders in the U.S. Technology is recognized as the answer, and companies are willing to invest to support their partners and increase efficiency in the future. The awareness of new perils such as wildfire and the opening up of the market for flood insurance are creating opportunities.

Our recent workshop was the first of what we expect to be more regular engagements between the underwriters and the providers of property information. If you are interested in learning more about how you can get involved, whether as an underwriter, MGA, data provider, broker or other interested party, let me know.

How to Win the ‘Micro-Moment’

The P&C insurers that will win in our increasingly data-driven market are the companies that embrace the possibilities of technology and are able to own the “micro-moment”: Companies that reach consumers when they are making decisions and forming preferences will be ahead of the curve.

Communication technology now makes it possible for insurers to reach out to customers using automated voice, text, social media, email and other platforms. For example, when catastrophe looms, such as a major weather event, insurance companies have a great opportunity to protect policyholders and minimize losses by contacting customers.

This is not only good for the bottom line, because it avoids losses; it’s a great way to deliver an exceptional customer experience, which confers a competitive advantage. Insurance company executives instinctively see the value of using personalized communication to build loyalty and strengthen relationships. But not all companies are fully ready to take advantage of the possibilities of a closer connection with customers.

See Also: Data Science: Methods Matter

Executives worry about the quality and accuracy of the data they have on hand. That’s because many insurance companies only contact customers when processing a claim or following up on a late payment. Some use these opportunities to update their customer data, but since records verification only happens around transactions, a sizable portion of the company’s customer information is always outdated, and that can stymie efforts to own the micro-moment.

Take the connected catastrophe scenario, for example — because much of the customer base is always connected and has higher expectations around personalized communication than ever before, it makes sense to conduct customer outreach when a catastrophe is likely. By reaching out to customers, companies can contribute to customer safety, reduce losses and strengthen relationships.

A P&C company with an insured population in the path of a hurricane or wildfire might reach out via automated voice message, text, social media (e.g., Facebook or Twitter) or email to alert customers to the danger, provide advice on documenting insured property and explain how to file claims once the event is over. The P&C company might also identify the location of mobile service centers.

The message this type of initiative sends to customers is unmistakable: The company is looking out for the customer and stands ready to assist during a tough time. And with modern communication technology, companies can implement a system capable of affordably managing customer outreach across multiple platforms, using automation to handle most of the workload.

Another issue is that many P&C companies don’t make a practice of asking for permission to contact customers or recording customer communication preferences. In addition to up-to-date contact information (including landline and mobile numbers), companies need to request communication preferences, such as whether the subscriber prefers to be contacted by voice, text or tweet.
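A simple sketch of what honoring those preferences might look like inside an automated outreach system follows. The channel names and the shape of the customer record are assumptions for illustration, not any particular vendor's API.

```python
# Hypothetical sketch: choose an outreach channel from a stored customer
# preference, falling back to any channel we have contact details for.
# The record layout and channel names are illustrative assumptions.

from typing import Optional

CHANNELS = ("voice", "text", "email", "social")

def choose_channel(customer: dict) -> Optional[str]:
    """Return the preferred channel if usable, else the first usable one."""
    preferred = customer.get("preferred_channel")
    contact = customer.get("contact", {})
    if preferred in CHANNELS and contact.get(preferred):
        return preferred
    for channel in CHANNELS:
        if contact.get(channel):
            return channel
    return None  # No usable contact details on file.

if __name__ == "__main__":
    customer = {
        "name": "A. Policyholder",
        "preferred_channel": "text",
        "contact": {"text": "+1-555-0100", "email": "a@example.com"},
    }
    print(choose_channel(customer))  # -> "text"
```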

Getting P&C company databases where they need to be to conduct widespread customer outreach in a personalized manner that respects customer communication preferences will take a large-scale data scrubbing effort at most companies. It can be conducted in-house if the insurer has sufficient resources to tackle such a project, or the company can choose to hire a third-party vendor.

When P&C insurers have the clean data they need, they can contact policyholders to help keep them safe, but that’s just the beginning. With clean data and the ability to automate communications using customer preferences, companies can reach out to customers about changing coverage needs, inquire about policy lapses, address late payments and much more.

The first step in fostering closer relationships with customers via personalized communication is making sure the information on hand is clean — data that has been verified as accurate. With clean data, forward-thinking insurance company leaders can ensure that consumer demand for greater personalization is met and that their company thrives in an increasingly data-driven economy.

Competing in an Age of Data Symmetry: Part 2

In 1983, Microsoft Word was introduced. It wasn't the first word processor, and it isn't the only word processor, but it quickly became a standard — a "given." From a productivity standpoint, the first adopters of word processing certainly had advantages over the alternatives (typewriters and ballpoint pens). Today, however, we all use Word and Excel and Outlook. Your only advantage may be in how quickly you can type or speak into your device.

This is not unlike what is happening in the world of data. Data availability has become ubiquitous. Not only has data become freely available, but data analysis through tools, consolidators and rating companies has become freely available, as well. When everyone has access to the same information at the same time, that’s data symmetry.

In Part 1 of our data symmetry blog, we followed the quickly shifting trends from asymmetrical data availability to a market filled with data symmetry. We illustrated how data symmetry is rapidly changing the idea of competitive advantage. If everyone has access to the same data and if digital technologies are increasing the number of data sources, an organization’s proprietary data will lose the ability to keep the company ahead.

Data symmetry will then throw established insurers into a mid-life crisis, with everyone from marketing to underwriting to claims asking, “What makes our insurance actually different?”

Once insurers are operating from the same data, and the prediction of symmetrically available data has become a full-blown reality, then data will no longer be a differentiator, and something else will be.  But what?

The good news is that there will always be a way to create advantage if insurers remain active in fostering their uniqueness. From a data standpoint, here are three differentiators for your organization to consider:

Moving from individual histories to virtualized views

The data that is contained in today’s individual history will pale in comparison with tomorrow’s virtual record. In the very near future, everyone will take advantage of virtualized views of complete individuals or commercial accounts.

These will include every facet of someone's lifestyle, health history, safety records, common travel patterns, activity levels and even purchase histories. Virtual individuals will be known and understood in ways that real individuals may not even know themselves. We already see this happening in online sales of music, books, movies, coffee, auto parts and tennis shoes. Where there is a purchase, there is a preference. Purchase patterns are allowing digital retailers to accurately predict which marketing messages will work with hyper-targeted methods. Modern insurers will use these same automations and data analysis to improve timing, not only for marketing but also for claims prevention. As virtual individual data interacts with external sources, such as geographic and weather data, the insurers who have been practicing their data science will become predictive pros. Predictive analytics will still allow some competitive asymmetry to exist.

Think of data streams as colors in a box of paints — the more colors one finds in the box, the clearer the picture an insurer will be able to paint. Data analytics experience will be the art classes that will make some insurers capable of predictive masterpieces. The old colors will still be in the box. Claims histories and proprietary risk models will still be available, but they will sustain their value when they are supplemented with fresh colors and new data perspectives.

Innovating around products and services

Predicting results and preventing claims will support business in the current realm of insurance. Both are still subject, however, to data symmetry. Data symmetry will, in turn, push insurers to innovate. What will be striking to see is how often these front-end innovations of all types will enhance back-end data capabilities.

Early in 2016, for example, Liberty Mutual and Subaru announced a partnership that will bring usage-based insurance into Subaru’s connected car platform. Usage-based insurance is one of the clearest examples of innovative products, fueled by data that will also improve data analytics. This involves a new measure of innovation — how quickly data can move from collection to analysis. The quicker an insurer can transfer data gathering into meaningful action, the more valuable the innovation. Companies will be asking what levels of automation can be employed to turn prediction into prevention.

They will also be looking for formulas that make innovative products or services attractive to consumers. Data innovations aren’t instantly palatable to people. In-car telematics devices are a great example. The initial innovation was somewhat offset by the expense of installation and the perception that an installed device invaded consumer privacy.

Most efforts at product innovation will make consumer incentives part of the formula. As insurers turn “free data” into better ratios between pricing and risk, both the insurer and the consumer will need to see the clear benefits. Residential insurance is an example of an area ripe for innovation. Home insurance premiums are most often paid within the house payment. Most homeowners would be thrilled to have their house payment go down $100 to $200 per month. Property insurers that can take advantage of home sensor data and Internet of Things data could make that happen.

In exchange for the savings, many homeowners would sign off on the idea that their insurer now has monitoring capability. Property insurers would then be adding home data to their available data streams. This could give carriers a competitive difference. Lender/insurer partnerships (additional product innovation) may also arise with greater frequency if lenders can find corollary trends between home monitor data and clients with the fewest incidents.

This same data/pricing correlation will apply to commercial insurance. If the use of drones, security system monitoring and environmental system monitoring will result in lower insurance costs, most companies will see the value in an insurer that is looking out for their bottom line.

Insurers will find some of their differentiation in data-driven, value-added services. Anywhere that data can point to a better practice, an insurer will want to promote that to customers. Whether that means suggesting alternative travel routes for trucking companies or promoting add-on products for specialized risk, the influence of data symmetry can be overcome with creativity and innovative thinking.

Focusing on the stars

When we discuss data, our mindset traditionally envisions incoming data. Customer experience data, however, is much more of a two-way data street. Consumers are painting a new world of service with their ratings and stars. These outside views are also subject to data symmetry. Prospects are now able to efficiently compare insurers with real service data, including both sources that are verifiable and those that contain unstructured, conjectural data.

In Competing in an Age of Data Symmetry, Part 3, we’ll look at what an insurer should be doing to prepare itself for greater customer scrutiny and how reputation analysis will validate or invalidate an organization’s brand promises.

Fascinating Patent Filing by State Farm

Sometimes other drivers can make you crazy. Maybe you’ve gestured to boneheaded motorists, safe in the anonymity of your car and the flow of traffic. Perhaps you’ve let your anger at other drivers get the best of you at times because there’s no one else in the car to judge.

But State Farm is on the case. It has developed plans to monitor your every move while you’re driving, measure your emotions, detect angry behavior and deliver stimuli such as music to calm you down.

The plans, as revealed in a patent application, would combine biometric measurements with automotive data to create a “total impairment score” that could be used to set customized car insurance rates.

“Every year, many vehicle accidents are caused by impaired vehicle operation,” State Farm says in its application, recently filed with the U.S. Patent and Trademark Office. “One common kind of impaired vehicle operation is agitated, anxious or aggressive driving.”

Are you sweating, yelling or waving your arms while you drive? State Farm’s “emotion management system” would use a variety of sensors and cameras to monitor your biometrics, including:

  • Heart rate
  • Grip pressure on the steering wheel
  • Body temperature
  • Arm movement
  • Head direction and movement
  • Vocal amplitude and pattern
  • Respiration rate

The system could use “infrared optical brain imaging data” to get deeper inside your head. State Farm might even know if you’re giving the evil eye to another driver: Measurements include gaze direction and duration, eyelid opening and blink rate.

And impaired driving is not confined to angry and aggressive drivers. State Farm also would consider nervousness, distraction and drowsiness. Other sensors would keep track of your vehicle: Are you swerving, accelerating or driving too close to other objects?
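The application does not disclose how these measurements would be combined, but a "total impairment score" of this general kind could be sketched as a weighted blend of normalized signals. Everything below, including the weights and the signal names, is invented purely for illustration.

```python
# Purely hypothetical sketch of combining normalized biometric and vehicle
# signals into a single impairment score. Weights and signal names are
# invented for illustration; the patent does not disclose its actual formula.

WEIGHTS = {
    "heart_rate": 0.2,
    "grip_pressure": 0.15,
    "vocal_amplitude": 0.15,
    "gaze_off_road": 0.2,
    "swerving": 0.15,
    "hard_braking": 0.15,
}

def impairment_score(signals: dict) -> float:
    """Weighted average of signals, each already normalized to 0..1."""
    score = sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
                for name, value in signals.items() if name in WEIGHTS)
    return round(score, 2)

if __name__ == "__main__":
    trip = {
        "heart_rate": 0.8,       # elevated relative to the driver's baseline
        "grip_pressure": 0.6,
        "vocal_amplitude": 0.9,  # shouting
        "gaze_off_road": 0.3,
        "swerving": 0.4,
        "hard_braking": 0.5,
    }
    print(impairment_score(trip))  # -> 0.58
```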


Smell this and calm down

If you are “emotionally impaired” – as measured by State Farm, not your spouse – the patent-pending system would select and deliver stimuli to change your behavior. The patent application outlines a variety of options, including relaxing music, a recorded message, sounds of nature, fragrance or a blast of cold air. The system might even suggest you stop at a coffee shop or scenic overlook.

Robert Nemerovski, a licensed clinical psychologist in the San Francisco area and an expert on anger management and road rage, was skeptical about State Farm’s patent. He questioned whether an automated system could be sophisticated enough to account for the unique characteristics of individual drivers.

“I would be concerned about individual differences: people on medication, the elderly vs. the young,” he said. “Maybe they have PTSD, or they’re in recovery from a heart attack. [State Farm] would need to know nuances of human behavior and human bodies.”

In addition, “People don’t want someone patronizing you or telling you to calm down. I’m not sure it would be successful psychologically because it would be rather annoying,” he says.


State Farm’s depiction of an emotional impairment score on a mobile device, from its patent application.

State Farm envisions an “emotion management system” that goes beyond just monitoring behavior. The system would store profiles for each driver so, for example, it would learn which music might reduce your hard braking or persuade you to stop tailing the car in front of you. This might spell an end to your loud music.

Each time you end a trip, the State Farm system would analyze the data and update your impairment score, which you could check on your mobile device.

Because the purpose of the patent application is only to describe the system, it leaves many unanswered questions, including:

  • How much would it cost per vehicle?
  • Who would pay?
  • How often would you have to refill your fragrance containers?

State Farm, the nation's largest car insurance provider, declined to comment on specifics of the patent application but provided this statement to NerdWallet: "State Farm is actively innovating in a number of areas that are important to improving how we meet the needs of our customers. The patent ... is just one example of State Farm's innovation. Because of the nature of our innovation work and patent program, we are unable to provide further comments at this time."

Angry about car insurance bills, too?

According to the application, State Farm is considering applying the “comprehensive impairment level” to car insurance in several ways, including:

  • Adjusting your insurance rate, up or down.
  • Requiring you to buy a minimum amount of auto insurance, or limiting how much it will sell you.
  • Offering you a discount for using the system.
  • Flagging your policy for possible cancellation.

While State Farm’s plans may never be implemented, the carrier clearly has many ambitious ideas about monitoring customer behavior, such as its previously described ideas to price car insurance by the trip and deliver targeted ads based on where you drive.

Many consumers aren’t aware that auto insurers are preparing to unleash a tsunami of such services based on telematics, systems that track your car and driving habits. Progressive was the first to enter the space and dominated it for a while with its Snapshot usage-based insurance program.

“Other big auto insurers don’t want to be in second or third place again,” says Donald Light, director of North America property/casualty insurance for Celent, a research and consulting firm that focuses on information technology in financial services.

“I believe in about five years it will be a standard part of an auto insurance policy,” Light says.  “Insurers will say, ‘If you don’t want to use it that’s fine, too, but we’ll charge you based on not having it.’”

Light sees one large hurdle to State Farm’s emotion-management plan: The company will have to convince state insurance regulators that the emotional impairment scores accurately reflect risk.

“The key qualifier is that these kinds of data have to make actuarial significant difference in the ‘risk’ of different drivers,” Light says. For example, if State Farm wants to charge more based on driver agitation, the company will have to prove that agitation causes crashes.

Nemerovski says State Farm’s emotional management system might appeal to millennials, who are comfortable with the idea of measuring physical and other metrics so they can be improved.

“But I don’t think people would want it to be shoved down their throats,” he says.

The Future of Life Insurance

In its most recent report, "Tomorrow's World: The Future of Aging in the U.K.," the International Longevity Centre, a think tank focused on longevity, population and aging, painted a gloomy picture. The report says:

  • That the social care system is crumbling, and social class will heavily affect the life experience of the aged.
  • That housing and planning are inadequate to meet the needs of an aging population.
  • That individuals are underestimating their life expectancy and are likely to run out of money in old age.
  • That older people will suffer from (and perhaps die of) different things: Where once the issue was heart and respiratory disease, now it is likely to be non-communicable illnesses such as dementia.

It’s a worrying vision – one that perhaps is replicated in many other countries. The report recommends a bold 10-point action plan. It says:

1. Health must find a way to be more responsive and preventative.
2. Government must make progress in delivering a long-term settlement to pay for social care.
3. Savings levels for working age adults must increase.
4. The average age of exit from the workforce should rise.
5. The number and type of homes built should be increasingly appropriate for our aging society.
6. Government should make progress in facilitating greater risk sharing in accumulation of retirement income.
7. There is a need for a more informed older consumer.
8. Our aspirations for retirement must be about much more than us spending more hours watching television.
9. Businesses should better respond to aging.
10. The social contract needs to be strengthened between young and old.

Doesn’t the life and pension insurance industry have a part to play in almost all of this road map? Is there any reason why the industry should sit on the sidelines?

Here are five issues for the industry:

  • Insurers need to continue the shift from being reactive to being proactive – and must share the benefits with policyholders. Stakeholder buy-in through effective communication and enlightenment is critical – and it is increasingly becoming urgent.
  • Can insurers – on behalf of their policyholders, who are inevitably with them often for decades – influence issues related to home building and planning? I wonder how I would react if I really thought that my life and pension insurer was representing my interests to the point of lobbying on my behalf about this type of issue.
  • The need for cooperation between the private and public sectors reinforces the need for empathy by both government and private insurers toward each other, perhaps with tacit agreement that they (we) are all in this together.
  • As the average age of workers increases, and some seek an alternative to watching TV or just trying to make ends meet, I wonder whether there is a propensity for more workplace accidents. Isn't there an employers' liability/workers' compensation angle to consider?
  • And, of course, how do we make life and pension insurance attractive to those starting their work life? Doesn’t the industry really need to make insurance both more relevant and fashionable?

Don't insurers need to communicate better, engage differently, think more about the changing demographic footprint and generally step up the pace? All the innovation seems to be going into P&C insurance, but we can't allow that to suck the energy from life and pension.

After all, having a “connected bedpan” as part of the Internet of Things might be useful for some – but don’t we need to be bolder than that in our thinking?