
New Operating Model for Insurers (Part 2)

In my previous article (New Operating Model for Insurers (Part 1)), I set out the steps that an insurer can take to redesign its operating model to meet its new business priorities. Here, I’m going to offer some observations on what that new insurance Target Operating Model might look like.

My core assumption is that the insurer wants a model that focuses on delivering a high-quality customer experience at a reasonable cost.

To keep it manageable, I’ll restrict my Target Operating Model’s scope to:

  • Locations
  • People Organization
  • Governance
  • Major Business Processes
  • Key Technologies

…and I’ll go no deeper than the design principles level.


I’ll start with locations, as this is a hot topic in a pandemic world.

Insurers’ responses to COVID-19 have proved that remote working is a lot easier than many had thought. But I’m not anticipating that “everyone works from home” will often be adopted as a design principle. Why? Because in certain circumstances there are still benefits from, and even a necessity for, some face-to-face interaction. So I would expect to see design principles such as:

  • All staff should be enabled to do most of their work without attending an office
  • But all staff (or perhaps just certain types or grades of staff) must live within X hours of an office that they can use when needed or when they desire
  • Offices must have flexible space catering for individual working and interactions of multiple types and sizes (for example 1:1s, small group meetings, large group meetings, workshops)
  • Where possible, staff should be hired in low-cost locations ahead of high-cost locations

Each insurer will need to set its own rules for when staff must attend an office and figure out what its resulting occupancy levels and space requirements might be.

Beyond this, there will be customer expectations or regulatory requirements that compel insurers to have physical offices in locations (typically states or countries) in which they sell. Hence, there will often be a principle that:

  • We will locate offices to meet the expectations of both our customers and our regulators

People Organization and Governance

At the level of this article, we can treat these two dimensions together.

As people are replaced by automation, I would expect to see increasingly flat organization structures. So a Target Operating Model design principle might be:

  • Our organization structure will have no more than five layers

There might also be a corresponding principle setting out minimum or maximum spans of control.

But if there are few layers, and staff are highly distributed, there need to be ways to ensure that staff work effectively and make smart decisions without being micro-managed. Possible design principles for governance might therefore be:

  • The mission, values, goals and objectives of the company are the primary arbiter of whether a proposed action or decision is a good one and must be visible to all employees at all times
  • Role descriptions will clarify the limits of a role’s authority and autonomy (its scope, or guard rails), but within those our staff will enjoy a high degree of independence

Of course, a switch to this type of organization is unlikely to be possible without high-quality change management in place.

See also: Should Insurers Use Amazon Model?

Major Business Processes

In A New Target Operating Model for Insurers – Part 1, I offered the following generic insurance value chain, which, at this level of abstraction, covers both P&C and life:

A Value Chain for an Insurer

It’s beyond the scope of this article to take each element in turn, but here are potential design principles that could apply to pretty much all of them:

  • The process should be customer-centric: designed to fulfill its customers’ needs at the highest quality, at the lowest cost and in the shortest time
  • The process should be both common and shared across business lines, products and geographies to the greatest extent possible
  • Automation (core automation, artificial intelligence, robotic process automation, etc.) should be employed as widely as possible, to accelerate delivery and minimize processing errors
  • Unless we possess differentiating expertise in a process or sub-process, it should be outsourced to one or more third parties possessing the appropriate expertise

Some of these customer experience and efficiency themes are explored, in a greater level of detail, on my website focused on the Insurer of the Future.

Key Technologies

I’ve left key technologies until last, as some of the major design principles here depend on the areas already covered.

Here, I would increasingly expect to see design principles along the following lines:

  • The primary function of our technology is to automate our Major Business Processes
  • The primary non-functional requirement of our technology is that it be secure from external and internal threats
  • Our IT systems will be shared across business lines, products and geographies to the greatest extent possible
  • Our IT systems will be available to our staff from any location they choose
  • Data will be shared across business lines, products and geographies to the greatest extent allowed by applicable regulation
  • Unless we possess differentiating expertise in a particular aspect of technology (or its design, development or running), it should be outsourced to one or more third parties possessing the appropriate expertise

It is also possible that design principles at this level might lead directly to additional, more specific, principles such as:

  • We will maximize the use of cloud technologies

Technology change is often, of course, the type of change that takes longest to deliver. So it wouldn’t be surprising to see one or more interim Target Operating Models (see New Operating Model for Insurers (Part 1)) as staging points on the roadmap for technology transformation.

See also: 10 Tips for Moving Online in COVID World

* * *

It’s an interesting time for insurers, with the changes to insurers’ business models arising from COVID-19 set against a background of continuing opportunities for digital transformation.

Using the approach set out in A New Target Operating Model for Insurers – Part 1, and developing a set of design principles such as the above, an insurer will be well-placed to design its new Target Operating Model.

And using the specific design principles as a starting point for discussion will help set an insurer on the road to delivering a higher-quality customer experience at a reasonable cost.

Why Is Data on U.S. Property So Poor?

How a building is constructed and maintained and where it is located all have a massive impact on its potential to be damaged or destroyed. That knowledge is as old as insurance itself.

So why do so many underwriters still suffer from lack of decent data about the buildings they insure?

And when better data does get collected for U.S. properties, why does it seem to get lost as it crosses the Atlantic?

London is an important marketplace for insuring U.S. risks. It provides over 10% of the capacity for specialty risks — those that are hard, or impossible, to place in their home market through admitted carriers. Reinsurers of admitted carriers, insurers of homeowners and small businesses in the excess and surplus markets and facultative reinsurers of large corporate risks all need property data.

The emergence and growth of a new type of property insurer in the U.S., such as Hippo and Swyfft, has been driven by an expectation of having access to excellent data. They are geared up to perform fast analyses. They believe they can make accurate assessments and offer cheaper premiums. The level of funding for ambitious startups shows that investors are prepared to write large checks, tolerate years of losses and have the patience to wait in the expectation that their companies will displace less agile incumbents. If this works, it’s not just the traditional markets in the U.S. that will be under threat. The important backstop of the London market is also vulnerable. So what can established companies do to counter these new arrivals?

Neither too hot nor too cold

The challenge for any insurer is how to get the information it needs to accurately assess a risk, without scaring off the customer by asking too many questions. The new arrivals are bypassing the costly and often inaccurate approach of asking for data directly from their insureds, and instead are tapping into new sources of data. Some do this well, others less so. We’re already seeing this across many consumer applications. They lower the sales barrier by suggesting what you need, rather than asking you what you want. Netflix knows the films you like to watch, Amazon recommends the books you should read, and soon you’ll be told the insurance you need for your home.

Health insurers such as Vitality are dramatically improving the relationship with their clients, and reducing loss costs, by rewarding people for sharing their exercise habits. Property insurers that make well-informed, granular decisions on how and what they are underwriting will grow their book of business and do so profitably. Those that do not will be undercharging for riskier business. Not a viable long-term strategy.

Fixing the missing data problem would be a good place to start.

We recently brought together 28 people from London Market insurers to talk about the challenges they have with getting decent quality data from their U.S. counterparts. We were joined by a handful of the leading companies providing data and platforms to the U.S. and U.K. markets. Before the meeting, we’d conducted a brief survey to check in on the trends. A number of themes emerged, but the two questions we kept coming back to were: 1) Why is the data that is turning up in London so poor, and 2) what can be done about it?

This is not just a problem for London. If U.S. coverholders, carriers or brokers are unable to provide quality data to London, they will increasingly find their insurance and reinsurance getting more expensive, if they can get it at all. Regulators around the world are demanding higher standards of data collection. The shift toward insurers selling direct to consumer is gathering momentum. Those that are adding frictional costs and inefficiencies will be squeezed out. This is not new. Rapid systemic changes have been happening since the start of the industrial revolution. In 1830, the first passenger rail service in the world opened between Liverpool and Manchester in the northwest of England. Within three months, over half of the 26 stagecoaches operating on that route had gone out of business.

See also: Cognitive Computing: Taming Big Data  

Is the data improving?

Seventy percent of those surveyed believed that the data they are receiving from their U.S. partners has improved little, if at all, in the last five years. Yet the availability of information on properties had improved dramatically in the preceding 15 years. Why? Because of the widespread adoption of catastrophe models in that period. Models are created from large amounts of hazard and insurance loss data. Analyses of insured properties provide actionable insights and common views of risks beyond what can be achieved with conventional actuarial techniques. These analytics have become the currency of risk, shared across the market between insurers, brokers and reinsurers. The adoption of catastrophe models accelerated after Hurricane Andrew in 1992. Regulators and rating agencies demanded better ways to measure low-frequency, high-severity events. Insurers quickly realized that the models, and the reinsurers that used the models, penalized poor-quality data by charging higher prices.

By the turn of the century, information on street address and construction type, two of the most significant determinants of a building’s vulnerability to wind and shake, was being provided for both residential and commercial properties being insured for catastrophic perils in the U.S. and Europe. With just two major model vendors, RMS and AIR Worldwide, the industry only had to deal with two formats. Exchanging data by email, FTP transfer or CD became the norm.

Then little else changed for most of the 21st century. Information about a building’s fire resistance is still limited to surveys and then only for high-value buildings, usually buried deep in paper files. Valuation data on the cost of the rebuild, another major factor in determining the potential scale of loss and what is paid to the claimant, is at the discretion of the insured. It’s often inaccurate and biased toward low values.

If data and analytics are at the heart of insurtech, why does access to data appear to have stalled in the property market?

How does the quality of data compare?

We dug a bit deeper with our group to discover what types of problems they are seeing. In some locations, such as those close to the coast, information on construction has improved in the last decade, but elsewhere things are moving more slowly.

Data formats for property are acceptable for standard, homogeneous property portfolios being reinsured because of the dominance of two catastrophe modeling companies. For non-admitted business entering the excess and surplus market, or for high-value, complex locations, there are still no widely adopted standards for insured properties coming into the London market, despite the efforts of industry bodies such as Acord.

Data is still frequently re-keyed multiple times into different systems. Spreadsheets continue to be the preferred medium of exchange, and there is no consistency between coverholders. It is often more convenient for intermediaries to aggregate and simplify what may have once been detailed data as it moves between the multiple parties involved. At other times, agents simply don’t want to share their client’s information. Street addresses become zip codes, detailed construction descriptions default to simple descriptors such as “masonry.”

Such data chaos may be about to change. The huge inefficiency of multiple parties cleaning up and formatting the same data has been recognized for years. The London Market Group (LMG), a powerful, well-supported body representing Lloyd’s and the London company market, has committed substantial funds to build a new Target Operating Model (TOM) for London. This year, the LMG commissioned London company Charles Taylor to provide a central service to standardize and centralize the cleaning up of the delegated authority data that moves across the market. Much of it is property data. Once the project is complete, around 60 Lloyd’s managing agents, 250 brokers and over 3,500 global coverholders are expected to finally have access to data in a standard format. This should eliminate the problem of multiple companies doing the same tasks to clean and re-enter data but still does nothing to fill in the gaps where critical information is missing.

Valuation data is still the problem

Information on property rebuilding cost that comes into London is considered “terrible” by 25% of those we spoke to and “poor quality” by 50%.

Todd Rissel, the CEO of e2Value, was co-hosting our event. His company is the third-largest provider of valuation data in the U.S. Today, over 400 companies are using e2Value information to help their policyholders get accurate assessments of replacement costs after a loss. Todd started the company 20 years ago, having begun his career as a building surveyor for Chubb.

The lack of quality valuation data coming into London doesn’t surprise Todd. He’s proud of his company’s 98% success in accurately predicting rebuilding costs, but only a few states, such as California, impose standards on the valuation methods that are being used. Even where high-quality information is available, the motivation may not be there to use it. People choose their property insurance mostly on price. It’s not unknown for some insurers to recommend the lowest replacement value, not the most accurate, to reduce the premium, and the discrepancy gets worse over time.

Have the losses of 2017 changed how data is being reported?

Major catastrophes have a habit of exposing the properties where data is of poor quality or wrong. Companies insuring such properties tend to suffer disproportionately higher losses. No companies failed after the storms and wildfires of 2017, but more than one senior industry executive has felt the heat for unexpectedly high losses.

Typically, after an event, the market “hardens” (rates get more expensive), and insurers and reinsurers are able to demand higher-quality data. 2017 saw the biggest insurance losses for a decade in the U.S. from storms and wildfire — but rates haven’t moved.

Insurers and reinsurers have little influence in improving the data they receive.

Over two-thirds of people felt that their coverholders, and in some cases insurers, don’t see the need to collect the necessary data. Even if they do understand the importance and value of the data, they are often unable to enter it into their underwriting systems and pipe it digitally direct to London. Straight-through processing, and the transfer of information from the agent’s desk to the underwriter in London with no manual intervention, is starting to happen, but only the largest or most enlightened coverholders are willing or able to integrate with the systems their carriers are using.

We were joined at our event by Jake Hampton, CEO of Virtual MGA. Jake has been successful in hooking up a handful of companies in London with agents in the U.S. This is creating a far stronger and faster means to define underwriting rules, share data and assess key information such as valuation data. Users of Virtual MGA are able to review the e2Value data to get a second opinion on information submitted from the agent. If there is a discrepancy between the third-party data that e2Value (or others) provide and what the agent provides, the underwriter can either change the replacement value or accept what the agent has provided. A further benefit of the dynamic relationship between agent and underwriter is the removal of the pain of monthly reconciliation. Creating separate updated records of what has been written in the month, known as “bordereaux,” is no longer necessary; these can be generated automatically from the system.
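The discrepancy review described above can be sketched as a simple tolerance rule. Everything here is illustrative: the function name, the 10% threshold and the return shape are assumptions for exposition, not Virtual MGA’s actual logic or API.

```python
# Illustrative sketch of a valuation discrepancy check, as an underwriter
# might apply it when a third-party rebuild estimate and the agent's
# submitted value disagree. The 10% tolerance is an assumed default.

def review_valuation(agent_value: float, third_party_value: float,
                     tolerance: float = 0.10) -> dict:
    """Flag a submission when the agent's rebuild value deviates from the
    third-party estimate by more than `tolerance` (a fraction)."""
    if third_party_value <= 0:
        raise ValueError("third-party value must be positive")
    deviation = abs(agent_value - third_party_value) / third_party_value
    return {
        "deviation": round(deviation, 4),
        "flagged": deviation > tolerance,
    }

# A submission 25% below the independent estimate gets flagged for review.
print(review_valuation(agent_value=300_000, third_party_value=400_000))
# → {'deviation': 0.25, 'flagged': True}
```

A flagged result would then prompt the underwriter’s choice described above: change the replacement value or accept what the agent provided.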

Even though e2Value is generating very high success rates for the accuracy of its valuation data, there are times when the underwriter may want to double-check the information with the original insured. In the past, this required a lengthy back and forth discussion over email between the agent and the insured.

JMI Reports is one of the leading providers of surveys in the U.S. Tim McKendry, CEO of JMI, has partnered with e2Value to create an app that provides near-real-time answers to an underwriter’s questions. If there is a query, the homeowner can be contacted by the insurer directly and asked to photograph key details in their home to clarify construction details. This goes directly to the agent and underwriter, enabling an accurate and fast assessment of rebuild value.

What about insurtech?

We’ve been hearing a lot in the last few years about how satellites and drones can improve the resolution of data that is available to insurers. But just how good is this data? If insurers in London are struggling to get data direct from their clients, can they, too, access independent sources of data directly? And does the price charged for this data reflect the value an insurer in London can get from it?

Recent entrants, such as Cape Analytics, have also attracted significant amounts of funding. They are increasing the areas of the U.S. where they provide property information derived from satellite images. EagleView has been providing photographs taken from its own aircraft for almost 20 years. CEO Rishi Daga announced earlier this year that its photographs now offer 16 times higher resolution than the best previously available. If you want to know which of your clients has a Weber barbecue in the backyard, EagleView can tell you.

Forbes McKenzie, from McKenzie Insurance Services, knows the London market well. He has been providing satellite data to Lloyd’s of London to assist in claims assessment for a couple of years. Forbes started his career in military intelligence. “The value of information is not just about how accurate it is, but how quickly it can get to the end user,” Forbes says.

See also: How Insurtech Helps Build Trust  

The challenges with data don’t just exist externally. For many insurance companies, the left hand of claims is often disconnected from the right hand of underwriting. Companies find it hard to reconcile the losses they have had with what they are being asked to insure. It’s the curse of inconsistent formats. Claims data lives in one system, underwriting data in another. It’s technically feasible to perform analyses to link the information through common factors such as the address of the location, but it’s rarely cost-effective or practical to do this across a whole book of business.

One of the barriers for underwriters in London in accessing better data is that companies that supply the data, both new and old, don’t always understand how the London market works. Most underwriters are taking small shares of large volumes of individual properties. Each location is a tiny fraction of the total exposure and an even smaller fraction of the incoming premium. Buying data at a cost per location, similar to what a U.S. domestic insurer is doing, is not economically viable.

Price must equal value

Recently, the chief digital officer of a London syndicate traveled to InsureTech Connect in Las Vegas to meet the companies offering exposure data. He is running a POC against a set of standard criteria, looking for new ways to identify and price U.S. properties. He’s already seeing a wide range of approaches to charging. U.K.-based data providers, or U.S. vendors with local knowledge of how the information is being used, tend to be more accommodating to the needs of the London insurers. There is a large potential market for enhanced U.S. property data in London, but the cost needs to reflect the value.

Todd Rissel may have started his career as a surveyor and now be running a long-established company, but he is not shy about working with the emerging companies and doesn’t see them as competition. He has partnerships with data providers such as drone company Betterview to complement and enhance the e2Value data. It is by creating distribution partnerships with some of the newest MGAs and insurers, including market leaders such as Slice and technology providers like Virtual MGA, that e2Value is able to deliver its valuation data to over a third of the companies writing U.S. business.

Looking ahead

It is widely recognized that the London market needs to find ways to meaningfully reduce the cost of doing business. The multiple organizations through which insurance passes, whether brokers, third-party administrators or others, increase the friction and hence cost. Nonetheless, once the risks do find their way to the underwriters, there is a strong desire to find a way to place the business. Short decision chains and a market traditionally characterized by underwriting innovation and entrepreneurial leaders mean that London should continue to have a future as the market for specialty property insurance. It’s also a market that prefers to “buy” rather than “build.” London insurers are often among the first to try new technology. The market welcomes partnerships. The coming generation of underwriters understands the value of data and analytics.

The London market cannot, however, survive in a vacuum. Recent history has shown that those companies with a willingness to write property risks with poor data get hit by some nasty, occasionally fatal surprises after major losses. With the increasing focus by the regulator and Lloyd’s own requirements, casual approaches to risk management are no longer tolerated. Startups with large war chests from both the U.S. and Asia see an opportunity to displace London.

Despite the fears that data quality is not what it needs to be, our representatives from the London market are positive about the future. Many of them are looking for ways to create stronger links with coverholders in the U.S. Technology is recognized as the answer, and companies are willing to invest to support their partners and increase efficiency in the future. The awareness of new perils such as wildfire and the opening up of the market for flood insurance are creating opportunities.

Our recent workshop was the first of what we expect to be more regular engagements between the underwriters and the providers of property information. If you are interested in learning more about how you can get involved, whether as an underwriter, MGA, data provider, broker or other interested party, let me know.

Why Insurers Caught the Blockchain Bug

In April 2015, Lloyd’s of London launched the Target Operating Model (TOM) project. TOM is a central body responsible for delivering modernization to the still heavily paper-based wholesale insurance transactions in the London insurance markets.

You can state, “I Support TOM,” on a registration site or you can “like” TOM on social media. The project has had several “innovation” events. It has an orange logo reminiscent of the 1990s, when orange was the new black. The project has even tried to coin yet another tech mashup term for the London insurance markets surrounding Lloyd’s: InsTech.

This is not the first time the London insurance markets have tried to modernize. They are serial reformers, and their attempts have had varying degrees of success (from total failure to middling impact).

Limnet (London Insurance Market Network) made progress with electronic data interchange in the 1980s and early 1990s. Electronic Placement Support (EPS) worked in the late 1990s, but few used it. Kinnect, at a cost conservatively quoted as £70 million, was abandoned in 2006. Project Darwin, which operated from 2011 to 2013, achieved little. The Message Exchange Limited (TMEL) is a messaging hub for ACORD messages that has had modest success, but most people still use email.

Numerous private exchanges or electronic messaging ventures have gained only partial market shares. Xchanging Ins-Sure Services (XIS), a claims and premiums processing joint venture, was formed in 2000 and runs adequately but still has a lot of paper involved.

A swift walk round Lloyd’s, perhaps passing by the famous Lamb Tavern in Leadenhall Market, reveals a lot of heavy bundles of paper, lengthening the arms of long-term insurers.

Does ontogeny recapitulate phylogeny?

Ernst Haeckel (1834–1919) was a German biologist and philosopher who proposed a (now largely discredited) biological hypothesis, the “theory of recapitulation.” He proposed that, in developing from embryo to adult, animals go through stages resembling or representing successive stages in the evolution of their remote ancestors. His catchphrase was “ontogeny recapitulates phylogeny.”

In a similar way, TOM seems to be going through all the previous stages of former wholesale insurance modernization projects, databases, networks and messaging centers, but it may come out at the end to realize the potential of mutual distributed ledgers (aka blockchain technology).

Information technology systems may have now evolved to meet the demanding requirements of wholesale insurance. And wholesale insurance differs from capital market finance in some important ways.

First, insurance is a “promise to pay in future,” not an asset transfer today. Second, while capital markets trade on information asymmetry, insurance is theoretically a market of perfect information and symmetry—you have to reveal everything of possible relevance to your insurer, but each of you has different exposure positions and interpretations of risk. Third, wholesale insurance is “bespoke.” You can’t give your insurance cover to someone else.

These three points lead to a complex set of interactions among numerous parties. Clients, brokers, underwriters, claims assessors, valuation experts, legal firms, actuaries and accountants all have a part in writing a policy, not to mention in handling subsequent claims.

People from the capital markets who believe insurance should become a traded market miss some key points. Let’s examine two: one about market structure, and one about technology.

In terms of market structure: People use trusted third parties in many roles—in finance, for settlement, as custodians, as payment providers and as poolers of risk. Trusted third parties perform three roles, to:

  • Validate — confirming the existence of something to be traded and the membership of the trading community
  • Safeguard — preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”
  • Preserve — holding the history of transactions to help analysis and oversight and in the event of disputes.

Concerns over centralization

The hundreds of firms in the London markets are rightly concerned about a central third party that might hold their information to ransom. The firms want to avoid natural monopolies, particularly as agreed information is crucial over multi-year contracts. They are also concerned about a central third party that must be used for messaging because, without choice, the natural monopoly rents might become excessive.

Many historic reforms failed to propose technology that recognized this market structure. Mutual distributed ledgers (MDLs), however, provide pervasive, persistent and permanent records. MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. In such a system, everyone shares the ledger. The ledger itself is a distributed data structure, held in part or in its entirety by each participating computer system. Trust in safeguarding and preservation moves from a central third party to the technology.

Emerging techniques, such as smart contracts and decentralized autonomous organizations, might, in the future, also permit MDLs to act as automated agents.

Beat the TOM-TOM

Because MDLs enable organizations to work together on common data, they exhibit a paradox. MDLs are logically central but are technically distributed. They act as if they are central databases, where everyone shares the same information.

However, the information is distributed across multiple (or multitudinous) sites so that no one person can gain control over the value of the information. Everyone has a copy. Everyone can recreate the entire market from someone else’s copy. However, everyone can only “see” what their cryptographic keys permit.
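This “logically central, technically distributed” idea can be illustrated with a toy sketch. Everything below is an assumption for exposition, not Z/Yen’s actual implementation: a real MDL would restrict visibility with cryptographic keys, whereas this sketch simulates it with a plain permission set. Each party holds a full copy of a hash-chained ledger, so any copy can be verified against any other, while each party can only read the payloads it is permitted to see.

```python
import copy
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Chain each entry to its predecessor so tampering is detectable."""
    data = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

class Ledger:
    """A toy mutual distributed ledger: every participant holds a full copy."""
    def __init__(self):
        self.entries = []  # list of (hash, payload, allowed_parties)

    def append(self, payload: dict, allowed: set):
        prev = self.entries[-1][0] if self.entries else "genesis"
        self.entries.append((entry_hash(prev, payload), payload, allowed))

    def view(self, party: str):
        # A party "sees" only the payloads its keys permit; hashes stay public.
        return [(h, p if party in allowed else None)
                for h, p, allowed in self.entries]

    def verify(self) -> bool:
        # Any copy of the ledger can be checked independently.
        prev = "genesis"
        for h, payload, _ in self.entries:
            if h != entry_hash(prev, payload):
                return False
            prev = h
        return True

# Broker and underwriter each hold an identical replica.
shared = Ledger()
shared.append({"policy": "P-001", "premium": 12_500},
              allowed={"broker", "underwriter"})
shared.append({"claim": "C-044", "reserve": 40_000},
              allowed={"underwriter"})
broker_copy = copy.deepcopy(shared)

print(shared.verify(), broker_copy.verify())  # both replicas check out: True True
print(broker_copy.view("broker")[1][1])       # → None (broker cannot read the claim)
```

The point of the sketch is the paradox described above: both parties can recreate and verify the entire record from either copy, yet neither gains control over, or visibility into, information it is not entitled to.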

How do we know this works? We at Z/Yen, a commercial think tank, have built several insurance application prototypes for clients who seek examples, such as motor, small business and insurance deal-rooms. The technical success of blockchain technologies in cryptocurrencies—such as Bitcoin, Ethereum and Ripple—has shown that complex multi-party transactions are possible using MDLs. And, we have built a system that handles ACORD messages with no need for “messaging.”

Z/Yen’s work in this space dates to 1995. Until recently, though, most in financial services dismissed MDLs as too complex and insecure. The recent mania around cryptocurrencies has led to a reappraisal of their potential, as blockchains are just one form of MDL. That said, MDLs are “mutual,” and a number of people need to move ahead together. Further, traditional commercial models of controlling and licensing intellectual property are less likely to be successful at the core of the market. The intellectual property needs to be shared.

A message is getting out on the jungle drums that MDLs, while not easy, do work at a time when people are rethinking the future of wholesale insurance.

If TOM helps push people to work together, perhaps, this time, market reform will embrace a generation of technology that will finally meet the demands of a difficult, yet essential and successful, centuries-old market.

Perhaps TOM should be beating the MDL drums more loudly.