Why Insurers Caught the Blockchain Bug

In April 2015, Lloyd’s of London launched the Target Operating Model (TOM) project. TOM is a central body responsible for delivering modernization to the still heavily paper-based wholesale insurance transactions in the London insurance markets.

You can state, “I Support TOM,” on a registration site or you can “like” TOM on social media. The project has had several “innovation” events. It has an orange logo reminiscent of the 1990s, when orange was the new black. The project has even tried to coin yet another tech mashup term for the London insurance markets surrounding Lloyd’s: InsTech.

This is not the first time the London insurance markets have tried to modernize. They are serial reformers, and their attempts have had varying degrees of success (from total failure to middling impact).

Limnet (London Insurance Market Network) made progress with electronic data interchange in the 1980s and early 1990s. Electronic Placement Support (EPS) worked in the late 1990s, but few used it. Kinnect, at a cost conservatively quoted as £70 million, was abandoned in 2006. Project Darwin, which operated from 2011 to 2013, achieved little. The Message Exchange Limited (TMEL) is a messaging hub for ACORD messages that has had modest success, but most people still use email.

Numerous private exchanges or electronic messaging ventures have gained only partial market share. Xchanging Ins-Sure Services (XIS), a claims and premiums processing joint venture, was formed in 2000 and runs adequately but still has a lot of paper involved.

A swift walk round Lloyd’s, perhaps passing by the famous Lamb Tavern in Leadenhall Market, reveals a lot of heavy bundles of paper, lengthening the arms of long-term insurers.

Does ontogeny recapitulate phylogeny?

Ernst Haeckel (1834–1919) was a German biologist and philosopher who proposed a (now largely discredited) biological hypothesis, the “theory of recapitulation.” He proposed that, in developing from embryo to adult, animals go through stages resembling or representing successive stages in the evolution of their remote ancestors. His catchphrase was “ontogeny recapitulates phylogeny.”

In a similar way, TOM seems to be going through all the previous stages of former wholesale insurance modernization projects, databases, networks and messaging centers, but it may come out at the end to realize the potential of mutual distributed ledgers (aka blockchain technology).

Information technology systems may have now evolved to meet the demanding requirements of wholesale insurance. And wholesale insurance differs from capital market finance in some important ways.

First, insurance is a “promise to pay in future,” not an asset transfer today. Second, while capital markets trade on information asymmetry, insurance is theoretically a market of perfect information and symmetry—you have to reveal everything of possible relevance to your insurer, but each of you has different exposure positions and interpretations of risk. Third, wholesale insurance is “bespoke.” You can’t give your insurance cover to someone else.

These three points lead to a complex set of interactions among numerous parties. Clients, brokers, underwriters, claims assessors, valuation experts, legal firms, actuaries and accountants all have a part in writing a policy, not to mention in handling subsequent claims.

People from the capital markets who believe insurance should become a traded market miss some key points. Let’s examine two: one about market structure, and one about technology.

In terms of market structure: People use trusted third parties in many roles—in finance, for settlement, as custodians, as payment providers and as poolers of risk. Trusted third parties perform three roles, to:

  • Validate — confirming the existence of something to be traded and the membership of the trading community
  • Safeguard — preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”
  • Preserve — holding the history of transactions to help analysis and oversight and in the event of disputes.

Concerns over centralization

The hundreds of firms in the London markets are rightly concerned about a central third party that might hold their information to ransom. The firms want to avoid natural monopolies, particularly as agreed information is crucial over multi-year contracts. They are also concerned about a central third party that must be used for messaging because, without choice, the natural monopoly rents might become excessive.

Many historic reforms failed to propose technology that recognized this market structure. Mutual distributed ledgers (MDLs), however, provide pervasive, persistent and permanent records. MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. In such a system, everyone shares the ledger. The ledger itself is a distributed data structure, held in part or in its entirety by each participating computer system. Trust in safeguarding and preservation moves from a central third party to the technology.
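
To make that concrete, here is a minimal sketch, in Python, of how a shared, hash-chained ledger can take over the validate, safeguard and preserve roles described earlier. Every name in it (LedgerEntry, Node, the deal fields) is invented for illustration; a production MDL would add consensus, networking and proper key management.

```python
# A minimal sketch of a mutual distributed ledger, for illustration only.
import hashlib
import json
from dataclasses import dataclass


@dataclass
class LedgerEntry:
    prev_hash: str          # link to the previous entry (preserve history)
    payload: dict           # e.g. a placement, endorsement or claim record

    def entry_hash(self) -> str:
        body = json.dumps({"prev": self.prev_hash, "payload": self.payload},
                          sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()


class Node:
    """One participant's full copy of the shared ledger."""

    def __init__(self, members: set[str]):
        self.members = members           # validate: the known trading community
        self.entries: list[LedgerEntry] = []
        self.seen_ids: set[str] = set()  # safeguard: reject duplicate entries

    def append(self, payload: dict) -> bool:
        if payload["submitted_by"] not in self.members:   # validate
            return False
        if payload["deal_id"] in self.seen_ids:           # safeguard
            return False
        prev = self.entries[-1].entry_hash() if self.entries else "genesis"
        self.entries.append(LedgerEntry(prev, payload))   # preserve
        self.seen_ids.add(payload["deal_id"])
        return True

    def verify(self) -> bool:
        """Anyone holding a copy can re-check the whole history."""
        prev = "genesis"
        for e in self.entries:
            if e.prev_hash != prev:
                return False
            prev = e.entry_hash()
        return True


# Every participant holds an equivalent copy; no single owner.
nodes = [Node({"BrokerA", "SyndicateB"}) for _ in range(3)]
placement = {"deal_id": "D-001", "submitted_by": "BrokerA", "risk": "marine hull"}
assert all(n.append(placement) for n in nodes)
assert not nodes[0].append(placement)      # duplicate is rejected
assert all(n.verify() for n in nodes)
```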

Emerging techniques, such as smart contracts and decentralized autonomous organizations, might, in the future, also permit MDLs to act as automated agents.

Beat the TOM-TOM

Because MDLs enable organizations to work together on common data, they exhibit a paradox. MDLs are logically central but are technically distributed. They act as if they are central databases, where everyone shares the same information.

However, the information is distributed across multiple (or multitudinous) sites so that no one person can gain control over the value of the information. Everyone has a copy. Everyone can recreate the entire market from someone else’s copy. However, everyone can only “see” what their cryptographic keys permit.
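
As a rough illustration of that key-scoped visibility, the sketch below uses the third-party Python cryptography package as a stand-in for whatever key scheme a real MDL would employ: the record sits on the shared ledger for everyone, but only holders of the right key can read its detail. The key-per-deal arrangement is illustrative, not a production design.

```python
# A sketch of key-scoped visibility on a shared ledger record.
from cryptography.fernet import Fernet, InvalidToken

# One symmetric key per deal, shared only with the parties to that deal.
deal_key = Fernet.generate_key()
other_key = Fernet.generate_key()

shared_record = {
    "deal_id": "D-001",                                   # visible to everyone
    "detail": Fernet(deal_key).encrypt(b"premium=1.2m, brokerage=15%"),
}

# A party to the deal can read the detail...
print(Fernet(deal_key).decrypt(shared_record["detail"]))

# ...a non-party holding the same ledger copy cannot.
try:
    Fernet(other_key).decrypt(shared_record["detail"])
except InvalidToken:
    print("no access: this key does not permit viewing the entry")
```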

How do we know this works? We at Z/Yen, a commercial think tank, have built several insurance application prototypes for clients who seek examples, such as motor, small business and insurance deal-rooms. The technical success of blockchain technologies in cryptocurrencies—such as Bitcoin, Ethereum and Ripple—has shown that complex multi-party transactions are possible using MDLs. And we have built a system that handles ACORD messages with no need for “messaging.”

Z/Yen’s work in this space dates to 1995. Until recently, though, most in financial services dismissed MDLs as too complex and insecure. The recent mania around cryptocurrencies has led to a reappraisal of the potential of MDLs, of which blockchains are just one form. That said, MDLs are “mutual,” and a number of people need to move ahead together. Further, traditional commercial models of controlling and licensing intellectual property are less likely to be successful at the core of the market. The intellectual property needs to be shared.

A message is getting out on the jungle drums that MDLs, while not easy, do work at a time when people are rethinking the future of wholesale insurance.

If TOM helps push people to work together, perhaps, this time, market reform will embrace a generation of technology that will finally meet the demands of a difficult, yet essential and successful, centuries-old market.

Perhaps TOM should be beating the MDL drums more loudly.

5 Practical Steps to Get to Self-Service

To participate in the new world of customer self-service and straight-through processing, many insurance carriers find themselves having to deal with decades of information neglect. As insurers take on the arduous task of moving from a legacy to a modernized information architecture and platform, they face many challenges.

I’ll outline some of the common themes and challenges, possible categories of solutions and practical steps that can be taken to move forward.

Let’s consider the case of Prototypical Insurance Company (PICO), a mid-market, multiline property/casualty and life insurance carrier, with regional operations. PICO takes in $700 million in direct written premiums from 600,000 active policies and contracts. PICO’s customers want to go online to answer basic questions, such as “what’s my deductible?”; “when is my payment due?”; “when is my policy up for renewal?”; and “what’s the status of my claim?” They also want to be able to request policy changes, view and pay their bills online and report claims.

After hearing much clamoring, PICO embarks on an initiative to offer these basic self-service capabilities.

As a first step, PICO reviews its systems landscape. The results are not encouraging. PICO finds four key challenges.

1. Customer data is fragmented across multiple source systems.

Historically, PICO has been using several policy-centric systems, each catering to a particular line of business or family of products. There are separate policy administration systems for auto, home and life. Each system holds its own notion of the policyholder. This makes developing a unified customer-centric view extremely difficult.

The situation is further complicated because the level and amount of detail captured in each system is incongruent. For example, the auto policy system has lots of details about vehicles and some details about drivers, while the home system has very little information about the people but a lot of details about the home. Thus, choices for key fields that can be used to match people in one system with another are very limited.
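
A small, hypothetical sketch shows why this hurts: with little more than a name and a date of birth shared between the auto and home systems, matching comes down to crude normalization, and any record missing one of those fields simply cannot be linked. The records and field names below are invented for illustration.

```python
# A sketch of cross-system customer matching on the few shared fields.
import re


def match_key(record: dict) -> tuple:
    """Crude key from the only fields both systems hold: name and DOB."""
    tokens = re.sub(r"[^a-z ]", " ", record["name"].lower()).split()
    tokens = sorted(t for t in tokens if len(t) > 1)   # drop middle initials
    return (" ".join(tokens), record.get("dob", ""))


auto_policyholders = [
    {"name": "Smith, John Q.", "dob": "1970-03-02", "vin": "1HGCM82633A004352"},
]
home_policyholders = [
    {"name": "John Smith", "dob": "1970-03-02", "roof": "asphalt shingle"},
    {"name": "John Smith", "dob": "", "roof": "tile"},   # DOB never captured
]

auto_index = {match_key(r): r for r in auto_policyholders}
for home in home_policyholders:
    hit = auto_index.get(match_key(home))
    print("possible match" if hit else "cannot link", "->", home["name"])
```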

2. Data formats across systems are inconsistent.

PICO has been operating with systems from multiple vendors. Each vendor has chosen to implement a custom data representation, some of which are proprietary. To respond to evolving business needs, PICO has had to customize its systems over the years. This has led to a dilution of the meaning and usage of data fields: The same field represents different data, depending on the context.

3. Data is lacking in quality.

PICO has business units that are organized by line of business. Each unit holds expertise in a specific product line and operates fairly autonomously. This has resulted in different practices when it comes to data entry. The data models from decades-old systems weren’t designed to handle today’s business needs. To get around that, PICO has used creative solutions. While this creativity has brought several points of flexibility in dealing with an evolving business landscape, it’s at the cost of increased data entropy.

4. Systems are only available in defined windows during the day, not 24/7.

Many of PICO’s core systems are batch-oriented. This means that updates made throughout the day are not available in the system until after-hours batch processing has completed. Furthermore, while the after-hours batch processing is taking place, the systems are available neither for querying nor for accepting transactions.

Another aspect affecting availability is the closed nature of the systems. Consider the life policy administration system. While it can calculate cash values, loan amounts, accrued interest and other time-sensitive quantities, it doesn’t offer these capabilities through any programmatic application interface that an external system could use to access these results.

These challenges will sound familiar to many mid-market insurance carriers, but they’re opportunities in disguise. The opportunity to bring to bear proven and established patterns of solutions is there for the taking.

FOUR SOLUTION PATTERNS

There are four solution patterns that are commonly used to meet these challenges: 1) establishing a service-oriented architecture; 2) leveraging a data warehouse; 3) modernizing core systems; and 4) instituting a data management program. The particular solution a carrier pursues will ultimately depend on its individual context.

1. Service-oriented architecture

SOA consists of independent, message-based, contract-driven and, possibly, asynchronous services that collaborate. Creating such an architecture in a landscape of disparate systems requires defining:

  • Services that are meaningful to the business: for instance, customer, policy, billing, claim, etc.
  • Common formats to represent business data entities.
  • Messages and message formats that represent business transactions (operations on business data).
  • Contracts that guide interactions between the business services.
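
As a rough sketch of what such a contract might look like in code, the example below pairs a common message envelope with a business-level service interface so that each legacy system can be wrapped behind the same contract. The class and field names are illustrative only, not an ACORD schema.

```python
# A minimal sketch of a contract-driven, message-based policy service.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Message:
    """Common envelope every business message shares."""
    message_type: str      # e.g. "PolicySummaryRequest"
    correlation_id: str
    sent_at: datetime
    body: dict             # ideally an industry-standard payload


class PolicyService(ABC):
    """Business-meaningful contract, independent of any one admin system."""

    @abstractmethod
    def handle(self, request: Message) -> Message: ...


class AutoAdminAdapter(PolicyService):
    """Service-enablement of one legacy system behind the common contract."""

    def handle(self, request: Message) -> Message:
        policy = {"policy_id": request.body["policy_id"], "deductible": 500}
        return Message("PolicySummaryResponse", request.correlation_id,
                       datetime.now(timezone.utc), policy)


req = Message("PolicySummaryRequest", "abc-123",
              datetime.now(timezone.utc), {"policy_id": "AUTO-001"})
print(AutoAdminAdapter().handle(req).body)
```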

Organizations such as the Object Management Group and ACORD have made a lot of headway toward offering industry-standard message formats and data models.

After completing the initial groundwork, the next step is to enable existing systems to exchange defined messages and respond to them in accordance with the defined contracts. Simple as it might sound, this so-called service-enablement of existing systems is often not a straightforward step. Success here is heavily dependent on how well the technologies behind the existing systems lend themselves to service enablement. An upfront assessment would be entirely warranted.

Assuming service enablement is possible, we’re still not in the clear. SOA only helps address issues of data format inconsistencies and data fragmentation. It will not help with issues of data quality and can offer only limited reprieve from unavailability of systems. Unless those can be addressed in concert, this approach will only provide limited success.

2. Data warehouse

A data warehouse is a data store that accumulates data from a wide range of sources within an organization and is ultimately used to guide decision-making. Using a data warehouse as the basis of an operational system (such as customer self-service) may look like an option, but it is really a false choice, for a couple of reasons.

  • Building a data warehouse is a big effort. Insurers usually can’t wait for its completion. They have to move ahead with self-service now.
  • Data warehouses are meant to power business intelligence, not operational systems. If the warehouse already exists, there’s a 50% chance that it was built on a dimensional model. A dimensional model does not lend itself to serving as a source for downstream operational systems. On the other hand, if it’s a “single version of truth” warehouse, the company is well on its way to addressing the data challenges under discussion.

3. Modernizing core systems

Modern systems make self-service relatively simple. However, unless modernization is already well underway, it, too, is not something insurers can wait for; implementation timeframes are simply too long.

4. Instituting a data management program

A data management program is a solution that deals with specific data challenges, not the foundational reasons behind those challenges. To overcome the four challenges mentioned at the beginning of the article, a program could consist of a consolidated data repository implemented using a canonical data model on top of a highly available systems architecture leveraging data quality tools at key junctions. Implementing such a program would be much quicker than the previous three options. Furthermore, it can serve as an intermediate step toward each of the previous three options.

As an intermediate step, it has a risk-mitigation quality that’s particularly appealing to mid-sized organizations.

Whichever of these solutions a carrier pursues will ultimately depend on its individual context. Next, let’s turn to the practical steps a carrier can take toward instituting its own data management program.

PRACTICAL STEPS

Here are the practical steps that a carrier can take toward instituting its own data management program that can successfully support customer self-service. The program should have the following five characteristics:

1. A consolidated data repository

The antidote to data fragmentation is a single repository that consolidates data from all systems that are a primary source of customer data. For the typical carrier, this will include systems for quoting, policy administration, CRM, billing and claims. A consolidated repository results in a replicated copy of data, which is a typical allergy of traditional insurance IT departments. Managing the data replication through defined ETL processes will often preempt the symptoms of such an allergy.
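
A minimal sketch of such a defined ETL step might look like the following, with SQLite standing in for the consolidated repository and the source extracts faked for illustration; the table and field names are invented.

```python
# A sketch of a defined ETL step feeding the consolidated repository.
import sqlite3

repo = sqlite3.connect(":memory:")
repo.execute("""CREATE TABLE customer (
    source_system TEXT, source_id TEXT, name TEXT, dob TEXT,
    PRIMARY KEY (source_system, source_id))""")


def extract():
    """In practice: queries or files pulled from each primary source system."""
    yield ("auto_pas", [{"id": "A1", "name": "John Smith", "dob": "1970-03-02"}])
    yield ("home_pas", [{"id": "H9", "name": "J. Smith",   "dob": "1970-03-02"}])


def load(source: str, rows: list[dict]) -> None:
    """Upsert so re-running the job keeps the replicated copy consistent."""
    repo.executemany(
        "INSERT OR REPLACE INTO customer VALUES (?, ?, ?, ?)",
        [(source, r["id"], r["name"], r["dob"]) for r in rows])


for source, rows in extract():
    load(source, rows)
repo.commit()
print(repo.execute("SELECT COUNT(*) FROM customer").fetchone())
```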

2. A canonical data model

To address inconsistencies in data formats used within the primary systems, the consolidated data repository must use a canonical data model. All data feeding into the repository must conform to this model. To develop the data model pragmatically, simultaneously using both a top-down and a bottom-up approach will provide the right balance between theory and practice. Industry-standard data models developed by organizations such as the Object Management Group and ACORD will serve as a good starting point for the top-down analysis. The bottom-up analysis can start from existing source system data sets.
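
As a hypothetical illustration, the canonical model can be a small set of shared entities with one mapper per source system, marrying the top-down model with bottom-up source layouts. The entities and field names below are invented; they are not the OMG or ACORD models themselves.

```python
# A sketch of a canonical model with per-source mappers.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Party:                       # canonical "customer" entity
    party_id: str
    full_name: str
    date_of_birth: Optional[str]


@dataclass
class Policy:                      # canonical policy entity
    policy_id: str
    line_of_business: str
    holder: Party


def from_auto_system(row: dict) -> Policy:
    """Bottom-up mapping from the auto admin system's layout."""
    holder = Party(row["drv_id"], row["drv_nm"], row.get("drv_dob"))
    return Policy(row["pol_no"], "personal_auto", holder)


def from_life_system(row: dict) -> Policy:
    """A different source layout mapped onto the same canonical shape."""
    holder = Party(row["insured_key"], row["insured_name"], row.get("birth_dt"))
    return Policy(row["contract_id"], "life", holder)


print(from_auto_system({"pol_no": "AUTO-001", "drv_id": "D1",
                        "drv_nm": "John Smith", "drv_dob": "1970-03-02"}))
```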

3. “Operational Data Store” mindset — a Jedi mind trick

Modern operational systems often use an ODS to expose their data for downstream usage. The typical motivation for this is to eliminate (negative) performance impacts of external querying while still allowing external querying of data in an operational (as opposed to analytical) format. Advertising the consolidated data repository built with a canonical data model as an ODS will shift the organizational view of the repository from one of a single-system database to that of an enterprise asset that can be leveraged for additional operational needs. This is the data management program’s equivalent of a Jedi mind trick!

4. 24/7/365 availability

To adequately position the data repository as an enterprise asset, it must be highly available. For traditional insurance IT departments, 24/7/365 availability might be a new paradigm.

Successful implementations will require adoption of patterns for high availability at multiple levels. At the infrastructure level, useful patterns would include clustering for fail-over, mirrored disks, data replication, load balancing, redundancy, etc.

At the SDLC level, techniques such as continuous integration, automated and hot deployments, automated test suites, etc. will prove to be necessary. At the integration architecture level (for systems needing access to data in the consolidated repository), patterns such as asynchronicity, loose coupling, caching, etc., will need to be followed.
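
To illustrate two of the integration-level patterns just named, here is a hedged sketch of a read-through cache combined with a simple retry and graceful degradation for systems reading from the consolidated repository; the fetch function, cache policy and timings are placeholders, not a prescribed design.

```python
# A sketch of caching plus retry for consumers of the repository.
import time

_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 30


def fetch_with_retry(fetch, key: str, attempts: int = 3) -> dict:
    """Read-through cache in front of a repository call that may fail."""
    cached = _cache.get(key)
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]                      # serve from cache
    for attempt in range(attempts):
        try:
            value = fetch(key)
            _cache[key] = (time.time(), value)
            return value
        except ConnectionError:
            time.sleep(0.1 * (2 ** attempt))  # back off, then retry
    return cached[1] if cached else {}        # degrade gracefully


print(fetch_with_retry(lambda k: {"policy_id": k, "status": "in force"}, "AUTO-001"))
```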

5. Encryption of sensitive data

Once data from multiple systems is consolidated into a single repository, the impact of a potential security breach is amplified severalfold – and breaches will happen; it’s only a matter of time, be they internal or external, innocent or malicious. To mitigate some of that risk, it’s worthwhile to invest in infrastructure-level encryption of, at a minimum, sensitive data (options are available in the storage, database and data access layers).
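
As a sketch of what data-access-layer encryption of one sensitive field could look like, the example below uses the third-party cryptography package and an invented table; in practice the key would come from a key management service rather than being generated inline.

```python
# A sketch of data-access-layer encryption for one sensitive column.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # illustration only; use a KMS in practice
cipher = Fernet(key)

repo = sqlite3.connect(":memory:")
repo.execute("CREATE TABLE customer (id TEXT PRIMARY KEY, name TEXT, ssn BLOB)")

# Encrypt before the value ever lands in the repository...
repo.execute("INSERT INTO customer VALUES (?, ?, ?)",
             ("C1", "John Smith", cipher.encrypt(b"123-45-6789")))

# ...decrypt only where the application genuinely needs the clear value.
row = repo.execute("SELECT ssn FROM customer WHERE id = ?", ("C1",)).fetchone()
print(cipher.decrypt(row[0]).decode())
```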

A successful data management program spans several IT disciplines. To ensure coherency across all of them, oversight from a versatile architect capable of conceiving infrastructure, data and integration architectures will prove invaluable.

Insurance And Manufacturing: Lessons In Software, Systems, And Supply Chains

Recently, my boss Steve and I were talking about his early career days with one of those Big 8, then Big 6, then Big 5, then Big 4 intergalactic consulting firms. Steve came out of college with an engineering degree, so it was natural to start in the manufacturing industry. Learning about bills of material, routings, design engineering, CAD/CAM … “Ah yes,” he recalled, “Those were heady days.” And all those vendor-packaged manufacturing ERP systems that were starting to take the market by storm.

Eventually Steve found his way into the insurance industry, and thus began our discussion. One of the first things that struck Steve was the lack of standard software packages in the insurance industry. I don’t mean the lack of software vendors — there are plenty of those. Seemingly, though, each software solution was a one-off. Or custom. Or some hybrid combination. “Why?” we wondered.

The reasons, as we now know, were primarily reflected in an overall industry mindset:

  • A “but we are unique!” attitude was pervasive. Companies were convinced that if they all used the same software, there would be little to differentiate themselves from one another.
  • There was also an accepted industrywide, one-off approach. Conversations went something like this: “XYZ is our vendor. We really don’t like them. Taking new versions just about kills us. We don’t know why we even pay for maintenance, but we do.”

But the chief reason for a lack of standard software was the inability to separate product from process. What does this mean?

Well, you can certainly envision that your auto product in Minnesota is handled differently than your homeowners’ product in California. I’m not referring to just the obvious elements (limits, deductibles, rating attributes), but also the steps required for underwriting, renewal, and cancellation. Separation of product from process must go beyond the obvious rate/rule/form variations to also encompass internal business and external compliance process variations.

But there’s still plenty of processing — the heavy lifting of transaction processing — that’s the same and does not vary. For example, out-of-sequence endorsement processing is not something that makes a company unique and therefore would not require a custom solution.

Where the rubber meets the road, and where vendor packages have really improved their architecture over the last several years, is in providing the capability in their policy admin systems for companies to “drop” very specific product information, along with associated variations, into a very generic transaction system.

Once product “components” (digitized) are separated from the insurance processing engine, and once companies have a formal way to define them (standard language), they can truly start making their products “unique” with reuse and mass customization. Much like those manufacturing bills of material and routings looked to Steve way back when.

This separation of policy from product has been a key breakthrough in insurance software. So what is an insurance product, at least in respect to systems automation?

From Muddled To Modeled
The typical scenario to avoid goes something like this:

  • The business people pore over their filings and manuals and say, “This is the product we sell and issue.”
  • The IT people pore over program code and say, “That’s the product we have automated.”
  • The business people write a lot of text in their word processing documents. They find a business analyst to translate it into something more structured, but still text.
  • The business analyst finds a designer to make the leap from business text to IT data structures and object diagrams.
  • The designer then finds a programmer to turn that into code.

One version of the truth? More like two ships passing, and it’s more common than you may think. How can organizations expect success when the product development process is not aligned? Without alignment, how can organizations expect market and compliance responsiveness?

What’s the alternative? It revolves around an insurance “product model.” Much like general, industry-standard data models and object models, a product model uses a precise set of symbols and language to define insurance product rates, rules, and forms — the static or structural parts of an insurance product. In addition, the product model must also define the actions that are allowed to be taken with the policy during the life of the contract — the dynamic or behavioral aspect of the product model. So for example, on a commercial auto product in California, the model will direct the user to attach a particular form (structure) for new business issuance only (actions).
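
As a hypothetical sketch of that separation, the structural parts of the product (rates, rules, forms) live as data, while the behavioral parts are rules the generic processing engine consults. The form number and rule wording below are invented for illustration.

```python
# A sketch separating the structural and behavioral parts of a product model.
from dataclasses import dataclass, field


@dataclass
class FormAttachmentRule:            # behavioral: when the form applies
    form_id: str
    states: set[str]
    transactions: set[str]           # e.g. {"new_business"}, {"renewal"}

    def applies(self, state: str, transaction: str) -> bool:
        return state in self.states and transaction in self.transactions


@dataclass
class ProductModel:                  # structural: rates, rules, forms
    name: str
    rate_tables: dict[str, float] = field(default_factory=dict)
    forms: list[FormAttachmentRule] = field(default_factory=list)

    def forms_to_attach(self, state: str, transaction: str) -> list[str]:
        return [r.form_id for r in self.forms if r.applies(state, transaction)]


commercial_auto = ProductModel(
    name="Commercial Auto",
    rate_tables={"base_rate_CA": 1.12},
    forms=[FormAttachmentRule("CA-HYPO-01", {"CA"}, {"new_business"})],
)

# The generic processing engine asks the product model what to do.
print(commercial_auto.forms_to_attach("CA", "new_business"))  # ['CA-HYPO-01']
print(commercial_auto.forms_to_attach("CA", "endorsement"))   # []
```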

Anyone familiar with object and data modeling knows there are well-defined standards for these all-purpose models. For insurance product modeling, at least currently, such standards are more proprietary, such as IBM’s and Camilion’s models, and of course there are others. It is interesting to note that ACORD now has under its auspices the Product Schema as the result of IBM’s donation of aspects of IAA. Might this lead to more industry standardization?

With product modeling as an enabler, there’s yet another key element to address. Yes, that would be the product modelers — the people responsible for making it work. Product modeling gives us the lexicon or taxonomy to do product development work, but who should perform that work? IT designers with sound business knowledge? Business people with analytical skills? Yes and yes. We must finally drop the history of disconnects where one side of the house fails to understand the other.

With a foundation of product modeling and product modelers in place, we can move to a more agile or lean product life cycle management approach — cross-functional teams versus narrow, specialized skills; ongoing team continuity versus ad hoc departmental members; frequent, incremental product improvements versus slow, infrequent, big product replacements.

It all sounds good, but what about the product source supplier — the bureaus?

Supply Chain: The Kinks In Your Links
Here is where the comparison between insurance and manufacturing takes a sharp turn. In their pursuit of quality and just-in-time delivery, manufacturers can make demands on their supply chain vendors. Insurance companies, on the other hand, are at the mercy of the bureaus. ISO, NCCI, and AAIS all develop rates, rules, and forms, of course. They then deliver these updates to their member subscribers via paper manuals or electronically via text.

From there the fun really begins. Insurance companies must log the info, determine which of their products and territories are impacted, compare the updates to what they already have implemented and filed, conduct marketing and business reviews, and hopefully and eventually, implement at least some of those updates.

Recent studies by Novarica and SMA indicate there are approximately 3,000 to 4,000 changes per year in commercial lines alone. The labor cost to implement just one ISO circular with a form change and a rate change is estimated to be $135,000, with the majority of costs in the analysis and system update steps.

There has got to be a better way …

ISO at least has taken a step in the right direction with the availability of its Electronic Rating Content. In either Excel or XML format, ISO interprets its own content to specify such constructs as premium calculations (e.g., defined order of calculation, rounding rules), form attachment logic (for conditional forms), and stat code assignment logic (to support the full plan).
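
To picture what machine-consumable rating content buys you, here is a hypothetical sketch of a defined order of calculation with explicit rounding at each step. It is illustrative only and does not reflect the actual layout of ISO’s Electronic Rating Content.

```python
# A hypothetical representation of rating content: ordered steps with rounding.
from decimal import Decimal, ROUND_HALF_UP

CALCULATION_STEPS = [
    # (description,            factor name,    round to)
    ("apply base rate",        "base_rate",    "0.01"),
    ("apply territory factor", "territory",    "0.01"),
    ("apply schedule credit",  "schedule_mod", "1"),     # whole dollars
]


def rate(exposure: Decimal, factors: dict) -> Decimal:
    premium = exposure
    for description, name, precision in CALCULATION_STEPS:
        premium = premium * Decimal(str(factors[name]))
        premium = premium.quantize(Decimal(precision), rounding=ROUND_HALF_UP)
    return premium


print(rate(Decimal("100000"),
           {"base_rate": 0.012, "territory": 1.15, "schedule_mod": 0.95}))
```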

A step in the right direction, no doubt. But what if ISO used a standard mechanism and format to do this? ACORD now has under its control the ACORD Product Schema. This is part of IBM’s fairly recent IAA donation. It provides us a standard way to represent the insurance product and a standard way to integrate with policy admin systems. What if ISO and the other key providers in the product supply chain started it all off this way?

Dream on, you say? While you may not have the clout to demand that the bureaus change today, you do pay membership fees, and collectively the members have a voice in encouraging ongoing improvements in the insurance “supply chain.”

In the meantime, the goal to be lean and agile with product life cycle management continues. We must respond quickly and cost-effectively to market opportunities, policyholder feedback, and regulatory requirements. That all starts at the product source … but it doesn’t end there. So while the supply chain improves its quality and delivery, insurance companies will need to gain efficiencies throughout every corner of their organizations in order to achieve those lean goals.

In writing this article, David collaborated with his boss Steve Kronsnoble. Steve is a senior manager at Wipfli and an expert in the development, integration, and management of information technology. He has more than 25 years of systems implementation experience with both custom-developed and packaged software using a variety of underlying technologies. Prior to Wipfli, Steve worked for a major insurance company and leverages that experience to better serve his clients.