
Keys to Migrating Policy Data

Insurance policy data migration, namely the process by which insurance policies are transferred from one or more source systems to one or more target systems, can be of strategic importance to insurers undergoing core policy system transformation for a number of reasons:

  1. When a legacy policy system is replaced with a net new policy platform, the new platform may deliver additional functionality, but it is the policy data migration that has the potential to deliver mature, profitable books of business to the new platform. As such, in many cases, the underlying business case for a core policy system replacement relies heavily on successful policy data migration.
  2. Legacy policy system decommissioning relies on successful data migration; it is not possible to decommission a policy system until all significant books of business have been migrated off it. As a consequence, building a strong data migration capability able to quickly migrate books of business off legacy policy systems cuts the cost of running the IT estate, freeing funds and resources for other value-add activities.
  3. Unsuccessful data migration carries significant downside: poor data quality of migrated policies; lost-in-translation policies that are extracted from the source system but that never land in the target system; and data breaches related to migrated records. It is in the interest of insurers to invest in data migration upfront to get it right the first time around.

See also: 4 Steps to Ease Data Migration  

Although each policy data migration has its own characteristics, most share the following architectural components:

  • Source System/s: the source system/s from which policies are being migrated.
  • Source System Extract Engine: component extracting policies from the source system/s.
  • Transform Engine: component transforming the policies extracted from the source system into a format that can be accepted by the target system/s.
  • Target System/s: the target system/s into which policies are being migrated.
  • Load Engine: component committing the output of the Transform Engine into the target system/s.
  • Reconciliation Solution: component counting migrated records at key steps in the policy data migration flow, to ensure that all extracted policies ultimately land in the target system/s.
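
To make these components concrete, below is a minimal Python sketch of the end-to-end flow; the record fields and function bodies are hypothetical stand-ins, not a reference implementation.

```python
# Minimal sketch of the migration flow; field names and logic are
# hypothetical stand-ins for the real engines.

def extract(source_system):
    """Source System Extract Engine: pull policy records from the source."""
    return list(source_system)

def transform(extracted):
    """Transform Engine: reshape source records into the target format."""
    return [{"policy_id": rec["id"], "holder": rec["name"].strip().title()}
            for rec in extracted]

def load(transformed, target_system):
    """Load Engine: commit transformed records into the target system."""
    for rec in transformed:
        target_system.append(rec)  # stand-in for the real load transaction
    return transformed

def reconcile(extracted, transformed, loaded):
    """Reconciliation Solution: count records at each step of the flow."""
    return {"extracted": len(extracted),
            "transformed": len(transformed),
            "loaded": len(loaded)}
```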

Below are some points to consider when designing insurance policy data migration, paired with some experience-based points of view.

1. Target system/s and architecture should be built so that policy data migration can be effective:

  • When designing the target policy system, and all the systems around it, the impact of solution decisions on data migration should be considered. For example, a decision to introduce a net new solution to handle party-centricity may have a limited impact on new business flows, but a significant impact on data migration. If the data migration impact is disregarded, then the project may find further down the line that data migration is prohibitively complex to perform because of the party-centricity solution.

2. Policy data migration should be considered at a company level, rather than at a divisional level:

  • Within insurance companies, books of business residing on source systems in many cases relate to multiple divisions. For example, source systems X, Y and Z may each have books of business from retail, commercial and specialty. When this is the case, it is important to consider data migration at a company level, so that books of business are migrated in such a way that decommissioning is feasible. Otherwise, the risk is that policy data migration programs driven by a single division fail to capture the value derived from decommissioning source systems, as each source system may hold books of business other than those that the specific division is migrating.

3. The pipeline of policy data migrations should be developed in parallel to the timelines for core policy system replacement:

  • A common error in policy data migration planning is to delay it until the detailed planning for the core policy system replacement program is complete. The risk is that it becomes apparent only too late that the rate at which books of business can be migrated is insufficient to complete the data migration within the program timelines. In some cases, the rate may be so slow, for example no more than two books of business per calendar year, that it is not even possible to migrate all books from source to target before the target itself becomes legacy and is replaced.
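
The arithmetic behind this risk is simple enough to sanity-check during planning; a back-of-the-envelope sketch, with purely illustrative figures:

```python
# Back-of-the-envelope check of migration throughput against program
# timelines; all figures are illustrative.
books_to_migrate = 30      # books of business across the source systems
books_per_year = 2         # demonstrated migration rate
program_years = 5          # planned duration of the replacement program

years_needed = books_to_migrate / books_per_year
print(f"{years_needed:.0f} years needed vs {program_years} planned")
# 15 years needed vs 5 planned: the rate, not the plan, is the constraint.
```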

4. Migration reconciliation should be catered for with either an automated or manual solution:

  • Reconciliation with regard to policy data migration entails two elements: firstly, verifying that all the records that are extracted are transformed, and that all the records that are transformed are then loaded into the target architecture; and secondly, determining what has happened to dropped records. For example, if during a data migration cycle 100,000 records should be extracted, transformed and loaded, but only 99,980 are extracted, 99,960 are transformed and 99,940 are loaded, it is the reconciliation solution that should highlight that 20 records have been dropped at each step, and that should indicate what has happened to each of the 60 records.
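
A minimal sketch of that reconciliation logic, assuming each record carries a unique identifier so that dropped records can be named rather than just counted:

```python
# Reconciliation sketch: compare record IDs step-to-step so that every
# dropped record can be identified, not merely counted.

def reconcile(expected_total, extracted_ids, transformed_ids, loaded_ids):
    return {
        "missed_at_extract": expected_total - len(extracted_ids),
        "dropped_at_transform": sorted(set(extracted_ids) - set(transformed_ids)),
        "dropped_at_load": sorted(set(transformed_ids) - set(loaded_ids)),
    }

# On the worked example above (100,000 expected; 99,980 extracted;
# 99,960 transformed; 99,940 loaded), the report shows 20 records lost
# at each step and names the 40 that were dropped after extraction.
```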

5. There is significant benefit in defining the goals of each policy data migration in a single sentence:

  • Not everyone is familiar with data migration, so defining what data migration is looking to achieve in a concise manner provides a clear platform to engage stakeholders outside data migration. Below is a template sentence using letters to highlight key data migration elements:
    • The data migration for book of business X needs to migrate Y records from source system M into target system Z at a load success percentage of T and at a rate of R records per second.

6. The decision on whether to migrate policies as quotes or as live policies should be made early in the solution process:

  • When performing policy data migration, the options are to migrate policies into the target system/s as live policies, or as quotes that then convert into live policies a number of days before their renewal date. The advice is that unless there are strong reasons to migrate records as live policies, it is best to migrate as quotes that then convert into policies, as loading as live policies introduces complexity around the live elements of the policies, such as billing accounts.

7. Policy data migration performance implications should be considered upfront:

  • Where the data migration components use transactions that are either shared with new business, or particularly complex and performance-heavy, it is important to ensure that performance implications are considered. For example, a transaction that is used 10 times per minute in new business may be used 1,000 times per minute in data migration, which may cause contention issues. Consequently, both the target architecture and the data migration solution should be designed to avoid performance bottlenecks.
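
One simple mitigation is to throttle the migration's use of shared transactions; below is a sketch, with an illustrative limit:

```python
import time

# Client-side throttle to keep the data migration's use of a shared
# transaction below a contention threshold; the limit is illustrative.

class Throttle:
    def __init__(self, max_calls_per_minute):
        self.min_interval = 60.0 / max_calls_per_minute
        self.last_call = 0.0

    def wait(self):
        pause = self.last_call + self.min_interval - time.monotonic()
        if pause > 0:
            time.sleep(pause)
        self.last_call = time.monotonic()

throttle = Throttle(max_calls_per_minute=1_000)
# for record in records_to_load:      # hypothetical load loop
#     throttle.wait()
#     shared_transaction(record)
```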

See also: The Rise of Big (Bad) Data  

Key takeaways:

  • Policy data migration in insurance has the potential to drive significant business value in that it can deliver mature books of business to net new policy handling systems, and it may allow decommissioning of legacy systems.
  • Most policy data migrations share common components, namely Source System/s, Source System Extract Engine, Transform Engine, Target System/s, Load Engine and Reconciliation Solution.
  • Designing a policy data migration is not simple; leveraging insights from previous data migrations may be the difference between success and failure.

4 Steps to Ease Data Migration

Mobile has been a huge change agent for technology across all industries, and insurance is no different. The demands of today’s customer have dictated that legacy systems with siloed data be re-examined and replaced with modern, digital-ready solutions. With 64% of insurance employees willing to use a mobile app or site to improve access to sales information in the field, data must be accessible across all channels of an organization.

Migrating data from siloed, disconnected systems to a new cutting-edge platform is no easy task. A successful data migration requires extensive preparation, custom software architecture and knowledge of both old and new systems. Before moving to new business software systems, developing a plan to streamline the transition and to prevent major hiccups is imperative.

Get Rid of Unnecessary Data

The longer that insurance providers have been in business, the more data there is to deal with. Additionally, legacy data that is no longer necessary for current business operations accumulates for a variety of reasons. Before moving to a new house, most owners take the opportunity to clean the attic and get rid of items that have accumulated and are no longer needed; the same idea applies when migrating data from a legacy system to a new solution. When preparing to migrate, insurance providers should take a close look at data in legacy systems and only migrate data that is necessary for today’s business operations.

See also: Why Exactly Does Big Data Matter?  

Rethink How to Map Data to New Systems

Demand for real-time data access and frictionless user experiences has led many digital-ready software solutions down different paths than those taken by legacy systems. Both the underlying technology that stores the data and how the data is structured relationally are very different from what was seen decades ago when insurance providers were developing their first software systems. Insurers need to account for these changes and restructure the data during the transition so that it best suits today’s needs. Because this process is time-consuming and complex, many providers benefit from third-party digital transformation partners whose expertise can be leveraged to provide a best-of-breed solution.

Use Out-of-the-Box Best Practices

Legacy systems have been unable to adapt to business processes that have evolved dramatically in recent years, meaning many insurance providers have been forced to rely on outdated practices. When rolling over to a new system, it is key to start with the best practices contained in the new software and modify only when necessary. The software vendor has evolved its out-of-the-box processes over the years, and they should be considered best-of-breed. As a rule of thumb, an insurance provider should only modify these processes because of some unique part of the business that is key to its strategic goals. Migrating old processes or modifying recommended ones can be inefficient and costly, so providers shouldn’t throw away time-tested solutions unless there’s a critical strategic reason to do so.

Plan Ahead for a Smooth Rollout

Making the transition to modern software solutions can easily derail day-to-day operations if providers don’t take the right precautions. To make sure the rollout is seamless, providers must develop a strategy to “keep the lights on” that often involves mixed-mode operations between the legacy systems and the new digital solutions. This strategy involves a great deal of up-front architectural planning to ensure the data is in sync between the old and new systems, along with a road map that sunsets the legacy solution piece-by-piece in a surgical fashion. A smooth rollout also requires buy-in and training on how to use the new technology, which, along with an extensive beta testing phase to gather feedback, are wise investments in the overall long-term success of the rollout.

See also: 3 Types of Data for Personalization  

Migration of mission-critical legacy business systems to modern digital-ready solutions is not easy. Up-front planning for data migration, training, business rules, process, architectural evolution and both short-term and long-term business goals are all key considerations that should factor into the overall migration road map. Rushing a digital transformation effort can result in incredibly costly mistakes down the line, so insurance providers need proper planning to ensure a smooth transition. Providers should strongly consider enlisting the help of a partner that is skilled in digital transformation to help guide them along the journey.

Disjointed Reinsurance Systems: A Recipe for Disaster

Insurers’ numerous intricate reinsurance contracts and special pool arrangements, countless policies and arrays of transactions create a massive risk of having unintended exposure. The inability to ensure that each insured risk has the appropriate reinsurance program associated with it is a recipe for disaster.

Having disjointed systems—a combination of policy administration system (PAS) and spreadsheets, for example—or having systems working in silos are sure ways of having risks fall through the cracks. The question is not if it will happen but when and by how much.

Beyond excessive risk exposure, the risks are many: claims leakage, poor management of aging recoverables and lack of business intelligence capabilities. There’s also the likelihood of not being able to track out-of-compliance reinsurance contracts. For instance, if a reinsurer requires a certain exclusion in the policies it reinsures and the direct writer issues the policy without the exclusion, then the policy is out of compliance, and the reinsurer may deny liability.
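
Assuming required exclusions can be expressed as machine-readable codes on treaties and policies (a hypothetical data model), such a compliance check can be automated along these lines:

```python
# Hypothetical compliance check: flag policies issued without an exclusion
# that the covering reinsurance treaty requires.

def out_of_compliance(policies, required_exclusions_by_treaty):
    flagged = []
    for policy in policies:
        required = required_exclusions_by_treaty.get(policy["treaty_id"], set())
        missing = required - set(policy["exclusions"])
        if missing:
            flagged.append((policy["policy_id"], sorted(missing)))
    return flagged

treaties = {"T-001": {"FLOOD_EXCLUSION"}}
policies = [{"policy_id": "P-1", "treaty_id": "T-001", "exclusions": []}]
print(out_of_compliance(policies, treaties))  # [('P-1', ['FLOOD_EXCLUSION'])]
```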

The result is unreliable financial information for trends, profitability analysis and exposure, to name a few.

Having fragmented solutions and manual processes is the worst formula when it comes to audit trails. This is particularly troubling in an age of stringent standards in an increasingly internationally regulated industry. Integrating the right solution will help reduce risks to an absolute minimum.

Consider vendors offering dedicated and comprehensive systems as opposed to policy administration system vendors, which may simply offer “reinsurance modules” as part of all-encompassing systems. Failing to pick the right solution will cost the insurer frustration and delays, as it attempts to “right” the solution through a series of customizations. This will surely lead to cost overruns, a lengthy implementation and an uncertain outcome, as an incomplete system will need to be customized by adding missing functions.

Common system features a carrier should look out for are:
  • Cession treaties and facultative management
  • Claims and events management
  • Policy management
  • Technical accounting (billing)
  • Bordereaux/statements
  • Internal retrocession
  • Assumed and retrocession operations
  • Financial accounting
  • AP/AR
  • Regulatory reporting
  • Statistical reports
  • Business intelligence

Study before implementing

Picking the right solution is just the start. Implementing a new solution still has many pitfalls. Therefore, the first priority is to perform a thorough and meticulous preliminary study.

The study is directed by the vendor and, similar to an audit, is conducted through a series of meetings and interviews with the different stakeholders: IT, business, etc. It typically lasts one to three weeks depending on the complexity of the project. A good approach is to spend a half-day conducting the scheduled meeting(s) and the other half drafting the findings and submitting them for review the following day.

The study should at least contain the following:

  • A detailed report on the company’s current reinsurance management processes.
  • A determination of potential gaps between the carrier reinsurance processes and the target solution.
  • A list of contracts and financial data required for going live.
  • Specifications for the interfaces.
  • Definition of the data conversion and migration strategy.
  • Reporting requirements and strategy.
  • Detailed project planning and identification of potential risks.
  • Repository requirements.
  • Assessment and revision of overall project costs.

Preliminary study (gap analysis) sample:

1. Introduction
  • General introduction and description of project objectives and stakeholders
  • What’s in and out of scope
2. Description of current business setting

3. Business requirements

  • Cession requirements
  • Assumed and retrocession requirements
4. Systems environment topics
  • Interfaces/hardware and software requirements
5. Implementation requirements
6. System administration
  • Access, security, backups
7. Risks, pending issues and assumptions
8. Project management plan

The preliminary study report must be submitted to each stakeholder for review and validation as well as endorsement by the head of the steering committee of the insurance company before the start of the project. If necessary, the study should be revised until all parts are adequately defined. Ideally, the report should be used as a road map by the carrier and vendor.

All project risks and issues identified at this stage will be incorporated into the project planning. It saves much time and money to discover them before the implementation phase. One of the main reasons why projects fail is poor communication. Key people on different teams need to actively communicate with each other. There should be at least one person from each invested area (IT, business and upper management) on a well-defined steering committee.

A clear-cut escalation process must be in place to tackle any foreseeable issues and address them in a timely manner.

A Successful Implementation Process
The following are key areas and related guidelines that are essential to successfully carrying out a project.

Data cleansing
Before migration, an in-depth data scrubbing or cleansing is recommended. This is the process of amending or removing data derived from the existing applications that is erroneous, incomplete, inadequately formatted or replicated. The discrepancies discovered or deleted may have been originally produced by user-entry errors or by corruption in transmission or storage.

Data cleansing may also include actions such as harmonization of data, which relates to identifying commonalities in data sets and combining them into a single data component, as well as standardization of data, which is a means of changing a reference data set to a new standard—in other words, use of standard codes.
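
A minimal sketch of these steps on hypothetical records: dropping replicated and incomplete entries, then standardizing a reference code set.

```python
# Cleansing sketch: deduplicate, drop incomplete records, standardize codes.
# Field names and the code table are hypothetical.

LOB_STANDARD = {"FIRE": "PROPERTY", "PROP": "PROPERTY", "AUTO": "MOTOR"}

def cleanse(records):
    seen, cleaned = set(), []
    for rec in records:
        if rec["policy_id"] in seen:           # remove replicated records
            continue
        seen.add(rec["policy_id"])
        if not rec.get("insured_name"):        # remove incomplete records
            continue
        rec["insured_name"] = rec["insured_name"].strip().title()
        rec["line_of_business"] = LOB_STANDARD.get(
            rec.get("line_of_business", ""), "UNKNOWN")  # standardize codes
        cleaned.append(rec)
    return cleaned
```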

Data migration

Data migration pertains to the moving of data between the existing system (or systems) and the target application as well as all the measures required for migrating and validating the data throughout the entire cycle. The data needs to be converted so that it’s compatible with the reinsurance system before the migration can take place.

This step entails mapping all of the data, with business rules and relevant codes attached to it, and is required before the automatic migration can take place.
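
A sketch of what such a mapping might look like, with a business rule or code translation attached to each field; all names and codes are hypothetical.

```python
# Mapping sketch: each target field carries its source field plus the
# business rule or code translation applied during migration.

STATUS_CODES = {"A": "ACTIVE", "L": "LAPSED", "C": "CANCELLED"}

FIELD_MAP = {
    # target field  : (source field, transformation rule)
    "policy_number" : ("POLNO",  lambda v: v.strip()),
    "status"        : ("STATCD", lambda v: STATUS_CODES[v]),
    "sum_insured"   : ("SUMINS", lambda v: round(float(v), 2)),
}

def map_record(source_row):
    return {target: rule(source_row[source])
            for target, (source, rule) in FIELD_MAP.items()}

print(map_record({"POLNO": " 12345 ", "STATCD": "A", "SUMINS": "250000"}))
# {'policy_number': '12345', 'status': 'ACTIVE', 'sum_insured': 250000.0}
```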

An effective and efficient data migration effort involves anticipating potential issues and threats as well as opportunities, such as determining the most suitable data-migration methodology early in the project and taking appropriate measures to mitigate potential risks. Suitable data migration methodology differs from one carrier to another based on its particular business model.

Analyze and understand the business requirements before gathering and working on the actual data. Thereafter, the carrier must delineate what needs to be migrated and how far back. In the case of long-tail business, such as asbestos coverage, all the historical data must be migrated. This is because it may take several years or decades to identify and assess claims.

Conversely, for short-tail lines, such as property fire or physical auto damage, for which losses are usually known and paid shortly after the loss occurs, only the applicable business data is to be singled out for migration.
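
A sketch of that retention rule, with illustrative line names and an illustrative cutoff date:

```python
from datetime import date

# Retention sketch: long-tail lines migrate in full; short-tail lines only
# within a recent window. Line names and the cutoff date are illustrative.

LONG_TAIL_LINES = {"asbestos", "general_liability"}
SHORT_TAIL_CUTOFF = date(2015, 1, 1)

def should_migrate(record):
    if record["line_of_business"] in LONG_TAIL_LINES:
        return True                              # keep the full history
    return record["effective_date"] >= SHORT_TAIL_CUTOFF

print(should_migrate({"line_of_business": "asbestos",
                      "effective_date": date(1985, 6, 1)}))   # True
print(should_migrate({"line_of_business": "auto_physical_damage",
                      "effective_date": date(2010, 6, 1)}))   # False
```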

A detailed mapping of the existing data and system architecture must be drafted to isolate any issues related to the conversion early on. Most likely, workarounds will be required to overcome the specificities or constraints of the new application. As a result, it will be crucial to establish checks and balances or guidelines to validate the quality and accuracy of the data to be loaded.

Identifying subject-matter experts who are thoroughly acquainted with the source data will lessen the risk of missing undocumented data snags and help ensure the success of the project. Therefore, proper planning for accessibility to qualified resources at both the vendor and insurer is critical. You’ll also need experts in the existing systems, the new application and other tools.

Interfaces

Interfaces in a reinsurance context relate to connecting the data residing in the upstream system, or PAS, to the reinsurance management system, plus integrating the reinsurance data with other applications, such as the general ledger, the claims system and business intelligence tools.

Integration and interfaces are achieved by exchanging data between two different applications, but they can include tighter mechanisms such as direct function calls: synchronous communications used for information retrieval, where the request is made via a direct function call to the target system.
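
A minimal sketch of that synchronous pattern, with a hypothetical PAS client standing in for the upstream system:

```python
# Synchronous retrieval sketch: the reinsurance system calls the upstream
# PAS directly and blocks until the policy data is returned. The client
# and its data are hypothetical stand-ins.

class PASClient:
    def __init__(self, policies):
        self._policies = policies          # stand-in for the upstream PAS

    def get_policy(self, policy_id):
        # Direct, blocking function call used for information retrieval.
        return self._policies[policy_id]

pas = PASClient({"P-1": {"policy_id": "P-1", "sum_insured": 1_000_000}})
policy = pas.get_policy("P-1")             # synchronous request to the PAS
```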

Again, choosing the right partner will be critical. A provider with extensive experience in developing interfaces between primary insurance systems, general ledgers, BI suites and reinsurance solutions most likely has already developed such interfaces for the most popular packages and will have the know-how and best practices to develop new ones if needed. This will ensure that the process will proceed as smoothly as possible.

After the vendor (primarily) and the carrier carry out all essential implementation specifics, consolidating the process automation and integrations required to deliver the system, the aim is to provide a fully deployable and testable solution ready for user acceptance testing in the reinsurance system test environment.

Formal user training must take place beforehand. It needs to include a role-based program and ought not to be a “one-size-fits-all” training course. Each user group needs to have a specific training program that relates to its particular job functions.

The next step is to prepare for a deployment in production. You’ll need to perform a number of parallel runs of the existing reinsurance solutions and the new reinsurance system and be able to replicate each one and reach the same desired outcome before going live.
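
A sketch of that parallel-run check, with hypothetical stand-ins for the two systems:

```python
# Parallel-run sketch: feed identical transactions through the legacy and
# the new reinsurance systems and diff the outcomes; run_legacy and
# run_new are hypothetical stand-ins for the two systems.

def compare_parallel_runs(transactions, run_legacy, run_new):
    mismatches = []
    for txn in transactions:
        old_result, new_result = run_legacy(txn), run_new(txn)
        if old_result != new_result:
            mismatches.append((txn, old_result, new_result))
    return mismatches   # should be empty before going live
```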

Now that you’ve installed a modern, comprehensive reinsurance management system, you’ll have straight-through automated processing with all the checks and balances in place. You will be able to reap the benefits of a well-thought-out strategy paired with an appropriate reinsurance system that will lead to superior controls, reduced risk and better financials. You’ll no longer have any dangerous hidden “cracks” in your reinsurance program.
This article first appeared in Carrier Management magazine.