Integrating Group Life and Voluntary Benefits

Group and voluntary benefits providers vary in a hundred different ways. If you are a supplementary benefits provider that only provides one product to the group market, your data integration issues with multiple brokers and employers may still be complex. The more products you sell into the group and voluntary space, the more difficult your data integration will be.

Let’s say that your organization carries group life, voluntary supplemental life, dependent life, LTC and AD&D products. Without modernization, it is likely that your organization will have several hurdles to surmount. The first is to develop one consolidated repository from all of the data that is likely held on multiple systems. The second is to make that set of data available to the many different people and institutions that have a vested interest in access. On the flip side, insurers need to be able to receive data efficiently, as well. Carriers must be able to import data received from various benefit partners into their source systems through a single point of entry. Without this, entry or import issues could lead to benefit integrity issues, where data is correct on one platform but incorrect on another. These types of basic data errors will quickly erode relationships with employees and benefit partners.
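To make the "single point of entry" idea concrete, here is a minimal sketch of an import gateway that validates every inbound partner record once before it reaches the consolidated repository. The field names, product codes and `EnrollmentRecord` type are hypothetical illustrations, not any carrier's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EnrollmentRecord:
    employee_id: str
    product: str          # e.g. "group_life", "ad&d"
    coverage_amount: float
    source_partner: str

# Hypothetical set of product codes the carrier supports
KNOWN_PRODUCTS = {"group_life", "voluntary_life", "dependent_life", "ltc", "ad&d"}

def import_records(raw_rows, repository):
    """Single point of entry: validate each inbound row once, then
    write it to the consolidated repository; reject the rest so the
    same data can never be correct on one platform and wrong on another."""
    errors = []
    for row in raw_rows:
        if row.get("product") not in KNOWN_PRODUCTS:
            errors.append((row, "unknown product code"))
            continue
        if float(row.get("coverage_amount", 0)) <= 0:
            errors.append((row, "invalid coverage amount"))
            continue
        repository.append(EnrollmentRecord(
            employee_id=row["employee_id"],
            product=row["product"],
            coverage_amount=float(row["coverage_amount"]),
            source_partner=row["source_partner"],
        ))
    return errors
```

Because every partner feed passes through the same validation, a rejected row is surfaced to the sending partner immediately instead of silently diverging between systems.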

One way to help alleviate potential data issues is for insurers to focus on providing simple products with simple rate structures. Focus on guaranteed issue limits. Anything that has to be approved or underwritten after payroll deductions begin will cause deduction and billing issues. An exception could be made if an insurer is able to provide automated underwriting decisions at the point of sale.
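The guaranteed-issue rule above can be sketched as a simple point-of-sale decision function. The thresholds and the three-way outcome are hypothetical; the point is that anything below the guaranteed issue limit (or clearing automated underwriting) is issued before payroll deductions start, and everything else is referred rather than deducted.

```python
def point_of_sale_decision(requested_amount, guaranteed_issue_limit,
                           passes_automated_uw):
    """Hypothetical point-of-sale rule: issue immediately within the
    guaranteed issue limit, or on a clean automated underwriting result;
    otherwise refer, so no payroll deduction is set up prematurely."""
    if requested_amount <= guaranteed_issue_limit:
        return "issue"
    if passes_automated_uw:
        return "issue"   # automated underwriting decision at point of sale
    return "refer"       # do NOT start payroll deductions yet
```

The design choice is that "refer" is the only path that delays coverage, which keeps deduction and billing records from ever getting ahead of the underwriting decision.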

The data requirements for employers and enrollment partners vary widely (in part because no standards exist), which places more of the data integration responsibility on individual carriers to interact with individual employers or benefits companies. So, the easier it is for your IT teams or vendor partners to make those connections, the better off you are likely to be when it comes time for an employer to renew their contracts. It makes sense to pursue a course that keeps your systems agile.

What about a fresh start?

When it makes sense, we regularly recommend that, instead of attempting to migrate current and past business to a new platform, insurers start fresh with a new system dedicated solely to the one program. If an insurer is moving into a new market or launching new products, why not learn from past system issues and product issues and embrace a clean slate, eliminating the need to translate and carry cumbersome legacy programming into a new environment? Start with a brand new set of products and filings, a brand new marketing plan…perhaps even a brand new name to signify the difference.

Within group and voluntary benefits, this approach makes its case when you look at just a few of the benefits: simplified testing, fewer resources required to launch, less expense, less risk to the old system and old data, and dramatically increased flexibility in data usage, capability development and integration points. Managers who touch the system are far more likely to trust the data they see, reducing a “checks and balances” approach to billing, reconciling, correspondence and a dozen other areas where clean data and quick visualization are essential.

We’ll discuss more about data strategies in the coming months, including ways you can build effective technology bridges and keep a high level of data integrity.

Integrating Strategy, Risk and Performance

While many (including me) talk about the need for integrating the setting and execution of strategy, the management of risk, decision-making and performance monitoring, reporting and management, there isn’t a great deal of useful guidance on how to do it well.

A recent article in CGMA Magazine, 8 Best Practices for Aligning Strategy, Planning and Risk, describes a methodology used by Mass Mutual that it calls the “Pinwheel.”

There are a number of points in the article that I like:

  • “Success in business is influenced by many factors: effective strategy and execution; deep understanding of the business environment, including its risks; the ability to innovate and adapt; and the ability to align strategy throughout the organization.”
  • “The CEO gathers senior corporate and business unit leaders off-site three times a year. As well as fostering transparency, teamwork and alignment, this ensures that the resulting information reaches the board of directors in time for its meetings….The result: The leadership team is more engaged in what the company’s businesses are doing, not just divisional priorities. This makes them more collaborative and informed leaders. This helps foster a more unified brand and culture across the organization.”
  • “A sound understanding of global business conditions and trends is fundamental to effective governance and planning.”
    Comment: Understanding the external context is critical if optimal objectives and strategies are to be set, with an adequate understanding of the risks inherent in each strategy and the relative merits of every option.
  • “Strategy and planning is a dynamic process, and disruptive innovation is essential for cultural change and strategic agility. Management and the board must continually consider new initiatives that may contribute to achieving the organization’s long-term vision and aspirations.”
  • Key risk indicators are established for strategies, plans, projects and so on.
  • “Evaluation and monitoring to manage risks and the overall impact on the organization is an ongoing process….Monitoring is a continuous, multi-layered process. In addition to quarterly monitoring of progress against the three-year operating plan and one-year budget, the company has initiated bottom-up ‘huddle boards’ that provide critical information across all levels of the organization.”
  • “Effective governance requires a tailored information strategy for the executive leadership team and the board of directors…. This should include: essential information needed to monitor and evaluate strategic execution of the organization; risks to the achievement of long-term objectives; and risks related to conforming to compliance and reporting requirements.”
  • “Integrating the ERM, FP&A and budget functions can help to manage risks effectively and to allocate limited capital more quickly and efficiently.”

I am not familiar with the company and its methodology, but based on the limited information in the article I think there are some areas for improvement:

1. Rather than selecting strategies and objectives and only then considering risk, the consideration of risk should be a critical element in the strategy-selection process.

2. The article talks about providing performance and risk information separately to the corporate development and risk functions. Surely, this should be integrated and used primarily by operating management to adjust course as needed.

3. I am always nervous when the CFO and his team set the budget and there is no mention of how operating management participates in the process. However, it is interesting that the risk function at Mass Mutual is involved.

What do you think? I welcome your comments.

Why Implementations of Core Systems Fail

As an engineer (at least that’s what my university degree says), I must say I like to solve problems. Big, ugly, complex problems can be a great challenge.

We all know what has been happening with insurers’ core systems over the past several years. To respond to the challenging needs for product agility, customer-centricity and operational effectiveness, insurance companies are moving toward new core systems and away from the constraints of their legacy systems. And there are oodles of problems to be solved. Product modeling and patterns, configurability and customization options, integration and connectivity, external data sources, testing automation…it’s a tasty list, my friend.

And yet…and yet.

Even if these complex problems are nicely solved, many insurance companies fail to achieve the anticipated returns with their new core systems.

Over the past years of these types of projects, when we at Wipfli analyze the root causes, we find that the following risks have not been properly managed or mitigated:

1. Expectation risk – Are we all looking for the same things?
2. Acceptance risk – What could prevent us from leveraging this investment?
3. Alignment risk – What could prevent us from achieving the value we expect?
4. Execution risk – Are we getting things done effectively and efficiently?
5. Solution risk – Will this solution deliver on its potential?
6. Resource risk – Have we accounted for the total investment required for success?

What’s most enlightening about these risks is that five of them are about people and not technology. Only solution risk encompasses technology. As the engineer once said, “This project would have been a roaring success except for the people!” Don’t be that guy….

The desired future state following an implementation is only achieved when individual contributors do their jobs differently.

So, yes, systems projects are about the people.

The 5 I’s of Underwriting

The benefits of next-generation underwriting for complex risks are quantifiable and real. So, when, where and how to start?

When? Now. The sooner, the better.

Where? It all starts with understanding the possible. You need to know what is realistically possible with the offerings that are available today. It is equally important to figure out what will be possible in the not too distant future. Once you’ve got a grip on the possibilities, it’s time to set priorities. Describe the capabilities that go on the priority list using business terminology. This makes it much easier to have meaningful conversations between the business and IT interests. It is very important to look at how your plans for underwriting will align and work in concert with your policy administration system. Figure out what the best path for your organization is, and then just make it happen.

How? The most effective path for making progress depends on the characteristics and culture of the company. For some insurers, shifting focus to the possibilities for underwriting gets things moving. Other organizations might need some help or just a kick-start. You can move the ball forward significantly by bringing in advisers who can describe what the options are and then put the value in context for your company. The important thing is to make progress one way or another. Time is of the essence.

Simply put, the goal of underwriting is to maximize efficiency and effectiveness. SMA’s concept of modern underwriting capabilities can best be described by using the 5 I’s: Intuitive, Intelligent, Interconnected, Informative and Insightful. The next-generation insurers are embracing solutions that embody these characteristics, and they are reaping the benefits.

What do these 5 I’s mean for you? Let’s explore:

Intuitive — A user-experience-centric desktop, an intuitive desktop, saves time spent hunting and searching for information, and it eliminates rekeying into several systems. It also reduces the learning curve and ties directly to the main goals of underwriting: efficiency and effectiveness.

Intelligent — For complex risks that require the touch of an underwriter, the modern underwriting workstation can significantly augment the underwriter’s expertise and experience by incorporating and taking advantage of new sources of data and models. This new level of intelligence automation will help underwriters make better decisions and provide controlled discipline.

Interconnected — Modern underwriting capabilities are delivered through a variety of solutions that are tightly integrated with everything underwriting needs and feeds. The required capabilities extend beyond what a single solution can deliver. The requirements include an interconnected, intelligent, modern platform that facilitates easy integration and synchronization with core systems, tools, spreadsheets, models and data, as well as external data sources.

Informative and Insightful — Modern platforms provide underwriting with data and analytics like never before. Emerging technologies, as well as an abundance of new information, are generating new possibilities for underwriting and new ways to accomplish far-reaching transformation for the next generation of underwriting excellence. It is now possible to make smarter, more informed decisions by using new sources of data and models. New levels of sophistication in the information about both risk and customer intelligence are possible.

Looking back on my past life as an insurer, I am in awe of today’s possibilities. The power that data and analytics are giving our industry is boundless. Just thinking about how far underwriting has come in a very short time makes me even more excited for the future! This is why at SMA we consider “Interconnect Intelligence for Underwriting” an imperative. It is critical to becoming a next-gen insurer. The world really is moving as fast as we think it is. Any steps you can take to gain an edge by improving efficiency and effectiveness are must-take steps!

Disjointed Reinsurance Systems: A Recipe for Disaster

Insurers’ numerous intricate reinsurance contracts and special pool arrangements, countless policies and arrays of transactions create a massive risk of having unintended exposure. The inability to ensure that each insured risk has the appropriate reinsurance program associated with it is a recipe for disaster.

Having disjointed systems—a combination of policy administration system (PAS) and spreadsheets, for example—or having systems working in silos are sure ways of having risks fall through the cracks. The question is not if it will happen but when and by how much.

Beyond excessive risk exposure, the risks are many: claims leakage, poor management of aging recoverables and lack of business intelligence capabilities. There’s also the likelihood of not being able to track out-of-compliance reinsurance contracts. For instance, if a reinsurer requires a certain exclusion in the policies it reinsures and the direct writer issues a policy without the exclusion, then the policy is out of compliance, and the reinsurer may deny liability.

The result is unreliable financial information for trends, profitability analysis and exposure, to name a few.

Having fragmented solutions and manual processes is the worst formula when it comes to audit trails. This is particularly troubling in an age of stringent standards in an increasingly internationally regulated industry. Integrating the right solution will help reduce risks to an absolute minimum.

Consider vendors offering dedicated and comprehensive systems as opposed to policy administration system vendors, which may simply offer “reinsurance modules” as part of all-encompassing systems. Failing to pick the right solution will cost the insurer frustration and delays as it attempts to “right” an incomplete system through a series of customizations to add the missing functions, a path that surely leads to cost overruns, a lengthy implementation and an uncertain outcome.

Common system features a carrier should look for are:
  • Cession treaties and facultative management
  • Claims and events management
  • Policy management
  • Technical accounting (billing)
  • Bordereaux/statements
  • Internal retrocession
  • Assumed and retrocession operations
  • Financial accounting
  • AP/AR
  • Regulatory reporting
  • Statistical reports
  • Business intelligence
Study before implementing

Picking the right solution is just the start. Implementing a new solution still has many pitfalls. Therefore, the first priority is to perform a thorough and meticulous preliminary study.

The vendor directs the study, which resembles an audit: a series of meetings and interviews with the different stakeholders (IT, business, etc.). It typically lasts one to three weeks, depending on the complexity of the project. A good approach is to spend half of each day conducting the scheduled meetings and the other half drafting the findings, then submit them for review the following day.

The study should at least contain the following:

  • A detailed report on the company’s current reinsurance management processes.
  • A determination of potential gaps between the carrier’s reinsurance processes and the target solution.
  • A list of contracts and financial data required for going live.
  • Specifications for the interfaces.
  • Definitions of the data conversion and migration strategy.
  • Reporting requirements and strategy.
  • Detailed project planning and identification of potential risks.
  • Repository requirements.
  • Assessment and revision of overall project costs.
Sample preliminary study (gap analysis) outline:

1. Introduction
  • General introduction and description of project objectives and stakeholders
  • What’s in and out of scope
2. Description of current business setting
3. Business requirements
  • Cession requirements
  • Assumed and retrocession requirements
4. Systems environment topics
  • Interfaces/hardware and software requirements
5. Implementation requirements
6. System administration
  • Access, security, backups
7. Risks, pending issues and assumptions
8. Project management plan

The preliminary study report must be submitted to each stakeholder for review and validation as well as endorsement by the head of the steering committee of the insurance company before the start of the project. If necessary, the study should be revised until all parts are adequately defined. Ideally, the report should be used as a road map by the carrier and vendor.

All project risks and issues identified at this stage will be incorporated into the project planning. It saves much time and money to discover them before the implementation phase. One of the main reasons projects fail is poor communication, so key people on different teams need to actively communicate with each other. At least one person from each invested area (IT, business and upper management) must be part of a well-defined steering committee.

A clear-cut escalation process must be in place to tackle any foreseeable issues and address them in a timely manner.

A Successful Implementation Process
Key areas and related guidelines that are essential to successfully carry out a project.

Data cleansing
Before migration, an in-depth data scrubbing or cleansing is recommended. This is the process of amending or removing data derived from the existing applications that is erroneous, incomplete, inadequately formatted or replicated. The discrepancies that are discovered and corrected or deleted may originally have been produced by user-entry errors or by corruption in transmission or storage.

Data cleansing may also include actions such as harmonization of data, which relates to identifying commonalities in data sets and combining them into a single data component, as well as standardization of data, which is a means of changing a reference data set to a new standard—in other words, use of standard codes.
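The cleansing steps above (removing malformed or replicated rows, standardizing codes) can be sketched as a single pass over the raw records. The reference code set and the field names here are hypothetical illustrations of the "use of standard codes" idea, not any particular carrier's data.

```python
import re

# Hypothetical reference set mapping legacy state spellings to standard codes
STATE_CODES = {"new york": "NY", "n.y.": "NY", "california": "CA"}

def cleanse(records):
    """Scrub raw rows: standardize codes against the reference set,
    drop rows that remain inadequately formatted, and remove
    replicated entries (harmonization into a single data component)."""
    seen, clean = set(), []
    for rec in records:
        raw = rec.get("state", "").strip().lower()
        state = STATE_CODES.get(raw, rec.get("state", "").upper())
        if not re.fullmatch(r"[A-Z]{2}", state):
            continue  # erroneous or inadequately formatted row
        key = (rec["policy_id"], state)
        if key in seen:
            continue  # replicated data
        seen.add(key)
        clean.append({**rec, "state": state})
    return clean
```

Running the reference-set lookup before the format check means both "New York" and "N.Y." harmonize to the same standard code, so the duplicate check catches them as one record.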

Data migration

Data migration pertains to the moving of data between the existing system (or systems) and the target application as well as all the measures required for migrating and validating the data throughout the entire cycle. The data needs to be converted so that it’s compatible with the reinsurance system before the migration can take place.

It’s a mapping of all the data with business rules and relevant codes attached to it; this step is required before the automatic migration can take place.

An effective and efficient data migration effort involves anticipating potential issues and threats as well as opportunities, such as determining the most suitable data-migration methodology early in the project and taking appropriate measures to mitigate potential risks. Suitable data migration methodology differs from one carrier to another based on its particular business model.

Analyze and understand the business requirements before gathering and working on the actual data. Thereafter, the carrier must delineate what needs to be migrated and how far back. In the case of long-tail business, such as asbestos coverage, all the historical data must be migrated. This is because it may take several years or decades to identify and assess claims.

Conversely, for short-tail lines, such as property fire or physical auto damage, for which losses are usually known and paid shortly after the loss occurs, only the applicable business data is to be singled out for migration.

A detailed mapping of the existing data and system architecture must be drafted to isolate any issues related to the conversion early on. Most likely, workarounds will be required to overcome the specificities or constraints of the new application. As a result, it will be crucial to establish checks and balances or guidelines to validate the quality and accuracy of the data to be loaded.
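The mapping and scoping rules described above can be sketched as a small migration filter: legacy line-of-business codes are mapped to the target system's codes, long-tail lines keep their full history, and short-tail business outside the cutoff is left behind. The code map, the line names and the cutoff logic are all hypothetical examples, not a real conversion specification.

```python
# Hypothetical code map from the legacy PAS to the target reinsurance system
CODE_MAP = {"FIRE01": "PROP-FIRE", "AUTO02": "AUTO-PD", "ASB99": "CAS-ASBESTOS"}
LONG_TAIL = {"CAS-ASBESTOS"}  # migrate the full history for these lines

def map_for_migration(legacy_rows, cutoff_year):
    """Map legacy rows to target codes and apply the migration scope:
    long-tail business migrates in full; short-tail business migrates
    only back to the cutoff year. Unmapped codes are flagged, not dropped."""
    migrated, rejected = [], []
    for row in legacy_rows:
        target_code = CODE_MAP.get(row["lob_code"])
        if target_code is None:
            rejected.append((row, "unmapped line-of-business code"))
            continue
        if target_code not in LONG_TAIL and row["year"] < cutoff_year:
            continue  # short-tail business outside the migration scope
        migrated.append({**row, "lob_code": target_code})
    return migrated, rejected
```

Keeping a separate `rejected` list is the "checks and balances" step: an unmapped code is an undocumented data snag to take to the subject-matter experts, not something to silently discard.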

Identifying subject-matter experts who are thoroughly acquainted with the source data will lessen the risk of missing undocumented data snags and help ensure the success of the project. Therefore, proper planning for accessibility to qualified resources at both the vendor and insurer is critical. You’ll also need experts in the existing systems, the new application and other tools.

Interfaces

Interfaces in a reinsurance context relate to connecting to the data residing in the upstream system, or PAS, to the reinsurance management system, plus integrating the reinsurance data to other applications, such as the general ledger, the claims system and business intelligence tools.

Integration and interfaces are achieved by exchanging data between two different applications but can include tighter mechanisms such as direct function calls. These are synchronous communications used for information retrieval. The synchronous request is made using a direct function call to the target system.
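The two integration styles just described, data exchange outward to downstream applications and synchronous direct calls for retrieval, can be sketched as a thin interface layer. The `pas_client` and `ledger_client` objects and their methods are assumptions for illustration; a real implementation would wrap whatever APIs the upstream PAS and general ledger actually expose.

```python
class ReinsuranceInterface:
    """Sketch of a reinsurance system's integration layer, assuming
    hypothetical pas_client and ledger_client wrapper objects."""

    def __init__(self, pas_client, ledger_client):
        self.pas = pas_client
        self.ledger = ledger_client

    def get_policy(self, policy_id):
        # Synchronous information retrieval: a direct function call
        # into the upstream PAS; the caller blocks until it returns.
        return self.pas.fetch(policy_id)

    def post_cessions(self, entries):
        # Data exchange: push computed cession entries downstream
        # to the general ledger.
        for entry in entries:
            self.ledger.post(entry)
```

Isolating the calls behind one class is what lets an experienced vendor swap in pre-built connectors for the popular packages without touching the reinsurance logic itself.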

Again, choosing the right partner will be critical. A provider with extensive experience in developing interfaces between primary insurance systems, general ledgers, BI suites and reinsurance solutions most likely has already developed such interfaces for the most popular packages and will have the know-how and best practices to develop new ones if needed. This will ensure that the process will proceed as smoothly as possible.

After the vendor (primarily) and the carrier carry out all the essential implementation work, consolidating the process automation and integrations required to deliver the system, the goal is a fully deployable and testable solution, ready for user acceptance testing in the reinsurance system test environment.

Formal user training must take place beforehand. It needs to include a role-based program and ought not to be a “one-size-fits-all” training course. Each user group needs to have a specific training program that relates to its particular job functions.

The next step is to prepare for a deployment in production. You’ll need to perform a number of parallel runs of the existing reinsurance solutions and the new reinsurance system and be able to replicate each one and reach the same desired outcome before going live.
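A parallel run boils down to feeding the same book of business through both systems and reporting every divergence. Here is a minimal sketch; the policy fields, the calculation callables and the tolerance are assumptions for illustration.

```python
def parallel_run(policies, legacy_calc, new_calc, tolerance=0.01):
    """Run the legacy and new reinsurance calculations side by side
    and report every policy whose results diverge beyond tolerance.
    Go-live should wait until this list is empty (or explained)."""
    mismatches = []
    for policy in policies:
        old_result = legacy_calc(policy)
        new_result = new_calc(policy)
        if abs(old_result - new_result) > tolerance:
            mismatches.append((policy["id"], old_result, new_result))
    return mismatches
```

Each mismatch is either a defect in the new configuration or a long-standing error in the legacy system; both are worth finding before production.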

Now that you’ve installed a modern, comprehensive reinsurance management system, you’ll have straight-through automated processing with all the checks and balances in place. You will be able to reap the benefits of a well-thought-out strategy paired with an appropriate reinsurance system that will lead to superior controls, reduced risk and better financials. You’ll no longer have any dangerous hidden “cracks” in your reinsurance program.
This article first appeared in Carrier Management magazine.