Insurers Should Deploy Predictive Analytics Across The Enterprise

Ovum Publishes Report On Creating An Insurance Predictive Analytics Portfolio
If the past four years are a reliable guide, the insurance industry will face complexity and even chaos in the coming months and years. Insurers need to be concerned about a future that could hold “black swan” events (those that are unpredictable, infrequent, and severe in effect), the quickening pace of customer-driven commerce, the continuing spread of the digital economy, and the annual occurrence of severe weather events historically forecast to happen only once a century.

Insurers already know that the future holds tightening regulation, demanding customers who expect a better-quality experience, aging populations, and economies still weakened by the financial crisis. Ovum's recently published report Creating an Insurance Predictive Analytics Portfolio discusses the importance of insurers using predictive analytics to prepare for future market challenges and opportunities. We also discuss the types of quantitative professionals that insurers need, data sources and data management issues, and the areas in which insurers should apply predictive analytics.

Many Opportunities Exist For Insurers To Improve Their Competitive Position With Predictive Analytics
Where to use predictive analytics is limited only by the creative imagination of the staff responsible for its application. Ovum suggests that insurers consider creating predictive analytics initiatives in the following areas, each paired with an example objective. More objectives and units of analysis for each initiative are detailed in the report.

  • The insurance company itself as the focus of the initiative: To determine which markets to enter or leave.
  • Marketing: To create a portfolio of customized marketing offers.
  • Product development: To create the best product for each channel.
  • Channel management: To determine which insurance agencies to appoint.
  • Customer acquisition/retention: To estimate each customer's lifetime value to shape target market initiatives.
  • Customer services: To estimate retirement income for each life insurance customer.
  • Litigation management: To estimate litigation costs for each claim as it is reported.
  • Claim management: To determine the best way to reduce loss expenses/combined ratio for each line of business and each selling agent or claims adjuster.
  • Risk management: To estimate potential losses for the book of business as each new customer is added.
  • Cost control: To determine how the cost levers might change for different company governance structures.
  • Underwriting management: To estimate how many underwriters of what level of experience by line of business to have on staff.

The Insurance Industry Exists In A World Of Increasingly Rich Data
The insurance industry exists in a world of increasingly rich data. More and more data is available from existing sources (e.g. third-party providers offering information about weather events and forecasts, attributes of geographic locations, and consumer credit behavior), newer sources (e.g. social media), and those that are largely still conceptual (e.g. from machine-to-machine communications, also known as the “Internet of Things” — specifically from vehicle telematics).

In particular, insurers can access (although not necessarily free of charge) a never-ending torrent of (mostly) semi-structured and structured data from sources such as:

  • insurance business systems
  • social media
  • embedded sensors (e.g. vehicle telematics)
  • insurance company portals
  • mobile apps
  • location intelligence
  • complementary insurance information (e.g. FICO scores, building repair cost, and business formation data).

The Data Scientist Role Is Emerging As Equally Important As The Data Miner Role In The Insurance Industry
The data miner is no longer the only role using predictive analytics in the insurance industry; a new role, the data scientist, is emerging. A growing number of insurance companies are creating new departments for these quantitatively skilled professionals.

A data scientist and a data miner could be the same person. But the two roles should have different perspectives regarding the scope of predictive analytics initiatives and the time horizon of predictive analytical models. Moreover, data scientists may need different skills to fulfill their responsibilities. An insurer should expect a data scientist to approach a predictive analytics initiative by first collecting data — although not necessarily all the data required to complete the initiative — and then investigating the data on an iterative basis until a coherent hypothesis emerges.

Furthermore, Ovum believes that data scientists should be responsible for models that support short-, medium-, and long-term corporate objectives. Data miners, however, should be primarily involved with predictive analytics initiatives that support short- and medium-term corporate objectives.

Waiver Of Premium: The Unmanaged Liability

This is Part 1 of a two-part series on waiver of premium. Part 2 can be found here.

Insurance actuaries consider waiver of premium (WOP) a neglected liability — a supplemental benefit rider that has yet to be fully evaluated for risk exposure or cost containment, unknowingly costing individual and group life insurance carriers billions in liability every year.

The problem is that many companies don't have accurate claim management systems capable of reporting what's really happening with the life waiver reserves sitting on their books. But with a 44 percent increase in disability claims by people formerly in the workplace [1], it's time this largely ignored liability is held up to the light.

Why Companies Need To Pay Attention
Most life insurers aren't fully aware of how much of a liability they're carrying when it comes to their waiver of premium reserves. Moreover, they're even less likely to know critical information such as the number of open life waiver claims, the percentage of approvals and denials, or claims still holding reserves that perhaps maxed out years ago.

Tom Penn-David, Principal of the actuarial consulting firm Ant Re, LLC, explains: “There are generally two components to life waiver reserves. The first is active life reserves (for individual insurers only) and the second is disabled life reserves, which is by far the larger of the two. A company that has as few as 1,000 open waiver claims with a face value of $100,000 per policy may be reserving $25+ million on their balance sheet, depending on the age and terms of the benefits. This is a significant figure when coupled with the fact that many life insurers do not appear to be enforcing their contract provisions and have a higher than necessary claim load. Reserve reductions are both likely and substantial if the proper management systems are in place.”

Unfortunately, companies can't fix what they don't know is broken. They need to examine their numbers in order to recognize the level of reserve liability they're carrying, and to see for themselves the significant financial and operational consequences of not paying attention. Furthermore, a company's senior financial management team may be underestimating the actual size of its block of waiver claims, thus downplaying the potential for savings in this area. Typically, the block of existing claims is much larger than the new claims added in any given year, and often represents the largest portion of overall liability.

“Life companies are primarily focused on life insurance reserves and not carefully looking at waiver of premium,” says Oscar Scofield of Factor Re Services U.S., former CEO of Scottish Re. “There could be a significant reserve redundancy or deficiency in disabled life reserves, and companies need to pay attention to recognize the impact this has on their bottom line.”

To illustrate this point, let's take a quick look at the financial possibilities for a company with even a small block of life waiver claims:

Example – Individual Life Carrier

                                                          Current Reserve    With Proactive
                                                          Snapshot           Management
Number of Open WOP Claims                                 1,000              1,000
(*) Average Disability Life Reserves (DLR)                $19,989,255        $19,989,255
(*) Average Mortality Reserves                            $3,046,722         $3,046,722
Average Premiums Paid by Carrier on Approved WOP Claims   $754,427           $754,427
Average Total Reserve Liabilities                         $23,790,404        $23,790,404
Claim Approval Percentage                                 90%                60%
Reserves Based on Approval Percentage                     $21,411,364        $14,274,242
Potential Reserve Savings                                                    $7,137,121

* The above reserve data is based on Statutory Annual Statements.

As you can see, even under the most conservative scenarios, the reserve savings are substantial when a proactive waiver of premium claim management process is put into action.
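
For concreteness, here is the arithmetic behind the table as a minimal sketch in Python. The reserve figures come straight from the example above; the only input that changes between the two scenarios is the claim approval percentage.

    # Reserve-savings arithmetic from the example table above.
    # Figures are illustrative; a real analysis would draw on the
    # carrier's own Statutory Annual Statement data.
    dlr = 19_989_255           # average disabled life reserves (DLR)
    mortality = 3_046_722      # average mortality reserves
    premiums_waived = 754_427  # average premiums paid on approved WOP claims

    total_liability = dlr + mortality + premiums_waived  # $23,790,404

    def reserves_at(approval_rate):
        """Reserves held if this share of claims remains approved."""
        return total_liability * approval_rate

    current = reserves_at(0.90)  # ~$21.4M at a 90% approval percentage
    managed = reserves_at(0.60)  # ~$14.3M under proactive management
    print(f"Potential reserve savings: ${current - managed:,.0f}")  # $7,137,121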

Industry Challenges
The National Association of Insurance Commissioners (NAIC) requires life companies to report financials that include both the number of policyholders who carry life waiver but aren't disabled and the reserves for those who are currently disabled and utilizing their life waiver benefits. But many items, like the number of new claims or the amount of benefit cost, are not reported. Moreover, companies rarely move beyond these life waiver reporting touch-points to effectively monitor their life waiver claim management processes or to identify the impact of contract definitions on their claim costs.

The volume of new and ongoing claim information, the manual processing involved, and the fact that life waiver claims demand months if not years of consistent, close monitoring make the work humanly challenging, if not impossible. For example, it's not out of the ordinary to have only a few people assigned to process literally thousands of life waiver claims.

It's unfortunate, but this type of manual claim reporting remains unchanged as claim personnel (working primarily off three main documents: the attending physician's statement, the employee statement, and the claim form) quickly push claims through the system. Once these documents are reviewed, and unless there are questionable red flags, the claim continues to be viewed as eligible, is paid, and is then set up for review another 12 months down the road. As long as the requests continue to come in and the attending physician still classifies the claimant as disabled and incapable of working, little is done to proactively manage and advance the claim investigation.

An equally challenging part of the life waiver claim process is working off the attending physician statement, both when claims are initially processed and when they are recertified. Typically very generic in nature, the statement often indicates only whether the claimant is, or continues to be, unable to work. This problematic approach essentially lets the physician, rather than the insurance company, drive the course of the claim decision. The insurer, now having to rely on the physician's report to fully understand and evaluate the scope of the claimant's medical condition, has little information with which to manage the risk.

For example, did the evaluation accurately assess the claimant's ability to work infrequently or not at all? Are they able to sit, stand, walk, lift, or drive? If so, then what are the specific measurable limitations? Is there potential to transition them back into their previous occupation or into an occupation that requires sedentary or light duty — either now or in the near future? In order for companies to move beyond the face value of what has initially been reported, and to monitor where the claimant is in the process, they need to build better business models.

Closing The Technology Gap
The insurance industry as a whole has always been a slow responder when it comes to technology. But for companies to optimize profitability, closing the gaps in life waiver claim management and operational inefficiencies will require a combination of technology and human intervention. Investing in the right blend of people, processes, and technology with real-time capabilities can substantially reduce block loads and improve overall risk results.

Constructing a well-defined business model to apply standardized best practices that can support and monitor life waiver claims is critical. The adjudication process must move beyond obvious “low hanging fruit” to consistently evaluate the life of the claim holistically. It means not only examining open claim blocks, but also those that are closed, to better identify learning and coaching opportunities to improve future claim outcomes.

Additionally, applying predictive modeling techniques to segment the block can provide great insight into specific areas within it. Segmentation can evaluate how claims were originally assessed, the estimated duration, and why a claim has been extended. For example, did something occur on the claim, such as a change in diagnosis, that warranted the extension of benefits?

Predictive modeling also looks at how certain diagnoses are trending within the life waiver block, so if anything stands out regarding potential occupational training opportunities, benefit specialists can effectively introduce the appropriate vocational resources at the right time for the insured.
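
As a minimal sketch of what such diagnosis trending might look like in practice, assume a claims extract with hypothetical columns claim_id, diagnosis, and months_open (the file name and fields are illustrative, not a reference to any particular claim system):

    # Which diagnosis groups in the waiver block stay open well past the
    # block-wide average? Those are candidates for vocational review.
    import pandas as pd

    claims = pd.read_csv("waiver_claims.csv")  # claim_id, diagnosis, months_open

    by_dx = (claims.groupby("diagnosis")
                   .agg(claim_count=("claim_id", "count"),
                        avg_months_open=("months_open", "mean")))

    threshold = claims["months_open"].mean() * 1.5  # 50% past block average
    print(by_dx[by_dx["avg_months_open"] > threshold])

Even a simple cut like this tells a benefit specialist where vocational resources are most likely to pay off, and when to introduce them.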

Capabilities to improve outcomes in waiver of premium operations through technology and automation should include these three primary assessments:

  • Financial: Companies need to start looking at waiver of premium differently. They need to continually evaluate the declining profit margins on in-force reserves in order to identify the impact on profits. Even on a waiver of premium reserve block of somewhere between $10 million and $200 million, potential savings are likely to be 10 to 20 percent. Better risk management tools can substantially control internal costs and improve reserve balances.
  • Operational: Current business models have to move beyond manual processing to steer the claim down the right path from start through liability determination. Standardized automation brings together fragmented, disparate information systematically across multiple platforms, essentially unifying communications between the attending physician and the insurance company. This well-managed infrastructure gathers, updates, and integrates relevant data throughout the life of the claim.
  • Availability: A critical way to improve the life waiver claim process is through accurate reporting. By breaking down the silos between the attending physician, the case manager, and the insurance company, claim-related information can immediately be uploaded and reported in real time. Proactively enhancing the risk management process so that companies consistently receive updated claimant health evaluation and physical limitation reports is critical for determining the best return-to-work employment opportunities.

Three Technology Touch-Points in Waiver of Premium Operations

Front end: Assessment of the initial claim and determination of the best possible estimate of claim duration.

Mid-point: An open claim should be reassessed to determine continued eligibility and to evaluate the direction of the claim if it lasts longer than projected, and why.

End-point: The evaluation process continues to ensure claims are being re-evaluated at regular intervals, examining the possibility of getting the claimant back to work.

Why Waiver Of Premium Matters
What's typically happening is that most companies' life waiver blocks are managed on the same platform and in the same manner as their life claims, so ultimately the life waiver block is improperly managed. Life companies need to recognize that a waiver of premium block is not a life block but a disability block, and it needs to be managed differently. For example, older actuarial tables do not reflect the fact that people with disabilities are living longer, potentially leaving companies with understated reserve liabilities.

Ultimately, having a good handle on the life waiver block will prove beneficial for both the carrier and the insured.

Part 2 of this series will discuss specifically how the introduction of process and technology into this manual and asynchronous area can deliver substantial benefits to life carriers.

[1] Social Security Administration, April 2013.

The Devil Is In The Details

Movies about space missions that result in catastrophe can teach us a lot about how not to manage a project (the “successful failure” of Apollo 13 comes to mind). Yet there are actual space mission catastrophes — the loss of the 1999 Mars Climate Orbiter (MCO), for example — that also offer valuable lessons in preventing fundamental mistakes.

The MCO was the major part of a $328 million NASA project intended to study the Martian atmosphere as well as act as a communications relay station for the Mars Polar Lander. Famously, after a nine-month journey to Mars, the MCO was lost on its attempt to enter planetary orbit. The spacecraft approached Mars on an incorrect trajectory and was believed to have been either destroyed or to have skipped off the atmosphere into space. The big question naturally was: What caused the loss of the spacecraft?

After months of investigation, the primary cause came down to the difference between the units of output from one software program and the units of input required by another. How, the media asked, could one part of the project produce output data in English measurements when the spacecraft navigation software was expecting to consume data in metric?

Those of us involved in expensive and high-risk projects would ask a similar question: How could this happen? What follows are a few findings from the Executive Summary of the report by the Mars Climate Orbiter Mishap Investigation Board (MCO MIB), with lessons for us all.

  • The root cause of the loss of the MCO spacecraft was the failure to use metric units in the coding of a ground software file used in trajectory models. Specifically, thruster performance data in English units were used instead of metric units in the software application code.
  • An erroneous trajectory was subsequently computed using this incorrect data. This resulted in small errors being introduced in the trajectory estimate over the course of the nine-month journey.

That erroneous trajectory was the difference between a successful mission and failure. Lockheed Martin Astronautics, the prime contractor for the Mars craft, claimed some responsibility, stating that it was up to its company’s engineers to assure that the metric systems used in one computer program were compatible with the English system used in another program. The simple conversion check was not done. “It was, frankly, just overlooked,” said their spokesman.

Just overlooked? Those of us in project management know that large-scale projects require the ability to see not only the big picture — the goals and objectives of the project — but also the details.
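
To make that overlooked detail concrete, here is a minimal sketch of the kind of typed unit boundary that would have caught the mismatch. The 1 lbf = 4.448222 N conversion factor is standard; the class and names are illustrative, not taken from the actual MCO ground software.

    # Keep one canonical unit internally; convert explicitly at the boundary.
    from dataclasses import dataclass

    LBF_TO_NEWTON = 4.448222  # standard pound-force to newton factor

    @dataclass(frozen=True)
    class Impulse:
        newton_seconds: float  # stored in metric only

        @classmethod
        def from_lbf_seconds(cls, value):
            """Accept English units, but convert on the way in."""
            return cls(value * LBF_TO_NEWTON)

    # Ground software emits thruster data in English units; the navigation
    # model only ever sees an Impulse, never a raw float.
    reading = Impulse.from_lbf_seconds(1.0)
    print(reading.newton_seconds)  # 4.448222

The design point is less the conversion than the type: with a type checker in the build, any code path that hands the trajectory model a bare number gets flagged instead of silently accepted.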

While not as prominent as space exploration, insurance software development also has millions of dollars at stake. Insurance products can be very complex, and the interactions required in business systems along with the calculations involved are all critical to producing accurate results.

Errors in the way in which calculations are derived can produce problems ranging from failure to comply with the company’s obligations under its filings to loss of revenue. Even apparently simple matters such as whether to round up or down on a calculation can have profound impacts on a company’s bottom line.
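
As a minimal illustration of that rounding point, with made-up premiums and Python's decimal module, the same book totals differently under two common rounding rules:

    # How a per-policy rounding rule compounds across a book of business.
    from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

    # 300,000 hypothetical policies whose premiums land on a half cent.
    premiums = [Decimal("100.495"), Decimal("250.125"), Decimal("75.355")] * 100_000

    def book_total(rounding):
        cent = Decimal("0.01")
        return sum(p.quantize(cent, rounding=rounding) for p in premiums)

    gap = book_total(ROUND_HALF_UP) - book_total(ROUND_DOWN)
    print(f"Book-level difference: ${gap:,}")  # $3,000.00 on this book

A fraction of a cent per policy, but a real number at the book level, and a compliance question if the filed rule says to round the other way.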

Although the failure to address the difference between English and metric measurements was identified as the root cause of the problem with the MCO, the real issue at hand is what caused that failure. How was it missed?

Taking a project management perspective requires asking the question, “Why?” Why was a key element overlooked? What led an experienced team to miss a crucial detail?

In the search for answers, it's interesting to look deeper inside the MCO MIB report. In addition to the root cause, the failure to use standard units of measurement across the entire project, the report found a series of other issues that also contributed to the catastrophe. The following are other lessons of the MCO mission and how they can be applied more widely to project management.

  • Lack of shared knowledge. The operations navigation team was not familiar enough with the attitude control systems on the spacecraft and did not fully understand the significance of errors in orbit determination. This made it more difficult for the team to diagnose the actual problem they were facing.

    It is likewise common for insurance software projects to have mutually dependent complex areas — for example, between the policy administration system and the billing system. If one team does not fully understand the needs of the other, there can be costly gaps in understanding.

    The MCO MIB recommended comprehensive training on the attitude systems, face-to-face meetings between the development and operations teams, and attitude control experts being brought onto the operations navigation team. Similarly, face-to-face meetings between the policy experts and the billing experts, between the business side and the technology side, will go a long way toward a successful project. In the world of e-mail and instant messaging, I think all of us spend less face-to-face time. Nonverbal communication is 60% of our communication and is often very helpful; there's zero face time when we rely on electronic communication.

  • Lack of contingency planning. The team did not take advantage of an existing Trajectory Correction Maneuver (TCM) that might have saved the spacecraft, since they were not prepared for it. The MCO MIB recommended that there be proper contingency planning for the use of the TCM, along with training on execution and specific criteria for making the decision to employ the TCM.

    The need for contingencies in insurance software development is important too. Strong project management will consider project risks and therefore contingencies. And contingency plans are important at every stage — development, implementation, and once the system is live. Issues must be dealt with rapidly and effectively since they have an impact on the entire business. Regular reviews of the contingency plans are also useful.

  • Inadequate handoffs between teams. Poor transition of the systems’ engineering process from development to operations meant that the navigation team was not fully aware of some important spacecraft design characteristics.

    In complex insurance software projects, there are frequent handoffs to other teams, and the transition of knowledge is a critical piece of this process. These large, complex projects should have a whole team dedicated to ensuring knowledge transfer occurs. No matter how good the specifications, once again, it’s vital to get face to face.

  • Poor communication among project teams. The report stated there was poor communication across the entire project. This lack of communication between project elements included the isolation of the operations navigation team (including lack of peer review), insufficient knowledge transfer, and failures to adequately resolve problems using cross-team discussion. As the report further notes:

    “When conflicts in the data were uncovered, the team relied on e-mail to solve problems instead of formal problem resolution processes. Failing to adequately employ the problem-tracking system contributed to this problem slipping through the cracks.”

    This area had one of the largest sets of recommendations from the MCO MIB, including formal and informal face-to-face meetings, internal communication forums, independent peer review, elevation of issues, and a mission systems engineer (aka a really strong program or project manager) to bridge all key areas. Needless to say, this kind of communication is a critical part of any insurance software project, and these lessons are easily applied. Zealously hold project reviews (walk-throughs). Do them early and often. The time spent will pay you back with success.

  • The Operations Navigation Team was inadequately staffed. The project team was running three missions simultaneously — all of them part of the overall Mars project — and this diluted their attention to any specific part of the project. The result was an inability of the team to effectively monitor everything that required their attention.

    Sound familiar? We just experienced this on a software implementation project where the software vendor outsold its capacity to be successful. Projects are expected to run lean because of cost considerations, but it’s always important to ensure that staff is not stretched to the point of compromising the project.

  • There was a failure in the verification and validation process, including the application of the software standards that were supposed to apply. As the MCO MIB noted:

    “The Software Interface Specification (SIS) was developed but not properly used in the small forces ground software development and testing. End-to-end testing to validate the small forces ground software performance and its applicability to the specification did not appear to be accomplished.”

Every project manager will recognize the need to stick to protocol and agreed-upon processes during a software project. Ensuring that project team members know the project/system specifications and standards is essential to successful project delivery.

And so, the devil is in the details. My career in and around insurance technology has spanned three decades now. While I have learned much, two things are abundantly clear:

  • There is no substitute for really good project management.
  • There is no substitute for great business analysts.
  • There is no substitute for great communication.

Okay, make that three things! It’s bonus day.

The full MCO report is available here.

Insurance And Manufacturing: Lessons In Software, Systems, And Supply Chains

Recently, my boss Steve and I were talking about his early career days with one of those Big 8, then Big 6, then Big 5, then Big 4 intergalactic consulting firms. Steve came out of college with an engineering degree, so it was natural to start in the manufacturing industry. Learning about bills of material, routings, design engineering, CAD/CAM … “Ah yes,” he recalled, “Those were heady days.” And all those vendor-packaged manufacturing ERP systems that were starting to take the market by storm.

Eventually Steve found his way into the insurance industry, and thus began our discussion. One of the first things that struck Steve was the lack of standard software packages in the insurance industry. I don’t mean the lack of software vendors — there are plenty of those. Seemingly, though, each software solution was a one-off. Or custom. Or some hybrid combination. “Why?” we wondered.

The reasons, as we now know, were primarily reflected in an overall industry mindset:

  • A “but we are unique!” attitude was pervasive. Companies were convinced that if they all used the same software, there would be little to differentiate themselves from one another.
  • There was also an accepted industrywide, one-off approach. Conversations went something like this: “XYZ is our vendor. We really don’t like them. Taking new versions just about kills us. We don’t know why we even pay for maintenance, but we do.”

But the chief reason for a lack of standard software was the inability to separate product from process. What does this mean?

Well, you can certainly envision that your auto product in Minnesota is handled differently than your homeowners’ product in California. I’m not referring to just the obvious elements (limits, deductibles, rating attributes), but also the steps required for underwriting, renewal, and cancellation. Separation of product from process must go beyond the obvious rate/rule/form variations to also encompass internal business and external compliance process variations.

But there’s still plenty of processing — the heavy lifting of transaction processing — that’s the same and does not vary. For example, out-of-sequence endorsement processing is not something that makes a company unique and therefore would not require a custom solution.

Where the rubber meets the road, and where vendor packages have really improved their architecture over the last several years, is in providing the capability in their policy admin systems for companies to “drop” very specific product information, along with associated variations, into a very generic transaction system.

Once product “components” (digitized) are separated from the insurance processing engine, and once companies have a formal way to define them (standard language), they can truly start making their products “unique” with reuse and mass customization. Much like those manufacturing bills of material and routings looked to Steve way back when.

This separation of policy from product has been a key breakthrough in insurance software. So what is an insurance product, at least in respect to systems automation?

From Muddled To Modeled
The typical scenario to avoid goes something like this:

  • The business people pore over their filings and manuals and say, “This is the product we sell and issue.”
  • The IT people pore over program code and say, “That’s the product we have automated.”
  • The business people write a lot of text in their word processing documents. They find a business analyst to translate it into something more structured, but still text.
  • The business analyst finds a designer to make the leap from business text to IT data structures and object diagrams.
  • The designer then finds a programmer to turn that into code.

One version of the truth? More like two ships passing, and it’s more common than you may think. How can organizations expect success when the product development process is not aligned? Without alignment, how can organizations expect market and compliance responsiveness?

What’s the alternative? It revolves around an insurance “product model.” Much like general, industry-standard data models and object models, a product model uses a precise set of symbols and language to define insurance product rates, rules, and forms — the static or structural parts of an insurance product. In addition, the product model must also define the actions that are allowed to be taken with the policy during the life of the contract — the dynamic or behavioral aspect of the product model. So for example, on a commercial auto product in California, the model will direct the user to attach a particular form (structure) for new business issuance only (actions).
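
A minimal sketch of those two halves in code may help; the action names, form number, and field names here are hypothetical, not drawn from any filed product or vendor model.

    # Structural half: which forms a product carries.
    # Behavioral half: which policy actions are allowed to attach them.
    from dataclasses import dataclass, field
    from enum import Enum

    class Action(Enum):
        NEW_BUSINESS = "new business"
        ENDORSEMENT = "endorsement"
        RENEWAL = "renewal"
        CANCELLATION = "cancellation"

    @dataclass
    class Form:
        form_id: str
        attach_on: set  # the actions under which this form attaches

    @dataclass
    class Product:
        line: str
        state: str
        forms: list = field(default_factory=list)

        def forms_for(self, action):
            return [f for f in self.forms if action in f.attach_on]

    # The commercial auto example from the text: a form attached for
    # new business issuance only, in California.
    ca_auto = Product("commercial auto", "CA",
                      forms=[Form("CA-EX-01", {Action.NEW_BUSINESS})])
    print([f.form_id for f in ca_auto.forms_for(Action.NEW_BUSINESS)])

Rates and rules would hang off the same structure; the point is that both structure and behavior live in the model rather than in program code.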

Anyone familiar with object and data modeling knows there are well-defined standards for these all-purpose models. For insurance product modeling, at least currently, such standards are more proprietary, such as IBM’s and Camilion’s models, and of course there are others. It is interesting to note that ACORD now has under its auspices the Product Schema as the result of IBM’s donation of aspects of IAA. Might this lead to more industry standardization?

With product modeling as an enabler, there’s yet another key element to address. Yes, that would be the product modelers — the people responsible for making it work. Product modeling gives us the lexicon or taxonomy to do product development work, but who should perform that work? IT designers with sound business knowledge? Business people with analytical skills? Yes and yes. We must finally drop the history of disconnects where one side of the house fails to understand the other.

With a foundation of product modeling and product modelers in place, we can move to a more agile or lean product life cycle management approach — cross-functional teams versus narrow, specialized skills; ongoing team continuity versus ad hoc departmental members; frequent, incremental product improvements versus slow, infrequent, big product replacements.

It all sounds good, but what about the product source supplier — the bureaus?

Supply Chain: The Kinks In Your Links
Here is where the comparison between insurance and manufacturing takes a sharp turn. In their pursuit of quality and just-in-time delivery, manufacturers can make demands on their supply chain vendors. Insurance companies, on the other hand, are at the mercy of the bureaus. ISO, NCCI, and AAIS all develop rates, rules, and forms, of course. They then deliver these updates to their member subscribers via paper manuals or electronically via text.

From there the fun really begins. Insurance companies must log the info, determine which of their products and territories are impacted, compare the updates to what they already have implemented and filed, conduct marketing and business reviews, and hopefully and eventually, implement at least some of those updates.

Recent studies by Novarica and SMA indicate there are approximately 3,000 to 4,000 changes per year in commercial lines alone. The labor cost to implement just one ISO circular with a form change and a rate change is estimated to be $135,000, with the majority of costs in the analysis and system update steps.

There has got to be a better way …

ISO at least has taken a step in the right direction with the availability of its Electronic Rating Content. In either Excel or XML format, ISO interprets its own content to specify such constructs as premium calculations (e.g., defined order of calculation, rounding rules), form attachment logic (for conditional forms), and stat code assignment logic (to support the full plan).
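
To see why a defined order of calculation and explicit rounding rules matter, here is a minimal sketch of a rating algorithm expressed as data. The step names, factors, and whole-dollar rounding rule are hypothetical, not taken from any ISO circular.

    # A rating algorithm as data: ordered steps, each with explicit rounding.
    from decimal import Decimal, ROUND_HALF_UP

    def rnd(value):
        """Round to whole dollars, half up, as a rating rule might dictate."""
        return value.quantize(Decimal("1"), rounding=ROUND_HALF_UP)

    steps = [  # the order matters as much as the factors themselves
        ("increased limits", Decimal("1.35")),
        ("schedule credit",  Decimal("0.90")),
        ("experience mod",   Decimal("1.05")),
    ]

    premium = Decimal("1000")  # hypothetical base premium
    for name, factor in steps:
        premium = rnd(premium * factor)  # round after every step, per the rule
    print(premium)  # 1276; a different order or rounding rule can change this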

A step in the right direction, no doubt. But what if ISO used a standard mechanism and format to do this? ACORD now has under its control the ACORD Product Schema. This is part of IBM’s fairly recent IAA donation. It provides us a standard way to represent the insurance product and a standard way to integrate with policy admin systems. What if ISO and the other key providers in the product supply chain started it all off this way?

Dream on, you say? While you may not have the clout to demand that the bureaus change today, you do pay membership fees, and collectively the members have a voice in encouraging ongoing improvements in the insurance “supply chain.”

In the meantime, the goal to be lean and agile with product life cycle management continues. We must respond quickly and cost-effectively to market opportunities, policyholder feedback, and regulatory requirements. That all starts at the product source … but it doesn’t end there. So while the supply chain improves its quality and delivery, insurance companies will need to gain efficiencies throughout every corner of their organizations in order to achieve those lean goals.

In writing this article, David collaborated with his boss Steve Kronsnoble. Steve is a senior manager at Wipfli and an expert in the development, integration, and management of information technology. He has more than 25 years of systems implementation experience with both custom-developed and packaged software using a variety of underlying technologies. Prior to Wipfli, Steve worked for a major insurance company and leverages that experience to better serve his clients.

Predictive Analytics And Underwriting In Workers' Compensation

Insurance executives are grappling with increasing competition, declining return on equity, average combined ratios sitting at 115 percent, and rising claims costs. According to a recent report from Moody’s, achieving profitability in workers’ compensation insurance will continue to be a challenge due to low interest rates and the decline in manufacturing and construction employment, sectors that make up 40 percent of workers’ comp premium.

Insurers are also facing significant changes to how they run underwriting. The industry is affected more than most by the aging baby boomer population. In the last 10 years, the number of insurance workers 55 or older has increased by 74 percent, compared to a 45 percent increase for the overall workforce. With 20 percent of the underwriter workforce nearing retirement, McKinsey noted in a May 2010 report that the industry will need 25,000 new underwriters by 2014. Where will the new underwriters come from? And more importantly, what will be the impact on underwriting accuracy?

Furthermore, there’s no question that technology has fundamentally changed the pace of business. Consider the example of FirstComp reported by The Motley Fool in May 2011. FirstComp created an online interface for agents to request workers’ compensation quotes. What they found was remarkable. When they provided a quote within one minute of the agent’s request, they booked that policy 52% of the time. However, their success percentage declined with each passing hour that they waited. In fact, if FirstComp waited a full 24 hours to respond, their close rate plummeted to 30 percent. In October 2012, Zurich North America was nominated for the Novarica Research Council Impact Award for reducing the time it takes to quote policies. In one example, Zurich cut the time it took to quote a 110-vehicle fleet from 8 hours to 15 minutes.

In order to improve their companies’ performance and meet response time expectations from agents, underwriters need advanced tools and methodologies that provide access to information in real time. More data is available to underwriters than ever, but they need a way to synthesize that “big data” to make accurate decisions more quickly. Combine the impending workforce turnover with the need to produce quotes within minutes, and it is clear why workers’ comp carriers are increasingly turning toward advanced data and predictive analytics.

Added to these new industry dynamics is the reality that both workers’ compensation and homeowners are highly unprofitable lines for carriers. According to the Insurance Information Institute’s 2012 Workers’ Compensation Critical Issues and Outlook report, profitable underwriting was the norm prior to the 1980s. Workers’ comp has not consistently made an underwriting profit for the last few decades for several reasons, including increasing medical costs, high unemployment, and soft market pressures.

What Is Predictive Analytics?
Predictive analytics uses statistical and analytical techniques to develop predictive models that enable accurate predictions about future outcomes. Predictive models can take various forms, with most models generating a score that indicates the likelihood a given future scenario will occur. For instance, a predictive model can identify the probability that a policy will have a claim. Predictive analytics enables powerful, and sometimes counterintuitive, relationships among data variables to emerge that otherwise may not be readily apparent, thus improving a carrier’s ability to predict the future outcome of a policy.
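
As a minimal sketch of that claim-probability example, assuming scikit-learn and synthetic stand-in data (a real model would be trained on the carrier's historical book, with features such as payroll, class-code hazard grade, and prior claim count):

    # Score the probability that a policy will have a claim.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for three standardized policy features.
    X = rng.normal(size=(5000, 3))
    y = (X @ np.array([0.4, 1.2, 0.8]) + rng.normal(size=5000)) > 1.0

    model = LogisticRegression().fit(X, y)

    new_policy = np.array([[0.5, 2.0, 1.0]])       # one incoming submission
    score = model.predict_proba(new_policy)[0, 1]  # P(claim) for that policy
    print(f"Probability of a claim: {score:.1%}")

The counterintuitive relationships the definition mentions surface in the fitted coefficients rather than in any single underwriting rule, which is one reason large, clean datasets matter so much.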

Predictive modeling has also led to the advent of robust workers’ compensation “industry risk models” — models built on contributory databases of carrier data that perform very well across multiple carrier book profiles.

There are several best practices that enable carriers to benefit from predictive analytics. Large datasets are required to build accurate predictive models and to avoid selection bias, so most carriers need to leverage third-party data and analytical resources. Predictive models allow carriers to make data-driven decisions consistently across their underwriting staff, and to use evidence-based decision making rather than relying solely on heuristics or human judgment to assess risk.

Finally, incorporating predictive analytics requires an evolution in terms of people, process, and technology, and thus executive level support is important to facilitate adoption internally. Carriers who fully adopt predictive analytics are more competitive in gaining profitable market share and avoiding adverse selection.

Is Your Organization Ready For Predictive Analytics?
As with any new initiative, how predictive analytics is implemented will determine its success. Evidence-based decision-making provides consistency and improved accuracy in selecting and pricing risk in workers’ compensation. Recently, Dowling & Partners Securities, LLC, released a special report on predictive analytics, observing that the “use of predictive modeling is still in many cases a competitive advantage for insurers that use it, but it is beginning to be a disadvantage for those that don’t.” The question for many insurance executives remains: Is this right for my organization, and what do we need to do to use analytics successfully?

There are a few important criteria and best practices to consider when implementing predictive analytics to help drive underwriting profitability.

  • Define your organization’s distinct capability as it relates to implementing predictive analytics within underwriting.
  • Secure senior management commitment and passion for becoming an analytic competitor, and keep that level of commitment for the long term. It will be a trial-and-error process, especially in the beginning.
  • Dream big. Organizations that find the greatest success with analytics have big, important goals tied to core metrics for the performance of their business.