Graph Theory, Network Analysis Aid Actuaries

Most traditional insurers find it overwhelming to transform the innumerable sensitive actuarial processes needed for day-to-day functioning. The problem is amplified because insurance actuaries spend much of their time on secondary activities, such as data reconciliation, rather than focusing on core actuarial tasks such as modeling, strategy development and root cause analysis. These secondary activities are usually low-value, repeatable and time-consuming tasks.

It’s crucial to understand that, unlike other insurance processes, actuarial processes are complex and time-consuming and have a high number of touchpoints. Dynamic, frequently changing regulations can make these processes even more complicated.

For instance, the New York Department of Financial Services (NYDFS) published its Circular Letter Number 1 in 2019 on the use of big data in underwriting life insurance. The NYDFS states that “an insurer should not use external data sources, algorithms or predictive models in underwriting or rating unless the insurer has determined that the processes do not collect or utilize prohibited criteria and that the use of the external data sources, algorithms or predictive models are not unfairly discriminatory.”

This presents a need for full transparency to explain the variables computed and their effects, as well as a need for efficiency so that actuaries spend their time on analysis rather than data reconciliation. Other priorities will depend on the processes. For example, pricing and ALM modeling processes require greater flexibility and transparency, whereas valuation and economic projection models require more precision and prioritize governance over flexibility and transparency.

Irrespective of the modeling processes, legacy source systems, fragmented data, error-prone manual processes and a lack of data standardization lead to problems within actuarial organizations. Analyzing actuarial processes is quite complex due to the interdependencies and relationships of subtasks and files. With advancements in the field of artificial intelligence (AI) and machine learning (ML), copious amounts of data can be processed quite efficiently to identify hidden patterns. Network analysis is widely used in other domains to analyze different elements of a network. Within insurance, it can be applied for fraud detection and marketing. This paper describes an approach where network analysis is leveraged for actuarial process transformation. 

A Coming Science: Graphs and Network Analysis

Graph and network analysis helps organizations gain a deep understanding of their data flows, process roadblocks and other trends and patterns. The first step for graph and network analysis involves using tools to develop visual representations of data to better understand the data. The next step consists of acting on this data, typically by carefully analyzing graph network parameters such as centrality, traversal and cycles.

A graph is a data structure used to show pairwise relationships between entities. It consists of a set of vertices (V) and a set of edges (E). The vertices of a graph represent entities, such as persons, items and files, and the edges represent relationships among vertices. 

Graphs can be directed or undirected. In an undirected graph (Figure 1), the relationship between nodes is symmetric (an edge from A to B implies one from B to A), whereas in a directed graph (Figure 2) the relationship is asymmetric. For process improvements, the dependencies of each task or file on the others in the process need to be modeled. Because these relationships are asymmetric, they should be modeled with a directed graph.
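
To make this concrete, the sketch below builds a small directed graph of hypothetical actuarial process files. It assumes Python with the networkx library, neither of which the article prescribes, and the file names are placeholders.

```python
# Minimal sketch: model file/task dependencies as a directed graph.
# Assumes Python with the networkx library; file names are hypothetical.
import networkx as nx

G = nx.DiGraph()

# Each edge points from a file/task to the file/task that consumes it.
G.add_edges_from([
    ("policy_extract.csv", "data_prep.xlsx"),
    ("assumptions.xlsx", "data_prep.xlsx"),
    ("data_prep.xlsx", "valuation_model.xlsx"),
    ("valuation_model.xlsx", "reserve_summary.xlsx"),
    ("reserve_summary.xlsx", "management_report.xlsx"),
])

# Direction captures the asymmetry: data_prep.xlsx depends on the extract,
# not the other way around.
print(list(G.predecessors("data_prep.xlsx")))  # files feeding data_prep.xlsx
print(list(G.successors("data_prep.xlsx")))    # files fed by data_prep.xlsx
```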

Network Analysis Basics and Process Improvements

Graphs provide a better way of dealing with the dependencies in the various data files, data systems and processes. Once any process is represented as a graph, there are multiple operations and analyses that can be performed. For instance, influencer nodes can be easily identified using centrality measures. Similarly, cycles, cliques and paths can be traced along the network to optimize flow. Network analysis helps assess the current state of processes to identify gaps or redundancies and determine which processes provide maximum value. 

Three key analyses are the most important in any process improvement framework:

  1. Identifying process and data nodes that are crucial in the network 
  2. Tracing from the input to the output in the processes to identify touchpoints
  3. Identifying cyclical references and dependencies in the network and making the flow linear

1. Influential Nodes: Centrality

Centrality measures the influence of a node in a network. As a node’s influence can be viewed differently, the right choice of centrality measures will depend on the problem statement. 

  • Degree Centrality: Degree centrality measures influence based on the number of incoming and outgoing connections of a node. For a directed network, this can be further broken down into in-degree centrality for incoming connections, and out-degree centrality for outgoing connections.
  • Betweenness Centrality: Betweenness centrality measures the influence of a node over the information flow of a network. It assumes that information flows along shortest paths and counts how often a particular node lies on those paths.

These different centrality measures can be used to derive insights about a network. While degree centrality defines strength as the number of neighbors, betweenness centrality defines strength as control over the information passing between other nodes through the node in question. Nodes that score high on both measures are the influential nodes in the network.
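
As an illustration, the hedged sketch below computes degree and betweenness centrality for a toy file network. It assumes Python with the networkx library, and the file names and scores are purely hypothetical.

```python
# Hedged sketch: rank files in a toy process network by centrality.
# Assumes Python with networkx; node names are hypothetical.
import networkx as nx

G = nx.DiGraph([
    ("inforce.csv", "staging.xlsx"),
    ("assumptions.xlsx", "staging.xlsx"),
    ("rates.csv", "staging.xlsx"),
    ("staging.xlsx", "model.xlsx"),
    ("model.xlsx", "reserves.xlsx"),
    ("reserves.xlsx", "report.xlsx"),
])

in_deg = nx.in_degree_centrality(G)    # influence via incoming connections
out_deg = nx.out_degree_centrality(G)  # influence via outgoing connections
betw = nx.betweenness_centrality(G)    # control over information flow

# Files scoring high on both degree and betweenness are the candidate
# influential nodes to examine first.
for node in sorted(G.nodes, key=lambda n: -(in_deg[n] + out_deg[n] + betw[n])):
    print(f"{node:18s} in={in_deg[node]:.2f} out={out_deg[node]:.2f} betw={betw[node]:.2f}")
```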

2. Graph Traversal

Graph traversals are used to understand the flow within the network. They are used to search for nodes within a network by passing through each of the nodes of the graph. Traversals can be made to identify the shortest path or to search for connected vertices in a graph. The latter is of particular importance for making actuarial process improvements. Understanding the path of data throughout the process can help evaluate the process holistically and identify improvement opportunities.
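
A brief sketch of such traversals, under the same assumption of Python with networkx, might look like the following; the lineage shown is invented for illustration.

```python
# Sketch of tracing data flow by graph traversal; assumes Python with networkx.
import networkx as nx

G = nx.DiGraph([
    ("source_system", "extract.csv"),
    ("extract.csv", "staging.xlsx"),
    ("staging.xlsx", "model.xlsx"),
    ("model.xlsx", "output_report.xlsx"),
])

# Everything downstream of the extract (files it ultimately feeds).
print(nx.descendants(G, "extract.csv"))

# Everything upstream of the report (its full data lineage).
print(nx.ancestors(G, "output_report.xlsx"))

# One concrete input-to-output path through the process.
print(nx.shortest_path(G, "source_system", "output_report.xlsx"))
```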

3. Cliques and Cycles

A clique is a set of vertices in an undirected graph in which every two distinct vertices are connected to each other. Cliques are used to find communities in a network and have varied applications in social network analysis, bioinformatics and other areas. For process improvement, cliques find an application in identifying local communities of processes and data. For directed graphs, finding cycles is of great importance in process improvement, as insights mined from investigating cyclical dependencies can be quite useful.
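
The sketch below, again assuming Python with networkx, shows one way to surface cycles in a directed process graph and cliques in its undirected view; the cyclical reference is fabricated for illustration.

```python
# Sketch: detect cycles (directed) and cliques (undirected view).
# Assumes Python with networkx; file names and the cycle are hypothetical.
import networkx as nx

G = nx.DiGraph([
    ("A.xlsx", "B.xlsx"),
    ("B.xlsx", "C.xlsx"),
    ("C.xlsx", "A.xlsx"),        # circular dependency worth investigating
    ("C.xlsx", "D.xlsx"),
    ("D.xlsx", "report.xlsx"),
])

# Cycles point to circular dependencies that keep the flow from being linear.
print(list(nx.simple_cycles(G)))

# Cliques are defined on undirected graphs, so analyze the undirected view.
print(list(nx.find_cliques(G.to_undirected())))
```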

A Step-by-Step Approach for an Actuarial Transformation Using Graph Theory

1. Understanding the Scope of Transformation

Understanding the scope of transformation is of key importance. The number of output touchpoints and files used by the organization is often significantly less than the number of files produced. Moreover, due to evolving regulations, actuarial processes can undergo changes. Some of the key questions to answer at this stage include: 

  • Which processes are in the scope of the transformation?
  • Will these processes undergo changes in the near future due to regulations (US GAAP LDTI/IFRS 17)? 
  • Are all the tasks and files for the chosen process actually required, or is there a scope for rationalization?

2. Understanding Data Flow

Once the scope of the transformation is defined, data dependencies need to be traced. Excel links, database queries and existing data models need to be analyzed. In some cases, manually copying and pasting the data creates breaks in the data flow. In such cases, the analyst needs to fill in the gaps and create the end-to-end flow of the data. Some key aspects to consider at this stage are: 

  • What are the data dependencies in the process?
  • Are there breaks in the data flow due to manual adjustment?
  • What are the inputs, outputs and intermediate files? 

3. Implementing the Network of Files

After mapping the data flow, the graph network can be constructed. The network can then be analyzed to identify potential opportunities, identify key files, make data flows linear and create the goal state for the process. The key analyses to perform at this stage are:

  • Identifying important nodes in the network through degree measures
  • Capturing redundant intermediate files in the system
  • Capturing cyclical references and patterns in the process

Based on the analysis of the network, bottlenecks and inconsistencies can be easily identified. This information can lead to process reengineering and end-to-end data-based process transformation. The results can be validated with business users, and changes can be made. The figures below show some of the patterns that can be captured using network graphs. The input, intermediate and output nodes are color-coded as blue, grey and red respectively.  
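
As a simplified illustration of these analyses, the sketch below tags node roles (input, intermediate, output) and flags pass-through intermediate files. It assumes Python with networkx, and the one-in/one-out rule is only an illustrative heuristic for spotting files that merely stage data, not a criterion from the article itself.

```python
# Sketch: tag node roles and flag pass-through intermediates.
# Assumes Python with networkx; file names are hypothetical.
import networkx as nx

G = nx.DiGraph([
    ("input_a.csv", "staging_1.xlsx"),
    ("staging_1.xlsx", "model.xlsx"),
    ("input_b.csv", "model.xlsx"),
    ("model.xlsx", "staging_2.xlsx"),
    ("staging_2.xlsx", "report.xlsx"),
])

roles = {}
for node in G.nodes:
    if G.in_degree(node) == 0:
        roles[node] = "input"         # blue nodes in the network figures
    elif G.out_degree(node) == 0:
        roles[node] = "output"        # red nodes
    else:
        roles[node] = "intermediate"  # grey nodes

# Heuristic: a one-in, one-out intermediate often only stages data and is
# a candidate for rationalization (to be confirmed with business users).
pass_through = [n for n, role in roles.items()
                if role == "intermediate"
                and G.in_degree(n) == 1 and G.out_degree(n) == 1]
print(pass_through)
```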

The Benefits of Actuarial Process Transformation Using Graph Theory

Due to the inherent complexity of actuarial processes, decomposing process and data flows can be difficult. While analyzing any actuarial sub-process at the lowest level of granularity, it is quite possible to discover multiple related files with lots of related calculations. Moreover, a major challenge quite common in actuarial processes is a lack of data documentation. Graph theory enables insurers to overcome these challenges: 

  • Creating a Data Lineage From Source System to Output: Graph networks help improve the quality of data feeding into subsequent sub-processes. This benefits actuaries, as higher-quality data produces better models regardless of the techniques being employed
  • Improved Visualization and Bottleneck Identification: Graph networks help visualize the relationships between the various databases. The networks also help build a foundation for a data factory that creates a 360-degree view of useful information, enables data visualization and supports future self-service analytics. Moreover, several analyses can identify process bottlenecks that can be investigated further.
  • Enabling Flexibility and Governance: On the surface, flexibility and governance may sound like competing priorities. Increased flexibility makes it difficult to control what is happening in the process and leads to increased security risks. However, graph theory helps manage governance by visualizing complicated data relationships and helps in maintaining data integrity. 
  • Speed of Analysis: Traditionally, most of the time spent producing models is used to gather, clean and manipulate data. Graph theory helps in deriving dependencies, enabling efficient processes and providing quicker results for a given problem. Graph theory can be used to rationalize non-value-adding files or processes, leading to streamlined and automated process flows. By linking the data elements from outputs back to source systems, organizations can analyze processes in depth through back propagation.

Case Example

A major life insurance player in the U.S. engaged EXL to examine its annuities valuation process and identify process improvement opportunities. There were multiple interfaces in the annuities valuation process, and many stakeholders were involved. Regulatory frameworks, a high number of touchpoints, actuarial judgment and manual adjustments made the annuities valuation process complex. Moreover, the client had multiple source systems from which data were pulled. Data came to the actuarial team through SQL servers, data warehouses, Excel, Access databases and flat files. As a result of the data fragmentation, a significant amount of effort was spent on data reconciliation, data validation and data pulls. While some aspects of these steps were automated, many of the processes were manually intensive, wasting actuarial bandwidth. 

EXL deployed a two-speed approach, tackling the problem from a short-term local optimization as well as from a long-term process improvement perspective. The local optimization approach focused on understanding the standard operating procedures for the individual tasks to automate the manual efforts. These optimizations generated quick wins but did not address the overall efficiency and improvement goals per se. 

Knowing that there was a possibility of finding multiple tasks that could be rationalized, EXL prioritized and balanced the local and long-term improvements. This included speaking to multiple stakeholders to identify the regulatory GAAP processes for deferred annuities that needed to be the long-term focus, and which other processes could be addressed through local optimization.

For the deferred annuities GAAP process, EXL leveraged network analysis to analyze the file dependencies. Each of the hundreds of process files and tasks was categorized as a pure input, output or intermediate. These files were modeled as nodes in the network, while the data flows were modeled as edges. To capture the data linkages, a Visual Basic macro-based tool was deployed that automatically identified the Excel links and formulae. Centrality measures were calculated for each of the files and attached as node attributes. The centrality measures highlighted important sub-processes and communities of files. For example, the topside sub-processes, which ingested more than 20 files, and the annual reporting sub-processes were both high on degree centrality.

The team also found 11 avoidable cyclical references in the data flows, which were made linear to create the goal process state. It was also observed that some of the intermediate files were merely being used to stage the data; they had basic data checks embedded but did not add much value, so they were rationalized. Network analysis helped provide an understanding of the data flows and create the to-be state for process improvement. Moreover, the time required to analyze hundreds of tasks and files was reduced significantly. The team was able to identify a reduction in effort of more than 30% through a combination of automation and data-based solutions.
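
As a sketch of what making the flow linear can look like in code (assuming Python with networkx, with hypothetical file names and a hypothetical backward link), removing an avoidable cyclical reference lets a topological sort produce a single linear run order:

```python
# Sketch: after removing an avoidable cyclical reference, a topological
# sort yields a linear run order. Assumes Python with networkx.
import networkx as nx

G = nx.DiGraph([
    ("staging.xlsx", "model.xlsx"),
    ("model.xlsx", "topside.xlsx"),
    ("topside.xlsx", "staging.xlsx"),   # avoidable backward reference
    ("model.xlsx", "report.xlsx"),
])

# Remove the backward link agreed with the business users...
G.remove_edge("topside.xlsx", "staging.xlsx")

# ...and the remaining flow can run in a single linear pass.
print(list(nx.topological_sort(G)))
```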

Actuaries Beware: Pricing Cyber Risk Is a Different Ballgame

Growth in the cyber insurance market has recently occurred at warp speed, with more than 60 companies writing in the U.S. alone and with market premiums amounting to approximately $2.5 billion annually. The impressive year-over-year growth is expected to continue into the foreseeable future, with a variety of estimates placing market premium between $7.5 billion and $20 billion by the end of 2020.

This impressive premium growth is because of several factors — perhaps most notably, reporting of the various types of cyber attacks in the news on a regular basis, driving both awareness and fear. Not surprisingly, cyber risk has become a board-level concern in today’s increasingly connected world. Additionally, recent growth of the Internet of Things has given rise to the seemingly infinite number of attack vectors affecting every industry. Individuals and entities of any size, spanning all regions of the world, are potential victims.

The apparent need for new apps and devices that link to one another, developed without a focus on the security of those apps or devices, gives reason to worry. It also creates an immediate need for a suite of security analytics products that helps insurance companies write cyber insurance more confidently.

State of Data

Actuaries are creative and intelligent problem solvers, but this creativity and intelligence is tested thoroughly when pricing cyber insurance. Actuaries still need the same suite of products used within any other catastrophe-exposed lines of business, but there are many challenges and complications with respect to cyber insurance that make this a particularly difficult task. That is, we still need an underwriting tool, an individual risk-pricing tool and a catastrophe-aggregation model, but certain aspects of these tools vary significantly from what we’ve seen in the past or have grown accustomed to as actuaries.

Data lies at the center of any actuarial project, but data in this space is very limited for a number of reasons. To consider why this is the case, let’s take a step back and consider the wider context. We first want to think about both how to define the cyber peril and what types of attacks are possible.

Risks could lie anywhere between smaller attacks on individuals involving brute-force attempts to steal credentials and conduct identity theft; and state-sponsored attacks on another government entity involving both physical damage and theft of critically sensitive intelligence. We may see malware deployed on a commonly used piece of software or hardware at a massive scale; infrastructures or processes taken down using denial of service; or a breach of a popular database or platform that affects many entities simultaneously.

Many of the attack variants in this hypothetical list have never happened, and some may never happen. Even for those that have happened, information pertaining to the breach — both the attack specifics and the actual dollar impact of the attack — is hard to come by.

Several third-party data sources are currently available, but they tend to concentrate primarily on those pieces of data or attack types that are most accessible — particularly data breach and privacy violation claims. This, naturally, is a very small subset of what we need to price for as actuaries.

Unfortunately, there is fairly loose regulation around the reporting of different types of attacks. Even within the data breach family, there is a tremendous lack of standardization across states with respect to reporting. Criteria for whether a report is required may include whether the data was encrypted, how many people were actually affected by the breach and the type of data stolen (PHI, PII, PCI, etc.).

External research can be done on public sources to find the aggregate amount of loss in some cases, but there is little to no incentive for the breached entity to provide more information than is absolutely required. Thus, while we want to price data breach events at a very granular level, it’s often difficult to obtain dollar figures at this level. For instance, a data breach will lead to several costs, both first party and third party. A breached entity, at minimum, will likely have to:

  • Notify affected customers;
  • Offer credit monitoring or identity-theft protection to those affected;
  • Work with credit card companies to issue new credit cards;
  • Foot bills associated with legal liability and regulatory fines; and
  • Endure reputational damage.

It’s impractical to assume that a breached entity would find it attractive to publicize the amount lost to each of these individual buckets.

Worse, other events that either don’t require reporting or have never happened clearly give us even less to work with. In these cases, it’s absolutely critical that we creatively use the best resources available. This approach requires a blend of insurance expertise, industry-specific knowledge and cyber security competence. While regulation will continue to grow and evolve — we may even see standardization across both insurance coverages offered and reporting requirements by state or country — we must assume that in the near future, our data will be imperfect.

Actuarial Challenges

Though many companies have entered the cyber insurance space, very few are backed by comprehensive analytics. Insurers eager to grab market share are placing too much emphasis on the possibility of recent line profitability continuing into the future.

The problem here is obvious: Cyber insurance needs to be priced at a low loss ratio because of catastrophic or aggregation risk. Once the wave of profitability ends, it could do so in dramatic fashion that proves devastating for many market participants. The risk is simply not well understood across the entirety of the market, and big data analytics is not being leveraged enough. In addition to the glaring data and standardization issues already discussed, actuaries face the following eight key challenges:

1. No Geographical Limitation

On the surface, the cyber realm poses threats vastly different from what we’ve seen in other lines of business. Take geography. We are used to thinking about the impact of geography as it pertains to policyholder concentration within a specific region. It’s well understood that, within commercial property insurance, writers should be careful with respect to how much premium they write along the coast of Florida, because a single large hurricane or tropical storm can otherwise have an absolutely devastating effect on a book of business. Within the cyber world, this relationship is a bit more blurry.

We can no longer just look at a map. We may insure an entity whose server in South Africa is linked to an office in Ireland, which, in turn, is linked to an office in San Francisco. As existing threat actors are able to both infiltrate a system and move within that system, the lines drawn on the map have less meaning. Not to say they’re not important — we could have regulatory requirements or data storage requirements that differ by geography in some meaningful way — but “concentration” takes a different meaning, and we need to pay close attention to the networks within a company.

2. Network Risk From an External Perspective

In the cyber insurance line, we need to pay attention to the networks external to an insured company. It’s well documented that Target’s data breach was carried out using access stolen from a third-party HVAC vendor. By examining Target’s internal systems alone, no one would have noticed the vulnerability that was exploited.

As underwriters and actuaries, we need to be well aware of the links from one company to another. Which companies does an insured do business with or contract work from? Just as we mentioned above with apps and devices that are linked, the network we are worried about is only as strong as the weakest link. Another example of this is the recent attacks on a Bangladeshi bank. Attackers were able to navigate through the SWIFT system by breaching a weaker-than-average security perimeter and carrying out attacks spanning multiple banks sharing the same financial network.

3. Significance of the Human Element

Another consideration and difference from the way we traditionally price is the addition of the human element. While human error has long been a part of other lines of business, we have rarely considered the impact of an active adversary on insurance prices. The one exception to this would be terrorism insurance, but mitigation of that risk has been largely assisted by TRIA/TRIPRA.

However, whenever we fix a problem simply by imposing limits, we aren’t really solving the larger problem. We are just shifting liability from one group to another; in this case, the liability is being shifted to the government. While we can take a similar approach with cyber insurance, that would mean ultimately shifting the responsibility from the insurers to the reinsurers or just back to the insureds themselves. The value of this, to society, is debatable.

A predictive model becomes quite complex when you consider the different types of potential attackers, their capabilities and their motivations. It’s a constant game of cat and mouse, where black hat and white hat hackers are racing against each other. The problem here is that insurers and actuaries are typically neither white hat nor black hat hackers and don’t have the necessary cyber expertise to confidently predict loss propensity.

4. Correlation of Attacks

In attempting to model the “randomness” of attacks, it is important to think about how cyber attacks are publicized or reported in the news, about the reactions to those attacks and about the implications for future attacks. In other words, we now have the issue of correlation across a number of factors. If Company A is breached by Person B, we have to ask ourselves a few questions. Will Company A be breached by Person C? Will Person B breach another company similar to or different from Company A? Will Person D steal Person B’s algorithm and use it on an entirely different entity (after all, we’ve seen similar surge attacks within families such as ransomware)? If you as the reader know the answers to these questions, please email me after reading this paper.

5. Actuarial Paradox

We also have to consider the implications on the security posture of the affected entity itself. Does the attack make the perimeter of the affected company weaker, therefore creating additional vulnerability to future attacks? Or, alternatively, does the affected company enact a very strong counterpunch that makes it less prone to being breached or attacked in the future? If so, this poses an interesting actuarial dilemma.

Specifically, if a company gets breached, and that company has a very strong counterpunch, can we potentially say that a breached company is a better risk going forward? Then, the even-more-direct question, which will surely face resistance, is: Can we charge a lower actuarial premium for companies that have been breached in the past, knowing that their response to past events has actually made them safer risks? This flies directly in the face of everything we’ve done within other lines of business, but it could make intuitive sense depending on incident response efforts put forth by the company in the event of breach or attack.

6. Definition of a Cyber Catastrophe

Even something as simple as the definition of a catastrophe is in play. Within some other lines of insurance business, we’re used to thinking about an aggregate industry dollar threshold that helps determine whether an incident is categorized as a catastrophe. Within cyber, that may not work well. For instance, consider an attack on a single entity that provides a service for many other entities. It’s possible that, in the event of a breach, all of the liability falls on that single affected entity. The global economic impact as it pertains to dollars could be astronomical, but it’s not truly an aggregation event that we need to concern ourselves with from a catastrophe modeling perspective, particularly because policy limits will come into play in this scenario.

We need to focus on those events that affect multiple companies at the same time and, therefore, provide potential aggregation risk across the set of insureds in a given insurance company’s portfolio. This is, ultimately, the most complicated issue we’re trying to solve. Tying together a few of the related challenges: How are the risks in our portfolio connected with each other, now that we can’t purely rely on geography? Having analytical tools available to help diagnose these correlations and the potential impacts of different types of cyber attacks will dramatically help insurers write cyber insurance effectively and confidently, while capturing the human element aspect of the threats posed.

7. Dynamic Technology Evolution

If we can be certain of one thing, it’s that technology will not stop changing. How will modelers keep up with such a dynamic line of business? The specific threats posed change each year, forcing us to ask ourselves whether annual policies even work or how frequently we can update model estimates without annoying insurers. Just as we would write an endorsement in personal auto insurance for a new driver, should we modify premium mid-term to reflect a newly discovered specific risk to an insured? Or should we have shorter policy terms? The dynamic nature of this line forces us to rethink some of the most basic elements that we’ve gotten used to over the years.

8. Silent Coverage

Still, all of the above considerations only help answer the question of what the overall economic impact will be. We also need to consider how insurance terms and conditions, as well as exclusions, apply to inform the total insurable cost by different lines of insurance. Certain types of events are more insurable, some less. We have to consider how waivers of liability will be interpreted judicially, as well as the interplay of multiple lines of business.

It’s safe to assume that insurance policy language written decades ago did not place much emphasis on cyber exposure arising from a given product. In many cases, silent coverage of these types of perils was potentially entirely accidental. Still, insurers are coming to grips with the fact that this is an ever-increasing peril that needs to be specifically addressed and that there exists significant overlap across multiple lines of business. Exclusions or specific policy language can, in some cases, be a bit sloppy, leading to confusion regarding which product a given attack may actually be covered within. This becomes the last, but not least, problem we have to answer.

Conclusion

The emerging trends in cyber insurance raise a number of unique challenges and have forced us to reconsider how we think about underwriting, pricing and aggregation risk. No longer can we pinpoint our insureds on a map and know how an incident will affect the book of business. We need to think about both internal and external connections to an insured entity and about the correlations that exist between event types, threat actors and attack victims. In cases where an entity is attacked, we need to pay particular attention to the response and counterpunch.

As the cyber insurance market continues to grow, we will be better able to determine whether loss dollars tend to fall neatly within an increasing number of standalone cyber offerings or whether insurers will push these cyber coverages into existing lines of business such as general liability, directors and officers, workers’ compensation or other lines.

Actuaries and underwriters will need to overcome the lack of quality historical data by pairing the claims data that does exist with predictive product telemetry data and expert insight spanning insurance, cyber security and industry. Over time, this effort may be assisted as legislation or widely accepted model schema move us toward a world with standardized language and coverage options. Nonetheless, the dynamic nature of the risk, with new adversaries, technologies and attack vectors emerging on a regular basis, will require approaches that are continuously monitored and updated.

In addition, those that create new technology need to realize the importance of security in the rush to get new products to market. White hat hackers will have to work diligently to outpace black hat hackers, while actuaries will use this insight to maintain up-to-date threat actor models with a need for speed unlike any seen before by the traditional insurance market.

Some of these challenges may prove easier than they appear on paper, while some may prove far more complicated. We know actuaries are good problem solvers, but this test will be a serious and very important one that needs to be solved in partnership with individuals from cyber security and insurance industries.

Where Is the Real Home for Analytics?

One of the fascinating aspects of technology consulting is having the opportunity to see how different organizations address the same issues. These days, analytics is a superb example. Even though every organization needs analytics, they are not all coming to the same conclusions about where “Analytics Central” lies within the company’s structure. In some carriers, marketing picked up the baton first. In others, actuaries have naturally been involved and still are. In a few cases, data science started in IT, with data managers and analytical types offering their services to the company as an internal partner, modeled after most other IT services.

In several situations that we’ve seen, there is no Analytics Central at all. A decentralized view of analytics has grown up in the void – so that every area needing analytics fends for itself. There are a host of reasons this becomes impractical, so often we find these organizations seeking assistance in developing an enterprise plan for data and analytics. This plan accounts for more than just technology modernization and nearly always requires some fresh sketches on the org chart.

Whichever situation may represent the analytics picture in your company, it’s important to note that no matter where analytics begins or where it currently resides, that location isn’t always where it is going to end up.

Ten years ago, if you had asked any senior executive where data analytics would reside within the organization, he or she would likely have said, “actuarial.” Actuaries are, after all, the original insurance analytics experts and providers. Operational reporting, statistical modeling, mortality on the life side and pricing and loss development on the P&C side – all of these functions are the lifeblood that keep insurers profitable with the proper level of risk and the correct assumptions for new business. Why wouldn’t actuaries also be the ones to carry the new data analytics forward with the right assumptions and the proper use of data?

Yet, when I was invited to speak at a big data and analytics conference with more than 100 insurance executives and interested parties recently, there was not one actuary in attendance. I don’t know why — maybe because it was quarter-end — but I can only assume that, even though actuaries may want to be involved, their day jobs get in the way. Quarterly reserve reviews, important loss development analysis and price adequacy studies can already consume more time than actuaries have. In many organizations, the actuarial teams are stretched so thin they simply don’t have the bandwidth to participate in modeling efforts with unclear benefits.

Then there is marketing. One could argue that marketing has the most to gain from housing the new corps of data scientists. If one looks at analytics from an organizational/financial perspective, marketing ROI could be the fuel for funding the new tools and resources that will grow top-line premium. Marketing also makes sense from a cultural perspective. It is the one area of the insurance organization that is already used to blending the creative with the analytical, understanding the value of testing methods and messages and even the ancillary need to provide feedback visually.

The list of possibilities can go on and on. One could make a case for placing analytics in the business, keeping it under IT, employing an out-of-house partner solution, etc. There are many good reasons for all of these, but I suspect that most analytics functions will end up in a structure all their own. That’s where we’ll begin “Where is the Real Home for Analytics, Part II” in two weeks.

How Risk Management Drives up Profits

Diane Meyers, director of corporate insurance for YRC Worldwide, manages the insurance and associated risks of one of the most hazard-prone industries in the world – trucking. YRC is the largest long-haul trucking company in the U.S., operating in all 50 states and Canada. It has 14,500 tractors and 46,500 trailers and ships 70% of all transported cargo throughout the U.S. each year. YRC’s origins trace back to 1924 to the Akron, Ohio-based company Yellow Cab Transit before the independent trucking companies of Yellow, Roadway, Reimer and others were combined in 2009 under the YRC banner.

I asked Diane about her biggest challenges in managing the risks associated with the YRC fleet, including 32,000-plus employees (a number that has grown in busy times to more than 50,000) and 400 physical locations. She said her top three hot buttons are: collateral, collateral and collateral.

For anyone familiar with high-deductible or self-insured workers’ comp programs, insurers and state governments rely on a company’s posted collateral (aka security deposit) as the financial backstop should the company go bankrupt or default on its obligations. Companies with high-risk jobs can experience workers’ comp costs that can easily be 400% to 500% greater than those for white-collar jobs. Posted collateral needs to cover the costs expected to be associated with the life of each claim and can be a huge drain for any company, including YRC.

Diane, who reports to the treasurer, says YRC negotiates collateral requirements with one excess workers’ comp insurer for its high-deductible program in 24 states. Collateral is typically posted using LOCs (letters of credit) or surety bonds. YRC’s self-insured program in the remaining 26 states means meeting the collateral demands of their 26 separate governing entities.

Meeting with YRC’s carrier’s actuary along with her own actuary every three months, Diane also has to deal with each state at least annually. “Working with multiple sets of actuaries is a whole other challenge, since I have to educate them on the realities of our own workers’ comp program and its achievements, like return-to-work,” she says. “Besides that, in working with actuaries, I have to speak their language and understand how they work their crystal ball.”

Diane added: “These are monies that are tied up for decades to come that cannot otherwise be used for our company’s operations. I have to find ways to save the company from the ever-changing collateralization demands through ongoing, complex negotiations with insurers and regulators. Safety and loss control programs have to demonstrate traction and real savings to our workers’ comp and liability exposures.” Diane noted that safety is so important that each YRC operating division has its own safety department.

As with most large companies, YRC is self-insured for most of its liability risks. To assist Diane with vehicle and general liability claims, YRC uses its own, as well as outsourced, legal counsel to manage risks up to its retention level. There are also a myriad of state and federal rules and regulations regarding long-haul trucking that require strict adherence and attention to changes.

When asked about her unique challenges at YRC, Diane said, “I have to understand the legal demands and expectations of all 50 states, Canada, and D.C.”

She also faces the complexity of working with a corporation that has grown through acquisitions of older companies. To find key claim-related data, she says, “I have had to go through various insurance policies and records of the companies we acquired going back as far as the ’60s!”

With the ever-changing demands for long-haul transportation by various industries, YRC experiences significant fluctuations in its workforce. There have been times when the workforce has expanded more than 50%, and, during recessions, there have been significant reductions. A swing either way can create huge risk management challenges, especially when there are continuing workers’ comp claims to deal with. This is made even tougher because most of YRC’s employees are in the Teamsters union, and some issues could require collective bargaining or at least close communication and cooperation between labor and management.

Modernization: Actuaries Must, Too

To effectively produce a variety of new financial reporting, reserving and risk metrics, actuarial departments will need to modernize with new tools, hardware, processes and skills. This will be a significant undertaking, especially considering how most organizations and regulatory environments are constantly changing. Re-engineering projects will require careful planning and will affect people, processes and technology. Developing a modernization strategy that provides a path to real change includes visualizing a compelling future state, articulating and communicating expectations, defining a roadmap with achievable goals and avoiding overreach during the implementation.

Case for change

The insurance market has changed significantly in recent years, which has had a particularly pronounced effect on how companies operate, meet internal and external demands, report externally and comply with regulations. However, many insurers have not modernized their actuarial functions to keep pace with these changes and are struggling to effectively meet not just existing demands but also impending ones.

Specifically, drivers of actuarial modernization include:

  • Internal drivers – The audit committee seeks assurance that reserves and risk-based capital are sufficient and being determined in a well-controlled environment. Senior management wants actuarial departments that work toward the same strategic goals as the rest of the company. Business units are looking for trusted actuarial advisers who can collaborate effectively with them, as well as develop practical solutions to complex problems to help them meet their business objectives. Lastly, the finance department needs timely insight into how the reserve movements affect earnings and equity.
  • External drivers – The need to issue financial reports under multiple accounting bases necessitates the adoption of new processes as well as the collection of additional data. Similarly, regulatory requirements have mandated additional analyses, various views of the book of business and a push toward more forward-looking information. Other external parties, including investors and rating agencies, demand more information with a greater degree of transparency than ever before.

The modernized actuarial function

In a modernized company, the actuarial, finance, risk and IT functions have clearly defined, collective expectations and utilize common, efficient processes. More specifically, the following characterizes a modernized actuarial function:

  • Data – The organization, with significant actuarial input, clearly defines its data strategy via integrated information from commonly recognized sources. The goal of this strategy is information that users can extract and manipulate with minimal manual intervention at a sufficient level of detail to allow for on-demand analysis.
  • Tools and technology – Tools and technology enhance the effectiveness of the actuarial department by delivering information faster, more accurately and more transparently vs. the traditional, ad hoc computing done by end users. Specifically, tools that use data visualization can more effectively convey trends and results to management. Algorithms can be programmed to automate first-cut reserving and other actuarial analyses each reporting period based on rules that can help point staff to business segments that may require deeper analysis in the quarter (an illustrative sketch of such a first-cut calculation follows this list).
  • Methods and analysis – Modernized actuarial organizations enhance traditional actuarial methodologies with additional cutting-edge methods that yield superior insights (e.g., predictive analytics, which have transformed personal lines pricing and are being adopted in the commercial arena). Another example is stochastic analysis, which enhances deterministic approaches with statistical rigor, helps actuaries prepare transparent reserve range indications and enables management to better understand uncertainties.
  • Processes – Operations are reviewed from the top down and well-defined in terms of controls, responsibilities, timing and outputs, particularly in the quarter-close procedures for reserves. Automation of key processes is a primary organizational objective. Modernized actuarial organizations have streamlined processes that eliminate unnecessary or excessive evaluation.
  • Organizational structure – The ability to deliver superior business intelligence to management often depends on how an organization uses its actuarial resources. Many companies are debating the merits of centralized, decentralized or hybrid organizational structures. While each structure has its own set of advantages and disadvantages, organizational structure is not the most vital factor in a function’s success. Rather, it is much more important that actuaries serve as trusted advisers to company stakeholders while fostering a culture of innovative thinking that identifies new information and opportunities to test, learn and scale.
  • Reporting and governance – Strong governance, particularly around data, analysis review and challenge, issue escalation and resolution, and reporting, is a cornerstone of a modernized actuarial function. Actuarial functions solicit stakeholder input on information demands and provide consistent, quarterly reporting packs that satisfy these stakeholders’ reporting demands. Additionally, modernized actuarial functions have formal policies and procedures that clarify the roles and responsibilities of management, reserve committees and the audit committee.
  • Business intelligence – Modernized actuarial functions focus on providing operational metrics that meet individual stakeholder needs and objectively relate business performance. For example, senior managers often desire corporate dashboards that provide them access to real-time information on business performance to help them make strategic decisions.
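
As an illustration of the kind of rules-based first-cut calculation mentioned above, here is a minimal chain-ladder sketch assuming Python with NumPy. The loss triangle is entirely hypothetical, and a production process would add diagnostics, alternative methods and actuarial review.

```python
# Illustrative rules-based "first-cut" reserve estimate using a basic
# chain-ladder calculation. Assumes Python with NumPy; the triangle
# values are hypothetical.
import numpy as np

# Cumulative paid losses: rows = accident years, columns = development ages.
triangle = np.array([
    [100.0, 150.0, 175.0, 180.0],
    [110.0, 168.0, 190.0, np.nan],
    [120.0, 180.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n_dev = triangle.shape[1]

# Volume-weighted age-to-age development factors.
factors = []
for j in range(n_dev - 1):
    both = ~np.isnan(triangle[:, j]) & ~np.isnan(triangle[:, j + 1])
    factors.append(triangle[both, j + 1].sum() / triangle[both, j].sum())

# Project each accident year from its latest diagonal to ultimate.
reserves = []
for row in triangle:
    last = int(np.where(~np.isnan(row))[0].max())
    ultimate = row[last]
    for j in range(last, n_dev - 1):
        ultimate *= factors[j]
    reserves.append(ultimate - row[last])

print("Age-to-age factors:", np.round(factors, 3))
print("Indicated first-cut reserves by accident year:", np.round(reserves, 1))
```

Rules layered on top of such a calculation could then flag, for example, segments whose indicated reserves move materially from the prior quarter for deeper review.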

Benefits of insurance modernization

A modernized actuarial function produces insightful, strategic information and delivers the value management desires while meeting external stakeholders’ regulations and demands. Modernized actuarial functions have robust feedback loops among pricing, reserving and capital management. Additionally, the modernized actuarial function understands the business’ fundamental performance and takes an active role in helping management define the company’s future direction.

Modernization represents a fundamental shift in the actuarial function’s priorities. Traditionally, the actuarial function has provided a retrospective look at business performance despite various data, technology, process and personnel limitations. However, modernization seeks to address these limitations and allow the actuarial function the freedom to innovate, dig deeper into the business, provide forward-looking insights and have a strategic partnership with management.

At modernized insurers, tailored reports direct actuaries’ attention to portfolios with unusual characteristics. Automated programs quickly populate various templates for additional ad hoc analyses and drill-down investigation. Data visualization tools provide management comfort with findings and remediation recommendations. Cross-functional reporting and implementation teams fluidly improve on-the-ground results. Research features exploratory environments (“sandboxes”) and widespread data access that helps innovators discover emerging trends early, leading to potential differentiators. As an added benefit, actuarially modernized functions operate at a much higher level of efficiency, with greatly reduced levels of time needed for manual processing and data manipulation.

Factors for successful modernization/key considerations

The thought of overhauling entire systems, processes and functional areas may feel overwhelming for company executives. This is understandable, as comprehensive modernization is a long journey that likely will have a significant price tag. As a result, many companies address modernization in steps. Although, in an ideal world with limitless resources, modernization could occur in a “big bang,” there is significant value in first addressing the areas in most need of modernization (while maintaining an overarching focus on holistic modernization). As one area becomes more streamlined and efficient, other areas will start to reap the benefits.

Regardless of the breadth of modernization initiatives, modernization strategies will require a holistic consideration of data, methods and analyses, tools and technology, actuarial processes and human capital requirements.  These strategies will also need to address the business and operational changes necessary to deliver new business intelligence metrics. Any weak links between these closely connected components will limit the realization of actuarial modernization.

Although a modernization strategy should be holistic to avoid “digging up the road multiple times,” it is possible to tackle modernization issues in logical, progressive ways.

Achieving the vision

The first step is a comprehensive assessment of current processes that identifies the areas in greatest need of modernization. If we consider each modernization dimension (e.g., data, processes, technology) as a gear in motion, this first step involves identifying which gear in the function does not work in concert with the others. Stakeholders should collaborate on creating a comprehensive plan of action with an objective view of the dimensions that require immediate attention, while keeping in mind how each gear affects the organization as a whole.