
Achieving Innovation in a Regulated Industry

Achieving innovation in an industry that is heavily regulated can be challenging. First of all, regulation by definition imposes restrictions on what is allowed — for good reason, in many cases. Additionally, there are direct costs associated with regulation, and spending more on achieving compliance may mean fewer resources are available to invest in innovation. Regulation may also foster a way of thinking and culture that are counterproductive to truly revolutionary innovation.

The Impetus for Change

The COVID-19 pandemic has accelerated the digitization agenda for insurance companies. There are real challenges with applying traditional means for executing a sale, performing initial underwriting or assessing a claim in an era of social distancing. As individuals and companies have done their best to adhere to necessary precautions, the need for insurers to accelerate their digitization journey has emerged.

The long-term impacts of the COVID-19 pandemic on customer expectations are still unknown, but an important trend to date is an uptick in e-commerce. This trend is unlikely to dissipate — at least in the short term — so insurers need to find ways to respond to changing customer behaviors and expectations. This is a good time for insurers to take a step back and think about how best to set themselves up to achieve their innovation goals.

Insurance regulation is broad. For example, it mandates that insurers are appropriately set up and licensed, that the products sold are appropriate and that insurers maintain an appropriate level of financial health.

Regulation builds public confidence, which is critical for an industry that essentially sells a promise to fulfill a future obligation. Without public confidence, we simply don’t have an industry.

Some argue that insurers have fallen behind most other industries when it comes to achieving digital transformation. Even compared with other financial services that have accelerated their digitization transformation — such as banking, personal savings and investments — the insurance industry has lagged. The banking and investment sectors have sought to rethink how they do business and how they engage with their customers, and they have implemented end-to-end integrated solutions that ensure a seamless customer experience.

For the most part, insurance companies have focused on moving what they currently do to a digital platform, as opposed to rethinking what they currently do and how they do it. (There are, of course, challenges given the personal and emotional nature of buying insurance, particularly life insurance.)

Types of Innovation

Broadly speaking, innovation can be divided into two categories:

  • Incremental. As the name suggests, incremental innovation is a gradual build-up of little improvements that result in better, faster and cheaper performance.
  • Breakthrough. This category is revolutionary and often disrupts the industry. Breakthrough innovation involves a complete reimagining of what is possible.

Insurers should consider taking different approaches to achieve their objectives for incremental innovation and breakthrough innovation, but, for both, it’s important to create an innovation strategy that is aligned to the broader company strategy and risk appetite. As an example, to the extent that an insurer’s competitive advantage is its underwriting capabilities, then perhaps that should be the focus for its innovation strategy. The company might double down on the future of underwriting — driving a move from initial underwriting to continuous underwriting, fluid-free underwriting and so on.

See also: COVID-19 Highlights Gaps, Opportunities

Incremental innovation is best achieved through internal efforts because making gradual improvements to existing practices often requires a deep understanding and appreciation of existing practice — what we do, how we do it and why we do it the way we do. Effective collaboration among internal research and development (R&D) teams and the business can generate appropriate returns on incremental innovation.

An internally led effort does not mean the absence of external resources, however. To the contrary, external resources can complement internally led efforts and may provide much-needed subject-matter expertise or offer skills or experience that may not be available internally. External resources also can be used to provide necessary bandwidth that may not exist on the internal team.

Incremental innovation often is done within the constraints of existing regulation. Gains from this form of innovation are generally moderate at best, but the burden of regulation doesn’t tend to overly inhibit progress.

Breakthrough innovation, on the other hand, is best achieved through externally led efforts. This is particularly true for heavily regulated industries, where the prevailing culture can be counterproductive to revolutionary innovation.

The emergence of insurtech firms, which are often led by individuals from other industries, provides a breath of fresh air. These companies and individuals can help traditional insurance carriers reimagine what is possible, because they are not inhibited by years of insurance industry knowledge and experience of how things have always been done. They are free to think of an ideal future state and use that as a starting point for a new solution.

One of the challenges of breakthrough innovation is that regulations often must be changed. That means demonstrating a benefit to policyholders, improving the stability of an insurance company or providing a benefit to the industry as a whole. And that takes time. Companies committing to breakthrough innovation may be committing to a notable investment that requires partnering with insurtech firms or leveraging innovations from other industries.

The first wave of insurtech firms was a source of dread for incumbent insurers. However, what could be termed “Insurtech 2.0” today is largely the exploration of partnerships between insurtech firms and incumbents.

Breakthrough innovation can also be achieved through externally led industry groups or collaborations. A good example is the Blockchain Insurance Industry Initiative (B3i) consortium, which is owned by a group of more than 40 (re)insurers. The consortium aims to deliver better solutions for consumers through faster access to insurance with lower administrative costs.

Opportunities exist to explore breakthrough innovation for the insurance industry as a whole through further collaboration. Innovations developed through industry groups may be more effective at getting regulatory buy-in, especially where tweaks to existing regulation are needed.

Expanding the Role of the Actuary

Keeping the consumer in mind should be at the heart of any breakthrough innovation strategy. Technological advances and new sources of data make new customer engagement models possible. Actuaries working in traditional roles at insurers often have been far removed from the end consumer and mostly focused on back- or middle-office activities. However, technological advances can blur the lines between front- and middle-office activities. For example, moving from initial underwriting models that most insurers use today to a continuous underwriting model will blur these lines. There are opportunities for actuaries at insurance companies to get closer to the end consumer and expand their role.

Tesla’s approach to innovation includes having engineers front and center in the design process. Its engineers work closely with the design team to develop an appropriate product for the consumer, instead of the traditional approach of using an iterative process where the designers create something only to later test it with the engineers for feasibility. Tesla found its approach, often referred to as “design thinking,” to be a more effective process.

One can think of actuaries as the engineers of an insurance company, and we can be more involved in the design process when the end consumer is being considered. This expands the role of the actuary to front-office activities, which in turn can increase the speed of innovation in the industry.

See also: How to Outperform on Innovation


The challenges to achieving innovation in a heavily regulated industry like insurance can be overcome by identifying the different types of innovation and establishing the appropriate strategy for each. Incremental innovation is best achieved through internally led efforts, while breakthrough innovation is best achieved through externally led efforts. Externally led efforts for insurers may occur through partnerships with insurtech firms and industrywide collaborations. But remember, any innovation strategy must be aligned with a company’s overall strategy and risk appetite.

This is an exciting era for insurance, and actuaries have an opportunity to expand and redefine their roles at an insurer in these changing times.

This article first appeared in The Actuary magazine online, January 2021

Graph Theory, Network Analysis Aid Actuaries

Most traditional insurers find it overwhelming to transform the innumerable sensitive actuarial processes needed for day-to-day functioning. The problem is amplified because many insurance actuaries spend much of their time on secondary activities, such as data reconciliation, rather than on core actuarial tasks such as modeling, strategy development and root cause analysis. These secondary activities are usually low-value, repeatable and time-consuming tasks.

It’s crucial to understand that, unlike other insurance processes, actuarial processes are complex and time-consuming and have a high number of touchpoints. Dynamic, frequently changing regulations can make these processes even more complicated.

For instance, the New York Department of Financial Services (NYDFS) published its Circular Letter Number 1 in 2019 on the use of big data in underwriting life insurance. The NYDFS states that “an insurer should not use external data sources, algorithms or predictive models in underwriting or rating unless the insurer has determined that the processes do not collect or utilize prohibited criteria and that the use of the external data sources, algorithms or predictive models are not unfairly discriminatory.”

This presents a need for full transparency to explain the variables computed and their effects, as well as a need for efficiency so that actuaries spend their time on analysis rather than data reconciliation. Other priorities will depend on the processes. For example, pricing and ALM modeling processes require greater flexibility and transparency, whereas valuation and economic projection models require more precision and prioritize governance over flexibility and transparency.

Irrespective of the modeling processes, legacy source systems, fragmented data, error-prone manual processes and a lack of data standardization lead to problems within actuarial organizations. Analyzing actuarial processes is quite complex due to the interdependencies and relationships of subtasks and files. With advancements in the field of artificial intelligence (AI) and machine learning (ML), copious amounts of data can be processed quite efficiently to identify hidden patterns. Network analysis is widely used in other domains to analyze different elements of a network. Within insurance, it can be applied for fraud detection and marketing. This article describes an approach in which network analysis is leveraged for actuarial process transformation.

A Coming Science: Graphs and Network Analysis

Graph and network analysis helps organizations gain a deep understanding of their data flows, process roadblocks and other trends and patterns. The first step for graph and network analysis involves using tools to develop visual representations of data to better understand the data. The next step consists of acting on this data, typically by carefully analyzing graph network parameters such as centrality, traversal and cycles.

A graph is a data structure used to show pairwise relationships between entities. It consists of a set of vertices (V) and a set of edges (E). The vertices of a graph represent entities, such as persons, items and files, and the edges represent relationships among vertices. 

Graphs can be directed or undirected. An undirected graph (Figure 1) is where there is a symmetric relationship between nodes (A to B implies B to A), whereas a directed graph (Figure 2) is asymmetric. In the case of process improvements, the dependencies of one task or file with the others in the process need to be modeled. The relationship is asymmetric, and therefore should be modeled through a directed graph. 
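The distinction can be sketched in a few lines of Python. The file names and the plain adjacency-dict representation below are illustrative assumptions, not a prescribed toolset:

```python
# Model file dependencies as a directed graph using a plain
# adjacency dict. An edge (A, B) means "B reads data from A".
# The file names are invented for illustration.
edges = [
    ("policy_extract.csv", "lapse_calc.xlsx"),
    ("assumption_table.csv", "lapse_calc.xlsx"),
    ("lapse_calc.xlsx", "reserve_summary.xlsx"),
]

graph = {}
for src, dst in edges:
    graph.setdefault(src, []).append(dst)
    graph.setdefault(dst, [])

# The relationship is asymmetric: the reverse edge is absent.
print("lapse_calc.xlsx" in graph["policy_extract.csv"])   # True
print("policy_extract.csv" in graph["lapse_calc.xlsx"])   # False
```

Because only the forward edges exist, any analysis built on this structure automatically respects the direction of the data flow.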

See also: Big Changes Coming for Workers’ Comp

Network Analysis Basics and Process Improvements

Graphs provide a better way of dealing with the dependencies in the various data files, data systems and processes. Once any process is represented as a graph, there are multiple operations and analyses that can be performed. For instance, influencer nodes can be easily identified using centrality measures. Similarly, cycles, cliques and paths can be traced along the network to optimize flow. Network analysis helps assess the current state of processes to identify gaps or redundancies and determine which processes provide maximum value. 

Three key analyses are the most important in any process improvement framework:

  1. Identifying process and data nodes that are crucial in the network 
  2. Tracing from the input to the output in the processes to identify touchpoints
  3. Identifying cyclical references and dependencies in the network and making the flow linear

1. Influential Nodes: Centrality

Centrality measures the influence of a node in a network. As a node’s influence can be viewed differently, the right choice of centrality measures will depend on the problem statement. 

  • Degree Centrality: Degree centrality measures influence based on the number of incoming and outgoing connections of a node. For a directed network, this can be further broken down into in-degree centrality for incoming connections, and out-degree centrality for outgoing connections.
  • Betweenness Centrality: Betweenness centrality measures the influence of a node over the information flow of a network. It assumes that information flows along the shortest paths and captures the number of times a particular node appears on those paths.

These different centrality measures can be used to derive insights about a network. While degree centrality defines strength as the number of neighbors, betweenness centrality defines strength as control over the information passing between other nodes through the node in question. Nodes that score highly on both measures are the influential nodes in the network.
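As a rough sketch of how both measures might be computed over a toy file-dependency network (the graph is invented, and the brute-force shortest-path enumeration is only practical for small networks; a production tool would use an efficient algorithm such as Brandes'):

```python
from collections import deque
from itertools import product

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

# Degree centrality for a directed graph: out-degree from the
# adjacency lists, in-degree by counting incoming edges.
out_deg = {n: len(targets) for n, targets in graph.items()}
in_deg = {n: 0 for n in graph}
for targets in graph.values():
    for n in targets:
        in_deg[n] += 1

def shortest_paths(g, s, t):
    """All shortest paths from s to t (breadth-first, so the first
    arrival at t fixes the minimal length)."""
    paths, best, queue = [], None, deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in g.get(node, []):
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# Betweenness: fraction of shortest s-t paths passing through each node.
betweenness = {n: 0.0 for n in graph}
for s, t in product(graph, repeat=2):
    if s == t:
        continue
    paths = shortest_paths(graph, s, t)
    for path in paths:
        for v in path[1:-1]:
            betweenness[v] += 1 / len(paths)

print(in_deg["D"], betweenness["D"])  # D is the choke point: 2 3.0
```

Here node D has only moderate degree but the highest betweenness, which is exactly the kind of hidden bottleneck file that degree counts alone would miss.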

2. Graph Traversal

Graph traversals are used to understand the flow within the network. They are used to search for nodes within a network by passing through each of the nodes of the graph. Traversals can be made to identify the shortest path or to search for connected vertices in a graph. The latter is of particular importance for making actuarial process improvements. Understanding the path of data throughout the process can help evaluate the process holistically and identify improvement opportunities.
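For example, a breadth-first traversal can answer the impact question "which files are affected if this input changes?". A minimal sketch, with an invented toy graph:

```python
from collections import deque

def downstream(graph, start):
    # Breadth-first traversal collecting every node reachable from
    # `start` - i.e., every file that directly or indirectly
    # consumes its data.
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

g = {"input": ["calc"], "calc": ["summary", "check"],
     "summary": ["report"], "check": [], "report": []}
print(sorted(downstream(g, "input")))  # ['calc', 'check', 'report', 'summary']
```

Running the same traversal backward over reversed edges would give the lineage of an output, which is the other half of the impact analysis.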

3. Cliques and Cycles

A clique is a set of vertices in an undirected graph where every two distinct vertices are connected to each other. Cliques are used to find communities in a network and have varied applications in social network analysis, bioinformatics and other areas. For process improvement, cliques find an application in identifying local communities of processes and data. For directed graphs, finding cycles is of great importance in process improvement, as insights mined from investigating cyclical dependencies can be quite useful.
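A standard way to surface cyclical dependencies in a directed graph is a depth-first search that tracks which nodes sit on the current path. A sketch, with invented file names:

```python
def find_cycle(graph):
    # Depth-first search with three states per node: 0 = unvisited,
    # 1 = on the current path, 2 = fully explored. An edge back to
    # a state-1 node closes a cycle.
    state = {}
    def dfs(node, path):
        state[node] = 1
        for nxt in graph.get(node, []):
            if state.get(nxt, 0) == 1:
                return path[path.index(nxt):] + [nxt]
            if state.get(nxt, 0) == 0:
                cycle = dfs(nxt, path + [nxt])
                if cycle:
                    return cycle
        state[node] = 2
        return None
    for node in graph:
        if state.get(node, 0) == 0:
            cycle = dfs(node, [node])
            if cycle:
                return cycle
    return None

# Three files feeding each other in a loop - a cyclical dependency.
g = {"calc": ["adjust"], "adjust": ["override"], "override": ["calc"],
     "report": []}
print(find_cycle(g))  # e.g. ['calc', 'adjust', 'override', 'calc']
```

Each cycle found this way is a candidate for making the flow linear, as described later in the case example.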

Step Approach for an Actuarial Transformation Using Graph Theory

1. Understanding the Scope of Transformation

Understanding the scope of transformation is of key importance. The number of output touchpoints and files used by the organization is often significantly smaller than the number of files produced. Moreover, due to evolving regulations, actuarial processes can undergo changes. Some of the key questions to answer at this stage include: 

  • Which processes are in the scope of the transformation?
  • Will these processes undergo changes in the near future due to regulations (US GAAP LDTI/IFRS 17)? 
  • Are all the tasks and files for the chosen process actually required, or is there a scope for rationalization?

2. Understanding Data Flow

Once the scope of the transformation is defined, data dependencies need to be traced. Excel links, database queries and existing data models need to be analyzed. In some cases, manually copying and pasting the data creates breaks in the data flow. In such cases, the analyst needs to fill in the gaps and create the end-to-end flow of the data. Some key aspects to consider at this stage are: 

  • What are the data dependencies in the process?
  • Are there breaks in the data flow due to manual adjustment?
  • What are the inputs, outputs and intermediate files? 

3. Implementing the Network of Files

After mapping the data flow, the graph network can be constructed. The network can then be analyzed to identify potential opportunities, identify key files, make data flows linear and create the goal state for the process. The key analyses to perform at this stage are:

  • Identifying important nodes in the network through degree measures
  • Capturing redundant intermediate files in the system
  • Capturing cyclical references and patterns in the process
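The second analysis, spotting redundant intermediates, can be approximated by flagging files with exactly one producer and one consumer; such pure pass-throughs are candidates for rationalization (a heuristic sketch with invented file names, to be confirmed with business users):

```python
def staging_candidates(graph):
    # Flag nodes with in-degree 1 and out-degree 1: a file that takes
    # data from exactly one place and hands it to exactly one place
    # is often a pure staging step adding little value.
    in_deg = {n: 0 for n in graph}
    for targets in graph.values():
        for n in targets:
            in_deg[n] = in_deg.get(n, 0) + 1
    return [n for n in graph
            if in_deg.get(n, 0) == 1 and len(graph.get(n, [])) == 1]

g = {"extract.csv": ["stage.xlsx"],         # input
     "stage.xlsx": ["valuation.xlsx"],      # pass-through candidate
     "valuation.xlsx": ["report.xlsx", "checks.xlsx"],
     "report.xlsx": [], "checks.xlsx": []}
print(staging_candidates(g))  # ['stage.xlsx']
```

The heuristic is deliberately conservative: files with multiple consumers or embedded checks fall outside it and need case-by-case review.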

Based on the analysis of the network, bottlenecks and inconsistencies can be easily identified. This information can lead to process reengineering and end-to-end data-based process transformation. The results can be validated with business users, and changes can be made. The figures below show some of the patterns that can be captured using network graphs. The input, intermediate and output nodes are color-coded as blue, grey and red respectively.  

The Benefits of Actuarial Process Transformation Using Graph Theory

Due to the inherent complexity of actuarial processes, decomposing process and data flows can be difficult. While analyzing any actuarial sub-process at the lowest level of granularity, it is quite possible to discover multiple related files with lots of related calculations. Moreover, a major challenge quite common in actuarial processes is a lack of data documentation. Graph theory enables insurers to overcome these challenges: 

  • Creating a Data Lineage From Source System to Output: Graph networks help improve the quality of data feeding into subsequent sub-processes. This benefits actuaries, as higher-quality data produces better models regardless of the techniques being employed.
  • Improved Visualization and Bottleneck Identification: Graph networks help visualize the relationships between various databases. The networks also help build a foundation for a data factory that creates a 360-degree view of useful information, enables data visualization and supports future self-service analytics. Moreover, several analyses can identify process bottlenecks that can be investigated further.
  • Enabling Flexibility and Governance: On the surface, flexibility and governance may sound like competing priorities. Increased flexibility makes it difficult to control what is happening in the process and leads to increased security risks. However, graph theory helps manage governance by visualizing complicated data relationships and helps in maintaining data integrity. 
  • Speed of Analysis: Traditionally, most of the time spent producing models is used to gather, clean and manipulate data. Graph theory helps in deriving dependencies, enabling efficient processes and providing quicker results for a given problem. Graph theory can be used to rationalize non-value-adding files or processes, leading to streamlined and automated process flows. By linking the data elements from outputs to source systems, organizations can analyze processes in depth through back propagation. 

Case Example

A major life insurance player in the U.S. engaged EXL to examine its annuities valuation process and identify process improvement opportunities. There were multiple interfaces in the annuities valuation process, and many stakeholders were involved. Regulatory frameworks, a high number of touchpoints, actuarial judgment and manual adjustments made the annuities valuation process complex. Moreover, the client had multiple source systems from which data were pulled. Data came to the actuarial team through SQL servers, data warehouses, Excel, Access databases and flat files. As a result of the data fragmentation, a significant amount of effort was spent on data reconciliation, data validation and data pulls. While some aspects of these steps were automated, many of the processes were manually intensive, wasting actuarial bandwidth. 

EXL deployed a two-speed approach, tackling the problem from a short-term local optimization as well as from a long-term process improvement perspective. The local optimization approach focused on understanding the standard operating procedures for the individual tasks to automate the manual efforts. These optimizations generated quick wins but did not address the overall efficiency and improvement goals per se. 

See also: The Data Journey Into the New Normal

Knowing that there was a possibility of finding multiple tasks that could be rationalized, EXL prioritized and balanced the local and long-term improvements. This included speaking with multiple stakeholders to identify the regulatory GAAP processes for deferred annuities that needed long-term focus, and which other processes could be addressed through local optimization. 

For the deferred annuities GAAP process, EXL leveraged network analysis to analyze the file dependencies. Each of the hundreds of process files and tasks was categorized as a pure input, an output or an intermediate. These files were modeled as nodes in the network, while the data flows were modeled as edges. To capture the data linkages, a Visual Basic macro-based tool was deployed that automatically identified the Excel links and formulae to capture dependencies. Centrality measures were calculated for each of the files and then attached to the node attributes. The centrality measures revealed important sub-processes and communities of files. For example, the topside sub-processes ingested more than 20 files and were high on degree centrality, as were the annual reporting sub-processes. 

The team also found 11 avoidable cyclical references for data flows. These data flows were made linear to create the goal process state. Moreover, some of the intermediate files were observed to be merely staging the data. These files had basic data checks embedded but did not add much value, so they were rationalized. Network analysis helped in providing an understanding of the data flows and creating the to-be state for process improvement. Moreover, the time required to analyze hundreds of tasks and files was reduced significantly. The team identified a more than 30% reduction in effort through a combination of automation and data-based solutions.

Actuaries Beware: Pricing Cyber Risk Is a Different Ballgame

Growth in the cyber insurance market has recently occurred at warp speed, with more than 60 companies writing in the U.S. alone and with market premiums amounting to approximately $2.5 billion annually. The impressive year-over-year growth is expected to continue into the foreseeable future, with a variety of estimates placing market premium between $7.5 billion and $20 billion by the end of 2020.

This impressive premium growth is driven by several factors — perhaps most notably, reporting of the various types of cyber attacks in the news on a regular basis, driving both awareness and fear. Not surprisingly, cyber risk has become a board-level concern in today’s increasingly connected world. Additionally, the recent growth of the Internet of Things has given rise to a seemingly infinite number of attack vectors affecting every industry. Individuals and entities of any size, spanning all regions of the world, are potential victims.

The appetite for new apps and devices that link to one another, without a corresponding focus on the security of those apps or devices, gives reason to worry. It also creates an immediate need for a suite of security analytics products that helps insurance companies write cyber insurance more confidently.

State of Data

Actuaries are creative and intelligent problem solvers, but this creativity and intelligence is tested thoroughly when pricing cyber insurance. Actuaries still need the same suite of products used within any other catastrophe-exposed lines of business, but there are many challenges and complications with respect to cyber insurance that make this a particularly difficult task. That is, we still need an underwriting tool, an individual risk-pricing tool and a catastrophe-aggregation model, but certain aspects of these tools vary significantly from what we’ve seen in the past or have grown accustomed to as actuaries.

Data lies at the center of any actuarial project, but data in this space is very limited for a number of reasons. To consider why this is the case, let’s take a step back and consider the wider context. We first want to think about both how to define the cyber peril and what types of attacks are possible.

Risks could lie anywhere between smaller attacks on individuals involving brute-force attempts to steal credentials and conduct identity theft; and state-sponsored attacks on another government entity involving both physical damage and theft of critically sensitive intelligence. We may see malware deployed on a commonly used piece of software or hardware at a massive scale; infrastructures or processes taken down using denial of service; or a breach of a popular database or platform that affects many entities simultaneously.

Many of the attack variants in this hypothetical list have never happened, and some may never happen. Even within those that have happened, information pertaining to the breach — both in terms of the attack specifics used or the actual dollar impact of the attack — is hard to come by.

Several third-party data sources are currently available, but they tend to concentrate primarily on those pieces of data or attack types that are most accessible — particularly data breach and privacy violation claims. This, naturally, is a very small subset of what we need to price for as actuaries.

Unfortunately, regulation around the reporting of different types of attacks is fairly loose. Even within the data breach family, there is a tremendous lack of standardization across states with respect to reporting. Criteria for whether a report is required may include whether the data is encrypted, how many people were actually affected by the breach and the type of data stolen (PHI, PII, PCI, etc.).

See also: How Actuaries Can Be Faster, More Efficient  

External research can be done on public sources to find the aggregate amount of loss in some cases, but there is little to no incentive for the breached entity to provide more information than is absolutely required. Thus, while we want to price data breach events at a very granular level, it’s often difficult to obtain dollar figures at this level. For instance, a data breach will lead to several costs, both first party and third party. A breached entity, at minimum, will likely have to:

  • Notify affected customers;
  • Offer credit monitoring or identity-theft protection to those affected;
  • Work with credit card companies to issue new credit cards;
  • Foot bills associated with legal liability and regulatory fines; and
  • Endure reputational damage.

It’s unrealistic to expect that a breached entity would find it attractive to publicize the amount lost to each of these individual buckets.
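Even so, the bucket structure suggests how a granular price might be assembled if the data existed. A toy roll-up follows, in which every number is invented purely for illustration and is not an industry benchmark:

```python
# Hypothetical breach of 50,000 records; all unit costs below are
# invented assumptions for this sketch, not market figures.
records = 50_000
per_record = {
    "notification": 1.50,
    "credit_monitoring": 10.00,
    "card_reissuance": 4.00,
}
fixed = {
    "legal_liability_and_fines": 250_000,  # hypothetical flat estimate
}
total = records * sum(per_record.values()) + sum(fixed.values())
print(f"${total:,.0f}")  # reputational damage is left unquantified
```

In practice, each bucket would need its own frequency and severity distribution; the point of the sketch is only that pricing at this granularity requires dollar figures that breached entities rarely disclose.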

Worse, other events that either don’t require reporting or have never happened clearly give us even less to work with. In these cases, it’s absolutely critical that we creatively use the best resources available. This approach requires a blend of insurance expertise, industry-specific knowledge and cyber security competence. While regulation will continue to grow and evolve — we may even see standardization across both insurance coverages offered and reporting requirements by state or country — we must assume that in the near future, our data will be imperfect.

Actuarial Challenges

Though many companies have entered the cyber insurance space, very few are backed by comprehensive analytics. Insurers eager to grab market share are placing too much emphasis on the possibility of recent line profitability continuing into the future.

The problem here is obvious: Cyber insurance needs to be priced at a low loss ratio because of catastrophic or aggregation risk. Once the wave of profitability ends, it could do so in dramatic fashion that proves devastating for many market participants. The risk is simply not well understood across the entirety of the market, and big data analytics is not being leveraged enough. In addition to the glaring data and standardization issues already discussed, actuaries face the following eight key challenges:

1. No Geographical Limitation

On the surface, the cyber realm poses threats vastly different from what we’ve seen in other lines of business. Take geography. We are used to thinking about the impact of geography as it pertains to policyholder concentration within a specific region. It’s well understood that, within commercial property insurance, writers should be careful with respect to how much premium they write along the coast of Florida, because a single large hurricane or tropical storm can otherwise have an absolutely devastating effect on a book of business. Within the cyber world, this relationship is a bit more blurry.

We can no longer just look at a map. We may insure an entity whose server in South Africa is linked to an office in Ireland, which, in turn, is linked to an office in San Francisco. As existing threat actors are able to both infiltrate a system and move within that system, the lines drawn on the map have less meaning. Not to say they’re not important — we could have regulatory requirements or data storage requirements that differ by geography in some meaningful way — but “concentration” takes a different meaning, and we need to pay close attention to the networks within a company.

2. Network Risk From an External Perspective

In the cyber insurance line, we need to pay attention to the networks external to an insured company. It’s well documented that Target’s data breach was conducted through credentials stolen from a third-party HVAC vendor. By examining Target’s internal systems alone, no one would have noticed the vulnerability that was exploited.

As underwriters and actuaries, we need to be well aware of the links from one company to another. Which companies does an insured do business with or contract work from? Just as we mentioned above with apps and devices that are linked, the network we are worried about is only as strong as the weakest link. Another example of this is the recent attacks on a Bangladeshi bank. Attackers were able to navigate through the SWIFT system by breaching a weaker-than-average security perimeter and carrying out attacks spanning multiple banks sharing the same financial network.

3. Significance of the Human Element

Another consideration and difference from the way we traditionally price is the addition of the human element. While human error has long been a part of other lines of business, we have rarely considered the impact of an active adversary on insurance prices. The one exception to this would be terrorism insurance, but mitigation of that risk has been largely assisted by TRIA/TRIPRA.

However, whenever we fix a problem simply by imposing limits, we aren’t really solving the larger problem. We are just shifting liability from one group to another; in this case, the liability is being shifted to the government. While we can take a similar approach with cyber insurance, that would mean ultimately shifting the responsibility from the insurers to the reinsurers or just back to the insureds themselves. The value of this, to society, is debatable.

See also: Cyber Insurance: Coming of Age in ’17?  

A predictive model becomes quite complex when you consider the different types of potential attackers, their capabilities and their motivations. It’s a constant game of cat and mouse, where black hat and white hat hackers are racing against each other. The problem here is that insurers and actuaries are typically neither white hat nor black hat hackers and don’t have the necessary cyber expertise to confidently predict loss propensity.

4. Correlation of Attacks

In attempting to model the “randomness” of attacks, it is important to think about how cyber attacks are publicized or reported in the news, about the reactions to those attacks and the implications for future attacks. In other words, we now have the issue of correlation across a number of factors. If Company A is breached by Person B, we have to ask ourselves a few questions. Will Company A be breached by Person C? Will Person B breach another company similar to or different from Company A? Will Person D steal Person B’s algorithm and use it on an entirely different entity (after all, we’ve seen similar surge attacks within families such as ransomware)? If you as the reader know the answers to these questions, please email me after reading this paper.
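To see why this correlation matters for pricing, here is a minimal Monte Carlo sketch of a common-shock model, in which a shared event (say, a widely reused exploit or ransomware family) raises every insured's breach probability in the same year. All probabilities and parameters are hypothetical, not drawn from any real portfolio:

```python
import random

def simulate_portfolio_breaches(n_insureds=500, n_years=2000,
                                base_prob=0.02, shock_prob=0.05,
                                shock_extra=0.10, seed=42):
    """Monte Carlo sketch of a common-shock breach model.

    Each simulated year, every insured has an independent baseline
    breach probability; with probability shock_prob, a shared event
    (e.g. a widely reused exploit) raises everyone's probability by
    shock_extra, inducing correlation across the portfolio.
    """
    rng = random.Random(seed)
    yearly_counts = []
    for _ in range(n_years):
        shock = rng.random() < shock_prob
        p = base_prob + (shock_extra if shock else 0.0)
        yearly_counts.append(sum(rng.random() < p for _ in range(n_insureds)))
    mean = sum(yearly_counts) / n_years
    var = sum((c - mean) ** 2 for c in yearly_counts) / n_years
    return mean, var

mean, var = simulate_portfolio_breaches()
# Under independence, annual breach counts would be roughly binomial
# (variance close to the mean); the common shock inflates the variance
# well beyond the mean, which is precisely the aggregation risk that
# makes cyber pricing harder than independent-frequency lines.
print(mean, var)
```

Even this toy version shows how a modest shared-event probability fattens the tail of annual loss counts relative to an independence assumption.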

5. Actuarial Paradox

We also have to consider the implications for the security posture of the affected entity itself. Does the attack make the perimeter of the affected company weaker, therefore creating additional vulnerability to future attacks? Or, alternatively, does the affected company enact a very strong counterpunch that makes it less prone to being breached or attacked in the future? If so, this poses an interesting actuarial dilemma.

Specifically, if a company gets breached, and that company has a very strong counterpunch, can we potentially say that a breached company is a better risk going forward? Then, the even-more-direct question, which will surely face resistance, is: Can we charge a lower actuarial premium for companies that have been breached in the past, knowing that their response to past events has actually made them safer risks? This flies directly in the face of everything we’ve done within other lines of business, but it could make intuitive sense depending on incident response efforts put forth by the company in the event of breach or attack.

6. Definition of a Cyber Catastrophe

Even something as simple as the definition of a catastrophe is in play. Within some other lines of insurance business, we’re used to thinking about an aggregate industry dollar threshold that helps determine whether an incident is categorized as a catastrophe. Within cyber, that may not work well. For instance, consider an attack on a single entity that provides a service for many other entities. It’s possible that, in the event of a breach, all of the liability falls on that single affected entity. The global economic impact as it pertains to dollars could be astronomical, but it’s not truly an aggregation event that we need to concern ourselves with from a catastrophe modeling perspective, particularly because policy limits will come into play in this scenario.

We need to focus on those events that affect multiple companies at the same time and, therefore, provide potential aggregation risk across the set of insureds in a given insurance company’s portfolio. This is, ultimately, the most complicated issue we’re trying to solve. Tying together a few of the related challenges: How are the risks in our portfolio connected with each other, now that we can’t purely rely on geography? Having analytical tools available to help diagnose these correlations and the potential impacts of different types of cyber attacks will dramatically help insurers write cyber insurance effectively and confidently, while capturing the human element aspect of the threats posed.
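To make the shared-dependency idea concrete, here is a small sketch that totals policy limits over the third-party services a portfolio's insureds rely on, giving a crude upper bound on single-event aggregation. The company names, service names and limits are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical portfolio: each insured mapped to the third-party
# services it depends on (cloud hosts, payment processors, etc.).
portfolio = {
    "RetailCo": {"CloudX", "PayGate"},
    "BankCo":   {"CloudX", "SwiftHub"},
    "ClinicCo": {"MedSoft"},
    "ShopCo":   {"PayGate"},
}

def aggregation_exposure(portfolio, limits):
    """For each shared service, total the policy limits of every
    insured that depends on it: a rough ceiling on the portfolio loss
    from a single event taking that service down."""
    exposure = defaultdict(float)
    for insured, services in portfolio.items():
        for svc in services:
            exposure[svc] += limits[insured]
    return dict(exposure)

limits = {"RetailCo": 5e6, "BankCo": 10e6, "ClinicCo": 2e6, "ShopCo": 1e6}
exp = aggregation_exposure(portfolio, limits)
# CloudX aggregates RetailCo + BankCo = $15M of limit in one event,
# even though the two insureds share no geography at all.
print(sorted(exp.items(), key=lambda kv: -kv[1]))
```

A production model would obviously need event severities, sub-limits and dependency data well beyond a hand-built mapping, but the shape of the diagnostic is the same: group insureds by shared connection rather than by location.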

7. Dynamic Technology Evolution

If we can be certain of one thing, it’s that technology will not stop changing. How will modelers keep up with such a dynamic line of business? The specific threats posed change each year, forcing us to ask ourselves whether annual policies even work or how frequently we can update model estimates without annoying insurers. Just as we would write an endorsement in personal auto insurance for a new driver, should we modify premium mid-term to reflect a newly discovered specific risk to an insured? Or should we have shorter policy terms? The dynamic nature of this line forces us to rethink some of the most basic elements that we’ve gotten used to over the years.
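The mid-term mechanics the auto analogy points to are at least straightforward. Here is a hedged sketch of a pro-rata additional premium calculation; the figures are purely illustrative:

```python
def midterm_endorsement_charge(annual_premium_old, annual_premium_new,
                               days_elapsed, term_days=365):
    """Pro-rata additional premium when a newly discovered exposure
    raises the rate partway through an annual policy term."""
    remaining = (term_days - days_elapsed) / term_days
    return (annual_premium_new - annual_premium_old) * remaining

# A newly disclosed vulnerability reprices a $10,000 policy to
# $14,000 with 219 of 365 days left in the term.
extra = midterm_endorsement_charge(10_000, 14_000, days_elapsed=146)
print(extra)  # 60% of the $4,000 uplift
```

The hard part is not the arithmetic but the trigger: deciding which newly discovered threats are material enough to justify repricing an in-force cyber policy.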

8. Silent Coverage

Still, all of the above considerations only help answer the question of what the overall economic impact will be. We also need to consider how insurance terms and conditions, as well as exclusions, apply to determine the total insurable cost across different lines of insurance. Certain types of events are more insurable than others. We have to consider how waivers of liability will be interpreted judicially, as well as the interplay of multiple lines of business.

It’s safe to assume that insurance policy language written decades ago did not place much emphasis on cyber exposure arising from a given product. In many cases, silent coverage of these types of perils was entirely accidental. Still, insurers are coming to grips with the fact that this is an ever-increasing peril that needs to be specifically addressed and that there exists significant overlap across multiple lines of business. Exclusions or specific policy language can, in some cases, be a bit sloppy, leading to confusion about which product actually covers a given attack. This becomes the last, but not least, problem we have to answer.


The emerging trends in cyber insurance raise a number of unique challenges and have forced us to reconsider how we think about underwriting, pricing and aggregation risk. We can no longer pinpoint our insureds on a map and know how an incident will affect the book of business. We need to think about both internal and external connections to an insured entity and about the correlations that exist between event types, threat actors and attack victims. In cases when an entity is attacked, we need to pay particular attention to the response and counterpunch.

As the cyber insurance market continues to grow, we will be better able to determine whether loss dollars tend to fall neatly within an increasing number of standalone cyber offerings or whether insurers will push these cyber coverages into existing lines of business such as general liability, directors and officers, workers’ compensation or other lines.

Actuaries and underwriters will need to overcome the lack of quality historical data by pairing the claims data that does exist with predictive product telemetry data and expert insight spanning the insurance, cyber security and technology fields. Over time, this effort may be assisted as legislation or widely accepted model schema move us toward a world with standardized language and coverage options. Nonetheless, the dynamic nature of the risk, with new adversaries, technologies and attack vectors emerging on a regular basis, will require continually monitored approaches.

See also: Another Reason to Consider Cyber Insurance  

In addition, those who create new technology need to recognize the importance of security in the rush to get new products to market. White hat hackers will have to work diligently to outpace black hat hackers, while actuaries use that insight to keep threat actor models current, at a speed the traditional insurance market has never had to match.

Some of these challenges may prove easier than they appear on paper, while others may prove far more complicated. We know actuaries are good problem solvers, but this test is a serious and very important one that will need to be solved in partnership with individuals from the cyber security and insurance industries.

Where Is the Real Home for Analytics?

One of the fascinating aspects of technology consulting is having the opportunity to see how different organizations address the same issues. These days, analytics is a superb example. Even though every organization needs analytics, they are not all coming to the same conclusions about where “Analytics Central” lies within the company’s structure. In some carriers, marketing picked up the baton first. In others, actuaries have naturally been involved and still are. In a few cases, data science started in IT, with data managers and analytical types offering their services to the company as an internal partner, modeled after most other IT services.

In several situations that we’ve seen, there is no Analytics Central at all. A decentralized view of analytics has grown up in the void – so that every area needing analytics fends for itself. There are a host of reasons this becomes impractical, so often we find these organizations seeking assistance in developing an enterprise plan for data and analytics. This plan accounts for more than just technology modernization and nearly always requires some fresh sketches on the org chart.

Whichever situation may represent the analytics picture in your company, it’s important to note that no matter where analytics begins or where it currently resides, that location isn’t always where it is going to end up.

Ten years ago, if you had asked any senior executive where data analytics would reside within the organization, he or she would likely have said, “actuarial.” Actuaries are, after all, the original insurance analytics experts and providers. Operational reporting, statistical modeling, mortality on the life side and pricing and loss development on the P&C side – all of these functions are the lifeblood that keep insurers profitable with the proper level of risk and the correct assumptions for new business. Why wouldn’t actuaries also be the ones to carry the new data analytics forward with the right assumptions and the proper use of data?

Yet, when I was invited to speak at a big data and analytics conference with more than 100 insurance executives and interested parties recently, there was not one actuary in attendance. I don’t know why — maybe because it was quarter-end — but I can only assume that, even though actuaries may want to be involved, their day jobs get in the way. Quarterly reserve reviews, important loss development analysis and price adequacy studies can already consume more time than actuaries have. In many organizations, the actuarial teams are stretched so thin they simply don’t have the bandwidth to participate in modeling efforts with unclear benefits.

Then there is marketing. One could argue that marketing has the most to gain from housing the new corps of data scientists. If one looks at analytics from an organizational/financial perspective, marketing ROI could be the fuel for funding the new tools and resources that will grow top-line premium. Marketing also makes sense from a cultural perspective. It is the one area of the insurance organization that is already used to blending the creative with the analytical, understanding the value of testing methods and messages and even the ancillary need to provide feedback visually.

The list of possibilities can go on and on. One could make a case for placing analytics in the business, keeping it under IT, employing an out-of-house partner solution, etc. There are many good reasons for all of these, but I suspect that most analytics functions will end up in a structure all their own. That’s where we’ll begin “Where is the Real Home for Analytics, Part II” in two weeks.

How Risk Management Drives up Profits

Diane Meyers, director of corporate insurance for YRC Worldwide, manages the insurance and associated risks of one of the most hazard-prone industries in the world – trucking. YRC is the largest long-haul trucking company in the U.S., operating in all 50 states and Canada. It has 14,500 tractors and 46,500 trailers and ships 70% of all transported cargo throughout the U.S. each year. YRC’s origins trace back to 1924 to the Akron, Ohio-based company Yellow Cab Transit before the independent trucking companies of Yellow, Roadway, Reimer and others were combined in 2009 under the YRC banner.

I asked Diane about her biggest challenges in managing the risks associated with the YRC fleet, including 32,000-plus employees (a number that has grown in busy times to more than 50,000) and 400 physical locations. She said her top three hot buttons are: collateral, collateral and collateral.

For anyone familiar with high-deductible or self-insured workers’ comp programs, insurers and state governments rely on a company’s posted collateral (aka security deposit) as the financial backstop should the company go bankrupt or default on its obligations. Companies with high-risk jobs can experience workers’ comp costs that are easily 400% to 500% greater than for white-collar jobs. Posted collateral needs to cover the costs expected over the life of each claim and can be a huge drain for any company, including YRC.
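As a back-of-the-envelope illustration of why that drain is so large, collateral is often sized to the present value of expected future payments on open claims. The payout pattern and discount rate below are invented purely for illustration:

```python
def required_collateral(expected_payments, discount_rate=0.03):
    """Sketch: collateral as the present value of expected future
    claim payments over the remaining life of open claims.
    expected_payments[t] is the expected payout t+1 years from now."""
    return sum(p / (1 + discount_rate) ** (t + 1)
               for t, p in enumerate(expected_payments))

# Hypothetical payout pattern for a block of open workers' comp
# claims (dollars), tailing off over ten years.
pattern = [400_000, 350_000, 300_000, 250_000, 200_000,
           150_000, 100_000, 75_000, 50_000, 25_000]
pv = required_collateral(pattern)
print(round(pv))  # nominal total is $1.9M; discounting trims it
```

Because workers' comp claims can pay out for decades, even modest annual payments compound into collateral demands that tie up capital long after the underlying policy year has closed.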

Diane, who reports to the treasurer, says YRC negotiates collateral requirements with one excess workers’ comp insurer for its high-deductible program in 24 states. Collateral is typically posted using LOCs (letters of credit) or surety bonds. YRC’s self-insured program in the remaining 26 states means meeting the collateral demands of 26 separate governing entities.

Meeting with YRC’s carrier’s actuary and her own actuary every three months, Diane also has to deal with each state at least annually. “Working with multiple sets of actuaries is a whole other challenge, since I have to educate them on the realities of our own workers’ comp program and its achievements, like return-to-work,” she says. “Besides that, in working with actuaries, I have to speak their language and understand how they work their crystal ball.”

Diane added: “These are monies that are tied up for decades to come that cannot otherwise be used for our company’s operations. I have to find ways to save the company from the ever-changing collateralization demands through ongoing, complex negotiations with insurers and regulators. Safety and loss control programs have to demonstrate traction and real savings to our workers’ comp and liability exposures.” Diane noted that safety is so important that each YRC operating division has its own safety department.

As with most large companies, YRC is self-insured for most of its liability risks. To assist Diane with vehicle and general liability claims, YRC uses its own, as well as outsourced, legal counsel to manage risks up to its retention level. There are also a myriad of state and federal rules and regulations regarding long-haul trucking that require strict adherence and attention to changes.

When asked about her unique challenges at YRC, Diane said, “I have to understand the legal demands and expectations of all 50 states, Canada, and D.C.”

She also faces the complexity of working with a corporation that has grown through acquisitions of older companies. To find key claim-related data, she says, “I have had to go through various insurance policies and records of the companies we acquired going back as far as the ’60s!”

With the ever-changing demands for long-haul transportation by various industries, YRC experiences significant fluctuations in its workforce. There have been times when the workforce has expanded more than 50%, and, during recessions, there have been significant reductions. A swing either way can create huge risk management challenges, especially when there are continuing workers’ comp claims to deal with. This is made even tougher because most of YRC’s employees are in the Teamsters union, and some issues could require collective bargaining or at least close communication and cooperation between labor and management.