Standalone cyber insurance can successfully address a subset of privacy and security costs related to personally identifiable information, personal health information, payment card industry losses and, increasingly, some business interruption. However, outside of four industries (retail, hospitality, healthcare and financial institutions), generally no single insurance policy adequately covers cyber perils that result in funds transfer/crypto losses, bodily injury or tangible property damage-type losses. Organizations of all sizes, geographies and industries increasingly rely on data analytics and technology, such as cloud computing, Internet of Things and artificial intelligence. These advancements add new and unique cyber exposures. Modeling of worst-case cyber scenarios compared with a review of the scope and exclusions of the base forms of multiple lines of insurance reveals potential material gaps in cyber coverage.
[Chart: The number of cyber incidents with losses greater than $1 million (through early September 2018)]
Recognize Financial Statement Impact
According to the Risk and Insurance Management Society, organizations’ total cost of risk declined for the fourth year in a row in 2017, but cyber costs moved in the opposite direction, rising 33%. Most boards of directors and management now include cyber perils and solutions in corporate governance discussions as they learn more regarding the potential financial statement impact of high-profile cyber incidents. Yet, organizations only insure a relatively small portion of their intangible assets compared with insurance coverage for legacy tangible assets.
Prudent organizations will spend the appropriate amount of time and resources on the risk management areas that are likely to have the greatest return on investment. For example, a disproportionate amount of attention is focused on cryptocurrency exposures, which affect a relatively small proportion of the corporate insurance-buying population and account for comparatively modest monetary losses. Such exposures are generally excluded from standalone cyber insurance policies.
Almost every large organization and most middle-size organizations will have some reliance on distributed ledger technology within the next few years – either directly or via one of their third-party suppliers, distributors, vendors, partners or customers. It is important for organizations to educate and prepare themselves:
1. Understand the intended scope of standalone cyber and professional liability insurance policies
Typical standalone cyber insurance policies specifically exclude funds transfers, crypto transfers and other cash and securities monetary losses. Crime policies are intended to address fund losses under specified circumstances. Similarly, payment diversion fraud coverage for “spoofing,” “phishing” and other social engineering incidents is generally excluded under cyber policies but possibly covered under crime policies.
However, two federal appellate courts recently ruled that policyholders are entitled to crime insurance coverage for losses arising from social engineering schemes.
July 2018: Facebook investors filed two different securities lawsuits: (1) the first based on the Cambridge Analytica user data incident; and (2) the second following Facebook’s lower-than-expected quarterly earnings release due to lower growth rate caused in part by allegedly unanticipated expenses and difficulties in complying with the European Union General Data Protection Regulation (“GDPR”).
Aug. 8, 2018: Securities class action litigation was filed against a publicly reporting media performance ratings company after it disclosed in its quarterly earnings release that GDPR-related changes affected the company’s growth rate, pressured the company’s partners and clients and disrupted the company’s advertising “ecosystem.”
Typical professional liability and cyber policies also specifically exclude shareholder derivative, securities and similar fiduciary liability litigation. A well-crafted directors and officers insurance policy is recommended to provide certain defense and indemnity coverage for such claims.
Absent extensive policy wording customization, the typical cyber insurance policy specifically excludes all bodily injuries and tangible property damage – both first-party tangible property damage (the insured’s own property) and third-party tangible property damage (property owned by someone other than the insured).
2. Silent and affirmative cyber coverage under other lines of insurance
When cyber exposure losses first emerged, insurers had not priced cyber risks into their broadly worded legacy policies, such as property and general liability. However, absent specific cyber exclusions, such as the CL 380 Cyber Exclusion, it is possible that legacy property, general liability, environmental, product recall, marine and aviation policies could inadvertently cover unintended cyber perils – hence the term “silent” cyber insurance coverage.
After making their first unintended cyber claims payments, some insurers, but not yet all, either exclude or sub-limit cyber risk from new standard policies and renewals. Granting affirmative full cyber limits coverage for an additional premium in such legacy policies is rare and slow to develop. Silent cyber coverage remains. In fact, according to multiple large insurance companies, the 2017 total amount of cyber-related business interruption claims payments was greater under property insurance policies than under standalone cyber policies.
Furthermore, aggregated/correlated/systemic cyber exposures have the potential to cause damages that are multiples of any loss seen to date (e.g., 10,000 customers of a single cloud provider, or of an energy/power/utility provider). Catastrophe modeling for aggregated/correlated/systemic cyber risk is in its infancy. Innovative approaches for assisting insurers concerned about aggregated, clash incidents – two different policies covering the same cyber peril – and silent cyber exposures are starting to emerge.
To achieve cyber resiliency, consider cyber as a peril rather than as a standalone insurance policy. Assess, test, improve, quantify, transfer and respond to the larger cyber risk management issues based on a cost-benefit analysis of resource allocation. Insurance is complementary to a robust cyber resiliency risk management approach. Each organization should identify and protect its critical intangible assets and balance sheet by aligning the cyber enterprise risk management strategy with corporate culture and risk tolerance.
All descriptions, summaries or highlights of coverage are for general informational purposes only and do not amend, alter or modify the actual terms or conditions of any insurance policy. Coverage is governed only by the terms and conditions of the relevant policy. If you have any questions about your specific coverage or are interested in obtaining coverage, please contact your Aon broker. For general questions about cyber insurance, contact: Stephanie Snyder at email@example.com.
Over the last 10 years of the “risk leader” portion of my career, as the head of enterprise risk management at USAA (2001-10), as well as during my subsequent work as an ERM consultant, I was challenged by several questions that affect risk management results and, by extension, ultimate success. All fell under the header of “risk management maturity,” and focusing on it can provide huge benefits to you and to your organization.
To start, we need to get two things straight. First, how are you defining “risk,” and have you driven a consensus among key stakeholders about that definition? Second, which risks are you going to manage, and where on the loss curve do they fall?
These questions may sound simple, but the reality is that many risk leaders have responsibilities for only a portion of the risks that organizations face — often, only the insurable risks. If that’s the case, you have your answer to both questions nailed.
If, on the other hand, you are a risk leader with broader accountability for more or all risks (via enterprise risk management, or ERM) that could affect an organization (both negatively and positively), then the first question — “how does your firm define risk?” — requires clear definition. The most commonly accepted definition of risk is “uncertainty.” I like this simple definition, and it captures the most central element of concern. However, the real challenge remains the question about the level of uncertainty (aka frequency/likelihood). To many, even more important is the level of impact or severity. My favorite chart to help illustrate this concept is one where the “tail” of the loss distribution represents where the proverbial “black swans” live.
A typical loss curve has as its peak the expected level of loss, and the black swan sits out on the tail of this curve, where the x-axis is the impact or severity of loss and the y-axis is the frequency or likelihood of loss. While many hazard-focused leaders put their attention on risks at the expected level, or to the left along the x-axis where certainty of loss rises, the challenge is deciding where in the region to the right of the peak one should be managing. While the possibility of loss becomes increasingly remote as you move out toward the tail of the curve, the impact of events becomes more destructive. Key questions that must be answered include:
Do we care more about likelihood or impact, or are they equal?
What level of investigation do we apply to risks that are remotely likely?
How do we apply limited resources to risks that are remotely likely?
Do we have a consensus among key stakeholders as to what risks we should focus on and how?
Do we have, or do we need, a process to manage emerging risks?
Do we have a consensus on and clear understanding of how we define risk in our organization?
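The loss-curve discussion above can be made concrete with a quick simulation. The sketch below (all figures hypothetical, and the lognormal severity assumption is mine, chosen only because it is a common heavy-tailed choice for loss severity) compares the expected loss with a far-tail percentile, showing how "black swan" outcomes dwarf the expected case:

```python
import random
import math
import statistics

random.seed(42)

# Hypothetical loss severity: lognormal with a median loss of $1 million
mu, sigma = math.log(1_000_000), 1.5
losses = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

expected_loss = statistics.mean(losses)               # the "peak" region of the curve
tail_loss = sorted(losses)[int(0.999 * len(losses))]  # 99.9th percentile, out on the tail

print(f"Expected loss:     ${expected_loss:,.0f}")
print(f"99.9th percentile: ${tail_loss:,.0f}")
print(f"Tail / expected:   {tail_loss / expected_loss:.1f}x")
```

Under these assumptions the tail loss comes out at a large multiple of the expected loss, which is exactly the region where the likelihood-versus-impact questions above become hardest to answer.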
These issues are the starting point to the risk management maturity question, which, if handled well, facilitates organizational success. From these answers, you can chart your course for your firm. The answers will define the process elements of maturity. But we need to define what risk maturity is to track progress toward it and to ensure that stakeholders are aligned around the chosen components.
The various components among the numerous risk maturity models tend to overlap considerably. Here’s one generic set of attributes of maturity:
Risk is managed to specifically defined appetite and tolerances
There is management support for the defined risk culture and direct ties to the corporate culture
A disciplined risk process is aligned with other functional areas
There is a process for uncovering the unknown or poorly understood risks
Risk is effectively analyzed and measured both quantitatively and qualitatively
There is collaboration on a resilient and sustainable enterprise
The first, and I think most thoroughly developed, model comes from the Risk and Insurance Management Society (RIMS). It was developed some 10 years ago but remains, in my opinion, a simple yet comprehensive view of the seven most important factors that inform risk maturity and that, when well implemented, should drive an effective approach to managing any risk within your purview.
The components of the RIMS model include a focus on:
The degree to which an enterprise-wide approach is supported by executive management and is aligned with other relevant functions
The degree to which repeatable and scalable process is integrated in the business and culture
The degree of accountability for managing risk to a detailed appetite and tolerance strategy
The degree of discipline applied to using the elements of good root-cause analysis
The degree to which a robust emerging risk process is used to uncover uncertainties to achieving goals
The degree to which the vision and strategy are executed considering risk and risk management
The degree to which resiliency and sustainability are integrated between operational planning and risk process
As with all risk management strategies (no two of which I’ve seen are exactly the same), there is no one way to accomplish maturity. Every risk leader needs to do for her organization what the organization needs and will support.
Another maturity model that is worthy of note is the Aon model. Like RIMS’ model, it enables multiple levels of maturity and methodology for charting progress toward an ideal state. Characteristics of the Aon model include:
Ensuring the board understands and is committed to the risk strategy
Establishing effective risk communications
Emphasizing the ties among culture, engagement and accountability
Having stakeholder participation in risk management activities
Using risk information for decision making
This is not to say that the RIMS model ignores these issues. There is simply a different emphasis.
Also noteworthy is Protiviti’s perspective on the board of directors’ accountability for risk oversight. A few highlights include:
An emphasis on the risks that matter most
Alignment between policies and processes
Effective education and use of people and their place in the organization
Assumptions that are supportable and understood
The board’s knowledge of the right questions to ask
Focus on understanding the relationship to capability maturity frameworks
Certainly, the good governance of organizations is critical, and the board’s role is paramount. If the board is engaged and accountable for ensuring that its risk oversight is effective, the strategy is likely to be executed successfully and, by inference, risk will have been effectively managed, as well.
To complete the foundation for the business case for using a risk maturity model to track progress, consider these key points:
There is no one right approach; each organization must chart its own course aligned with its culture and priorities
Risk must be treated as an integral aspect of strategy
There must be a focus on additive value, as with all corporate processes
Risk maturity has produced a documented valuation premium for the organizations that have been studied
With the effective use of risk maturity models, you should be able to better chart your risk evolution journey and to show how a maturity strategy tied to corporate strategy and priorities becomes the nexus for success. Risk and risk management should drive performance results and reveal what remains to be done to achieve longer-term aspirations. This approach to managing your risk strategy should allow you to:
Translate the component of risk maturity into a successful ERM journey
Refer to ERM results and impacts achieved by others to buttress your efforts
Understand key tactics to exploit and pitfalls to avoid as you perfect your risk management strategy.
Using a risk maturity model will, if nothing else, provide the guard-rails and discipline that may otherwise be missing from your current attempts to make a difference in the success of your enterprise.
A large retailer gets hacked, and customer data is taken, which costs millions in expense and lost revenues. A product recall is perceived to be badly handled, which tarnishes a manufacturer’s reputation and seriously erodes revenue, as well as margins. An acquisition fails to produce the expected profit lift and hurts a technology company’s share price. These organizations have implemented ERM, and, clearly, ERM has failed. Or has it?
Let’s look at three criticisms of ERM:
ERM Cannot Identify and Protect Against All Significant Uncertainties
This criticism is fair in the most literal sense only. Even a very robust and well-administered ERM process cannot find every major risk that an organization is subject to, nor can it protect against all risks, whether identified or not. However, without ERM, the ability to identify a majority of significant uncertainties facing an organization is greatly diminished. Not only that, without an ERM approach to risk, the mitigation of known risks is more likely to be addressed silo by silo even when an enterprise-wide solution is necessary.
In addition, with ERM, organizations are generally better prepared to rebound from unexpected, unidentified risks that do hit them. For example, ERM organizations typically have very robust business continuity and business recovery plans, have done tabletop exercises or drills that simulate a crisis and have maintained a lessons-learned and special expertise file that can be called upon, as needed.
According to a post by Carrier Management, citing RIMS, “A whopping 77% of risk management professionals credit enterprise risk management with helping them spot cyber risks at their companies.”
These survey results do not suggest that chief risk officers or risk managers, who are responsible for the ERM process, are cyber experts or that all cyber risks can be specifically ascertained. Rather, the survey suggests that ERM better positions a company to discover cyber risks, just as it does with other categories of risk.
If ERM can reduce business uncertainties and surprises by identifying risks and managing them better than other forms of risk management, despite not being able to do so 100% of the time, it has not failed. In fact, it has most probably added great value. Consider a CEO who can avoid even one unnecessary sinking feeling when realizing that a risk that should have been spotted and dealt with has hit the company. How much is it worth to that CEO to prevent that feeling?
ERM Focuses on the Negative Rather Than the Positive
This criticism is not fair in any sense. It requires an upside-down view of ERM. Think about it. In almost any definition of ERM, there is some sort of statement as to the purpose or mission of ERM. The purpose is to better ensure that the organization achieves its strategy and objectives. What could be more positive?
By dealing with risks that challenge the ability of the organization to meet its targets, ERM is fulfilling an affirmative and important task. That most risks pose a threat is not disputed. But by removing, avoiding, transferring or lessening threats, organizations have a better chance of succeeding.
This is not the only positive result that can emanate from ERM’s handling of risk. Often, a thorough examination of a risk will result in opportunities being uncovered. The opportunity could take the form of innovating a product or entering a new market or creating a more efficient workflow.
Consider a manufacturer that builds a more ergonomic chair because it has identified a heightened risk of lawsuits arising from some new medical diagnoses of injuries caused by a certain seat design. Or, consider an amusement park that is plagued by its patrons throwing ticket stubs and paper maps on the ground, thereby creating a hazard when wet or covering dangerous holes or obstacles. Imagine that the company decides to reduce the risk by increasing debris pick-up and offering rewards to patrons for turning in paper to central depositories, then turns the collected paper into “clean” confetti sold to party goods manufacturers.
These are hypothetical examples, but real-life examples do exist. Some are quite similar to these. Many risk managers, unfortunately, are reticent to share their success stories in turning risk into a reward. For that matter, many are reluctant to share their successes of any kind. One could speculate why this is so. It may be as simple as not wanting to tempt the gods of chance.
ERM Is Too Expensive
Those who criticize ERM for being too expensive to implement may lack information or perspective. Consider the following questions:
Has ERM been in place long enough to produce results?
Has the organization started to measure the value of ERM (there are ways to measure it)?
Can an organization place a dollar value on avoiding a strategic risk or a loss that does not happen; does it need to?
Has the number of surprises diminished?
Are there successes along with failures?
How much is it worth to enhance the company’s reputation because it is seen as a responsible, less volatile company because of ERM?
How efficiently has the ERM process been implemented?
Is too much time being spent on selling the concept rather than implementing the concept?
Has the process and reporting of ERM results been kept clear and simple?
To answer the criticism of a too expensive process, the following are things that a company can do to make sure the process is cost-effective:
Embed the process, as far as feasible, into existing business processes, e.g. review strategic risk during strategic planning, hold ERM committee meetings as part of or right after other routine management meetings, monitor ERM progress during normal performance management reviews, etc.
Assign liaisons to ERM in the various business units and functional departments who have other roles that complement risk management.
Do not try to boil the ocean; keep the ERM process focused on the most significant risks the company faces.
Measure the value that ERM brings, such as reduction in suits or lower total cost of risk or whatever measures are decided upon by management.
In the author’s experience with ERM in various organizations, the function tends to be kept very lean (without diminishing its efficacy). If the above suggestions are adopted, along with other economical actions, the costs associated with the process can be kept in balance with, or well below, the value it delivers.
It is possible for an ERM process to be poorly executed, and thus deserve criticism. It is also possible for an ERM process to be well-executed and deserve nothing more than continuous improvement.
The caution is that no one should expect perfection or suppose that one unanticipated risk that creates a loss denotes a total failure of this enterprise-wide process. Organizations are sometimes faced with situations that are beyond a reasonable expectation of being known or managed.
It would be fair to lodge criticism of ERM under certain circumstances; for example, if an organization’s ERM process did not reveal a risk that all its competitors recognized as a risk and addressed. But even in that case, perhaps there were reasons to think the risk would not penetrate protections the organization already had in place. Suffice it to say, every process and situation must be evaluated on its own merits and within the proper context.
Data-driven analysis is a critical decision-making tool for Construction Financial Managers and other industry leaders.
Decision-making is arguably the most important responsibility of company leadership.
Companies that make better decisions make fewer mistakes, and achieve a distinct competitive advantage in the marketplace.
The underlying purpose of benchmarking is to continually improve the quality of organizational decision-making.
As construction risk management consultants, we help contractors prevent accidents, mitigate claims, and reduce the total cost of risk through a continuous improvement process.
We believe companies must instill management accountability for continuous improvement by linking performance measurement to both prevention activities (leading indicators) and operational results (lagging indicators). As the adage goes:
“What gets measured is what gets done.”
In our consulting roles, we frequently help companies establish realistic performance measures by conducting various types of claim and loss analysis.
This type of data analysis is usually the starting point in a performance improvement process — and a common practice among insurance agencies, brokerages, carriers, and risk management consulting firms.
In addition, we are often asked to conduct a benchmarking analysis that compares one company's claim and loss data against peer companies or to the construction industry as a whole.
The term “benchmarking” refers to the comparison of a company's performance results against those of similar peer companies. Benchmarking evolved out of the quality improvement movement in the late 1980s and early 1990s.
Its initial intent was to identify leading companies regardless of industry sector, and apply their best practices to improve one's own company. Over time, benchmarking has become synonymous with process improvement.
The traditional view of benchmarking required two separate disciplines focused on performance improvement: measures and methods. Identifying and capturing performance indicators (the measures) is only the first step; developing and implementing performance improvement (the methods) is the second and most important step for the benchmarking process to be truly effective.
The Health Club Analogy
There is limited value in benchmarking without applying new methods to address continuous performance improvement. Performance improvement requires more than the measurement of performance indicators; it requires the implementation of changes in management disciplines to attain improved operational results.
Using only performance indicators without implementing new methods to improve operations is akin to joining a health club and expecting the benefits without actually using the equipment or committing to an exercise program.
Merely jumping on the scale and gauging your weight relative to others doesn’t help you achieve your own weight loss goals any more than comparing your pulse and respiration rate to others helps you attain your aerobic or cardiovascular fitness goals. What matters most is that a person embarking on a weight loss or fitness program stays committed to the process and monitors his or her own progress.
Similarly, we believe the ongoing monitoring of claim and loss data specific to an individual company is even more important than the initial measurement of insurance claim and loss data relative to other companies.
Baselining As Benchmarking
The term “baselining” refers to the internal benchmarking process that occurs when a company compares its performance against its own results year after year. Ongoing, internal monitoring allows a contractor to determine if the company's claim and loss trends are improving or deteriorating, and to make the critical performance improvement decisions necessary to facilitate a change in results.
Referring back to the health club analogy, baselining does not compare an individual's weight and aerobic fitness to that of the other health club members. Instead, individual fitness goals and measures are established, monitored, and tracked to verify continuous personal improvement.
Similarly, a construction company can develop a baseline analysis of its loss cost performance by reviewing loss and claim data for a minimum of three years (ideally five). Company results are compared from year to year, and ideally are broken down by operating entity, division, project, manager, or even crew levels.
Exhibit 1 provides a sample of a baseline analysis that compares one company's relative claim and loss performance within all of its operating divisions.
This analysis reviews the historical loss cost data for the entire company and breaks it down into meaningful data relative to each operating division. The total workers' comp, commercial general liability, and auto liability incurred claim costs (the sum of paid amounts and reserves) for each company division over a five-year period were compared to the total man-hours for each division, producing a cost-per-man-hour figure.
The results illustrate dramatic differences in total claim costs per man-hour for each division. This baseline analysis was the first step in raising awareness of the predominant loss leaders within the company. This increased awareness led to a detailed analysis that established plans of action and realistic cost targets by company division for the upcoming year.
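The per-division calculation behind this kind of baseline analysis is simple arithmetic: total incurred cost (paid plus reserves) divided by total man-hours. A minimal sketch, with entirely hypothetical division figures, might look like this:

```python
# Hypothetical five-year incurred costs (paid + reserves) and man-hours by division
divisions = {
    "Civil":      {"incurred": 1_250_000, "man_hours": 2_400_000},
    "Electrical": {"incurred":   310_000, "man_hours": 1_100_000},
    "Mechanical": {"incurred":   980_000, "man_hours": 1_600_000},
}

# Rank divisions from highest to lowest cost per man-hour to surface loss leaders
for name, d in sorted(divisions.items(),
                      key=lambda kv: kv[1]["incurred"] / kv[1]["man_hours"],
                      reverse=True):
    rate = d["incurred"] / d["man_hours"]
    print(f"{name:<11} ${rate:.3f} per man-hour")
```

Sorting the divisions by that rate is what surfaces the "predominant loss leaders" and focuses the follow-up analysis.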
We acknowledge that there are numerous benefits to measuring the frequency, type, and cost of insurance claims compared to peer groups and/or the entire construction industry. Such analyses provide the ability to:
Identify leading types and sources of claims
Establish strategic objectives to prevent the occurrence of common industry claims
Create awareness among managers and employees about the costs of claims and the impact on profitability
Post positive results on company websites and for use in other marketing materials
The Bureau of Labor Statistics provides safety-related data so that companies can externally benchmark injury and illness data against specific industry groups. (Check out the Web Resources section at the end of this article for more information.)
In addition, Bureau of Labor Statistics data is used to calculate and compare OSHA Recordable Incident Rates and Lost Workday Incident Rates, both of which are common construction industry benchmarks. This data is useful when making high-level comparisons within construction industry segments relative to injury and illness rates.
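Both rates follow the standard OSHA formula: the number of cases multiplied by 200,000 hours (the annual hours of 100 full-time workers at 40 hours per week, 50 weeks per year), divided by the actual hours worked. A short sketch with hypothetical contractor figures:

```python
OSHA_BASE_HOURS = 200_000  # 100 full-time employees x 40 hours x 50 weeks

def incident_rate(cases: int, hours_worked: float) -> float:
    """Incidents per 100 full-time workers per year (standard OSHA formula)."""
    return cases * OSHA_BASE_HOURS / hours_worked

# Hypothetical contractor: 7 recordable cases, 2 lost-workday cases, 480,000 hours worked
recordable_rate = incident_rate(7, 480_000)    # OSHA Recordable Incident Rate
lost_workday_rate = incident_rate(2, 480_000)  # Lost Workday Incident Rate

print(f"Recordable incident rate:   {recordable_rate:.2f}")  # 2.92
print(f"Lost workday incident rate: {lost_workday_rate:.2f}")  # 0.83
```

Because both rates are normalized to the same 200,000-hour base, they can be compared across companies of very different sizes within an industry segment.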
We also use external benchmarking analyses to establish risk reduction, loss prevention, or cost containment goals. In “Risk Performance Metrics” by Calvin E. Beyer in the September/October 2007 issue of Building Profits, a sample benchmarking comparison shows a representative contractor's duration of lost workdays workers' comp cases in median number of days compared against the median duration for the industry. Results such as these can highlight the importance of an increased focus on injury management and return-to-work programs.
The benchmarking analysis in Exhibits 2A and 2B compares a contractor's workers' comp claim and loss performance to an established group of peer contractors in the same specialty trade. (These companies engaged in similar work, and performed in states with similar insurance laws and legal climates.)
The analysis was based on total incurred workers' comp costs and total number of workers' comp claims as compared to payroll for each entity. Overall, Company D had worse results than the other three companies.
This prompted an in-depth review of Company D's workers' comp losses by division and occupation. As shown in Exhibit 3, the company experienced significant claim frequency and severity issues within the first six months of employment.
These findings triggered the development and implementation of specific activities designed for Company D's new employees.
Below are some of the activities that were incorporated into the formal improvement plan:
new hire skills assessments
daily planning meetings
Other Sources Of Benchmarking Data
Professional associations and industry trade/peer groups also provide comparative data for benchmarking purposes.
The Construction Financial Management Association's Construction Industry Annual Financial Survey is an excellent source for understanding the key drivers of contractor profitability. We use the survey data to determine comparative profit margins for different types and classes of contractors when we calculate a revenue replacement analysis to show the additional sales volume needed to offset the cost of insurance claims. (This technique was highlighted in the “Risk Performance Metrics” article previously mentioned.)
Similarly, the Risk and Insurance Management Society (RIMS) conducts an annual benchmarking survey that reviews insurance rates, program coverages, and measures of total cost of risk.
An example of a peer group data source for benchmarking is the Construction Industry Institute (CII). The Construction Industry Institute is a voluntary “consortium of more than 100 leading owner, engineering-contractor, and supplier firms from both the public and private arenas” (www.construction-institute.org). It develops industry best practices and maintains a benchmarking and metrics database for its participating members.
Another peer group example involves members of captive insurance companies sharing and comparing claim and loss data for the group as a whole. There is a major advantage when a true peer group shares benchmarking data: Such data sharing often leads to peer pressure in the form of increased ownership and accountability for improvement by the companies shown to be the poorest performing members.
We continue to search for new sources of industry best practices and comparator data. A possible emerging source for the construction industry is the National Business Group on Health. This organization has developed standardized metrics known as Employer Measures of Productivity, Absence and Quality™ (EMPAQ®).
EMPAQ® helps member companies gauge the effectiveness of their injury and absence management and return-to-work programs. The founder and principal of HDM Solutions, Maria Henderson, served as a project sponsor for EMPAQ® from 2003-2007, and co-presented with Calvin E. Beyer on “Return to Work as a Workforce Development Strategy” at CFMA's 2008 Annual Conference & Exhibition in Orlando, Florida.
Limitations Of External Benchmarking
We fear that the increasing popularity of external benchmarking analyses may indicate that it has become a “quick fix” solution or a management fad. When asked to conduct an external benchmarking analysis, we always ask the following questions:
What is your purpose in seeking these comparisons with other companies?
Who are you trying to convince and what are you trying to convince them to do?
What specific peer companies should be used for comparative purposes?
Are these companies (and their operations and exposures) truly similar enough for a fair comparison?
Beware Of Pitfalls
There are many hurdles to surmount in locating suitable companies for external benchmarking comparisons. When comparisons can be made, the greatest value most often lies in the workers' comp line of insurance coverage.
Here are some key factors to consider when choosing contractors for external benchmarking comparisons:
Percent of self-performed work vs. subcontracted work
Payroll class codes and hazard groupings of self-performed work
Differential geographic labor wage rates
Payroll rate variances between union and merit shop operations
Size of insurance deductibles
Claim reporting practices
For example, claim reporting practices must be similar in order to minimize distorting the frequency or average cost of a claim. If one or more comparison companies self-administers minor claims or does not report all claims to their carrier, using carrier loss reports for the comparison is an invalid method.
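A quick worked example shows how unlike reporting practices distort a comparison. The claim amounts below are hypothetical: Contractor A reports every claim to its carrier, while Contractor B self-administers claims under $1,000, so those claims never appear on B's carrier loss runs.

```python
# Both contractors actually experienced the same five claims.
all_claims = [500, 800, 900, 5_000, 20_000]          # A's carrier loss run
reported_b = [c for c in all_claims if c >= 1_000]   # what B's loss run shows

avg_a = sum(all_claims) / len(all_claims)   # 5 claims, average $5,440
avg_b = sum(reported_b) / len(reported_b)   # 2 claims, average $12,500
```

With identical underlying losses, B appears to have lower claim frequency but a far worse average cost per claim, which is exactly the distortion that makes carrier loss reports an invalid comparison basis here.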
We also find that comparing the frequency of claims and total loss dollars divided by thousands or millions of dollars of payroll (exposure basis) is a helpful workers' comp benchmark between companies of similar operations in similar states.
Likewise, a suitable benchmark for auto liability performance compares the frequency of claims and total loss dollars per one hundred vehicles.
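Both benchmarks are simple rate calculations: divide the claim count or loss dollars by the exposure basis, then scale to a common unit. A minimal sketch, using hypothetical payroll, loss, and fleet figures:

```python
def rate_per_exposure(amount: float, exposure: float, per_unit: float) -> float:
    """Normalize a claim count or loss total to a common exposure basis."""
    return amount / exposure * per_unit

# Workers' comp, per $1 million of payroll (hypothetical figures)
wc_freq = rate_per_exposure(12, 8_000_000, 1_000_000)       # 1.5 claims per $1M payroll
wc_sev = rate_per_exposure(240_000, 8_000_000, 1_000_000)   # $30,000 losses per $1M payroll

# Auto liability, per 100 vehicles (hypothetical fleet of 250)
al_freq = rate_per_exposure(5, 250, 100)                    # 2.0 claims per 100 vehicles
```

Expressing each line of coverage on its own exposure basis is what allows apples-to-apples comparison between contractors of different sizes.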
When benchmarking fleet-related claims, ensure that the number and size of fleet vehicles — as well as the type of driving (urban vs. rural) and the total number of miles driven annually — are similar among the contractors whose claims are being compared.
Benchmarking comparisons of Comprehensive General Liability insurance results are especially challenging due to delays in reporting third-party bodily injury and property damage claims, in addition to the expected long tail of loss development for these claims.
All of these factors are compounded by vastly different litigation trends and liability settlements in various states and regions of the country.
Common Limitations Of Data Sources
Whether you intend to develop a baseline of your company's claim data or to benchmark your company's performance against a peer company, several issues regarding the data's quality and integrity must be resolved.
Based on our experience, we classify the key challenges associated with exposure and claim/loss data into the categories shown in Exhibit 4: availability, accuracy, accessibility, standardization, reliability, comparability, and date-related problems.
Value Of Multiple Measures
Evaluating data from various sources and different angles is also valuable. Why? Because it's possible to gain a better understanding of the whole by dissecting the parts. This practice illustrates the principle of multiple measures.
This approach is substantiated by 2006 research, which concluded that the “simultaneous consideration” of frequency and severity provides a more comprehensive result than performing analysis based solely on one factor.1
This is similar to our approach when we conduct a “Claim to Exposure Analysis” and review historical frequency and severity vs. the relative bases of exposure for each line of casualty insurance coverage.
Returning to the health club analogy, when starting a formal exercise program, you often begin with such general baseline measurements as height and weight; this is usually followed by additional measurements, such as BMI, body fat content, and the girth of arms, legs, and chest (the baseline).
As we all know, weight alone is not always the best indicator of success in fitness efforts. In fact, since muscle weighs more than fat, an increase in total body weight may actually occur after beginning and maintaining a fitness program.
Although you might not experience a dramatic weight drop, you could see a reduction in waist size and BMI — positive changes that would not be evident unless multiple measures were being used and reviewed.
Benchmarking insurance claim and loss data performance is like comparing one person's height and weight against the ideal height and weight charts based on the entire population.
Wouldn't it be more effective to establish your baseline weight and other multiple measures initially so you can see the progress you are making?
Similarly, a company should establish baseline measurements (along with multiple measures) to track progress toward its performance improvement goals for financial success, operational excellence, or risk reduction.
Cal Beyer collaborated with Greg Stefan in writing this article. Greg is Assistant Vice President, Construction Risk Control Solutions, at Arch Insurance Group. As a member of the Southeast Regional team in Atlanta, GA, Greg supports underwriting and claims in risk selection, claim mitigation, and risk improvement activities. He is also responsible for high-risk liability risk reduction initiatives including contractual risk transfer, construction defect prevention, and work zone liability management.
1 Baradan, Selim, and Usmen, Mumtaz A., “Comparative Injury and Fatality Risk Analysis of Building Trades,” Journal of Construction Engineering and Management, May 2006, pp. 533-539.