Tag Archives: enterprise risk management

Key Indicators of Weak ERM Programs

Almost every insurer has an official list of risks, often referred to as a risk register. Maintaining a risk register is a basic step in managing risks, following risk identification, prioritization, assignment of risk owners and creation of mitigation plans. 

One problem with many risk registers is that they are filled with generic risks. Although these risks may be real ones for the company, their lack of specificity prevents a true understanding of them at the necessary level of detail and makes it hard to plan targeted mitigations for them.

For example, a risk register might show a risk such as “premium receivables may be late, resulting in ‘over 90s’ or uncollectable premiums,” a risk that every insurer has to some degree. However, for the company in question the real risk is “underwriters may have too much discretion to change premium collection terms and conditions leading to ‘over 90s’ or uncollectable premium.” The generic version does not indicate the root cause of the risk and can lead to ineffective mitigation strategies.

Or, a risk register may show a risk as “difficulty in attracting talent for open positions” when the real risk is “social media and internet sites may not present the company in a good light, making it hard to attract talent.” By stating a generic risk, management does not have to admit what it may not want to acknowledge.

Yet another example is a risk register that states an IT risk as “too many legacy systems still exist, creating data and service issues,” when the actual risk is “the XYZ underwriting system is not adequately integrated with other systems to create accurate data, seamless processing or a competitive customer experience.” Not naming the culprit system(s) omits the source and scope of the risk, and not spelling out the effects of the risk omits what is truly at stake if it is not addressed.

The more nebulously a risk is characterized, the less clear it is who should be the risk owner. Without a clear and appropriate risk owner, the chance increases that the risk will not be adequately addressed.

Regardless of the category of risk, without specifics the entries in many risk registers seem more for external consumption than internal action. If the same list of risks could be adopted by any other insurer of the same size, age and business mix, then it is not fit for purpose for the insurer whose risks it is supposed to represent. It may be fine for an externally published list of risks to lack detail that could be considered proprietary, so long as it meets certain thresholds, but it is not fine for a list intended for internal use.

See also: Risks, Opportunities in the Next Wave  

Another big problem with risk registers is that many do not include the strategic risks the company needs to be concerned about. Strategic risks tend to stem from the vision, mission and goals of the company. A strategic risk might concern the lines of business written, the customer segments targeted or the geographic footprint. For example, a risk for a monoline workers’ compensation insurer might be “premium volume may shrink significantly in the next five years due to robotics and AI reducing the size of the workforce.” A risk for an “internet only” insurer might be “there may be difficulty reaching sufficient scale because of the lack of barriers to entry by identical competitors and because some buyers will never buy over the internet.” Such an insurer will also have a talent risk because of competition for IT talent across all insurers and industries.

Or, a risk for an insurer that has high concentrations in Cat-prone states might be: “Without further geographic expansion, the lack of diversification may hurt profitability significantly.”  

It is simply not common to see these types of strategic risks listed in the risk register. Yet, strategic risks tend to be the most existential of all risks. In the past, some large insurer failures stemmed from strategic risks not being addressed appropriately or at all. For example, insurers have failed to recognize strategic risks such as undisciplined growth or delayed reaction to underperforming books of business, and they have paid a steep price for that lack of recognition.

An additional problem with risk registers is the mediocrity of the planned mitigations. A good risk register should minimally show: 1) the risk, 2) its ranking as to impact and likelihood, 3) the risk owner, 4) the planned mitigation and 5) the status of the mitigation efforts at each update of the register.
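
As a concrete illustration of those five fields, here is a minimal sketch of what one register entry might look like in code. The class, field names and 1-to-5 ranking scale are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch only: a minimal risk-register entry carrying the five
# fields named above. Class and field names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class MitigationStatus(Enum):
    NOT_STARTED = "not started"
    IN_PROGRESS = "in progress"
    COMPLETE = "complete"


@dataclass
class RiskRegisterEntry:
    risk: str                      # specific, root-cause wording, not a generic label
    impact: int                    # ranked 1 (low) to 5 (high)
    likelihood: int                # ranked 1 (rare) to 5 (almost certain)
    risk_owner: str                # a named, accountable owner
    planned_mitigation: str        # avoid, transfer, minimize, or accept with contingency
    mitigation_status: MitigationStatus = MitigationStatus.NOT_STARTED

    @property
    def priority(self) -> int:
        """Simple impact x likelihood ranking for sorting the register."""
        return self.impact * self.likelihood


entry = RiskRegisterEntry(
    risk=("Underwriters may have too much discretion to change premium "
          "collection terms, leading to 'over 90s' or uncollectable premium."),
    impact=4,
    likelihood=3,
    risk_owner="Chief Underwriting Officer",
    planned_mitigation="Tighten authority limits and add a second-signature control.",
)
print(entry.priority)  # 12
```

Keeping each entry in a structure like this also makes it easy to track the status of mitigation efforts at every update of the register.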

Undoubtedly, it is key to identify the risks, but identifying and recording them does nothing to help the organization unless there is adequate mitigation. Mitigation can take many forms: avoiding, transferring, minimizing or accepting the risk, albeit with a contingency. A planned mitigation that is too weak, too expensive relative to the risk or impossible to implement will not benefit the organization. Worse yet, an inadequate mitigation may allow the risk to grow while the board or senior management thinks it is being reduced.

The mitigations in the register should not be just a recounting of current controls or existing risk-reducing practices; they should be innovative and robust tactics for attacking the risks.

Boards, senior management and chief risk officers should evaluate their risk registers based on these questions (a simple self-assessment sketch follows the list):

  • To what extent are risks stated clearly and specifically?
  • Are there risks included that are unique to the company?
  • Based on how the risk is stated, is it clear who the risk owner should be?
  • Based on how the risk is stated, does it help to pinpoint what type of mitigations are needed?
  • To what extent are strategic risks included?
  • Are there current or emerging strategic risks that are not included?
  • Are the planned mitigations equal to the seriousness of the risks; i.e. are they sufficiently robust? 
  • Is the cost of the planned mitigation in balance with the potential impact of the risk?   
  • Are the planned mitigations attainable, implementable?
  • Is the mitigation plan implementation on track?
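
One way to make that review tangible is to turn the questions into a periodic self-assessment. The sketch below is purely illustrative; the 0-2 scoring scale and the 70% threshold are assumptions, not a standard.

```python
# Hypothetical self-assessment sketch: score each register question 0-2
# (0 = no, 1 = partially, 2 = yes) and flag the register for rework if the
# total falls below a threshold the organization chooses.
QUESTIONS = [
    "Risks are stated clearly and specifically",
    "Risks unique to the company are included",
    "The risk owner is clear from how each risk is stated",
    "The wording helps pinpoint the mitigations needed",
    "Strategic risks are included",
    "Current and emerging strategic risks are not missing",
    "Planned mitigations match the seriousness of the risks",
    "Mitigation cost is in balance with potential impact",
    "Planned mitigations are attainable and implementable",
    "Mitigation plan implementation is on track",
]


def assess_register(scores: dict, threshold: float = 0.7) -> bool:
    """Return True if the register passes the chosen threshold (share of max score)."""
    total = sum(scores.get(q, 0) for q in QUESTIONS)
    return total / (2 * len(QUESTIONS)) >= threshold


scores = {q: 1 for q in QUESTIONS}          # everything scored "partially"
print(assess_register(scores))              # False at a 70% threshold
```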

Bottom line, a poorly constructed risk register points to a failure of the entire ERM process and practice. As an essential tool for managing risk across the enterprise, it reveals a lot about how well risk is being managed. Thus, the register can be a good indicator of the overall state of ERM in the organization.

A Renewed Focus on EERM Practices

With third-party risks on the rise, there is renewed focus on maturing extended enterprise risk management (EERM) practices within most organizations. This focus appears to be driven by a recognition of underinvestment in EERM, coupled with mistrust of the wider uncertain economic environment.

To understand the broader risk environment and provide organizations with the insights needed to effectively assess their risk and adapt processes accordingly, Deloitte recently conducted the EERM Risk Management Survey 2019, obtaining perspectives from more than 1,000 respondents across 19 countries covering all the major industry segments. Results shed light on crucial considerations surrounding economic and operating environments; investment; leadership; operating models; technology; and affiliate and subcontractor risk. More specifically:

Economic and operating environment: Economic uncertainty continues to drive a focus on cost reduction and talent investment in EERM. The main drivers for investing in third-party risk management are: cost reduction, at 62%; reduction of third-party-related incidents, at 50%; regulatory scrutiny, at 49%; and internal compliance, at 45%. Organizations urgently want to be more coordinated and consistent in extended enterprise risk management across their organization, as well as to improve their processes, technologies and real-time management information across all significant risks.

Investment: Piecemeal investment has impaired EERM maturity, left certain risks neglected and hurt core basic tasks. Only 1% of organizations say they address all important EERM issues, and only a further 20% say they address most EERM issues. One of the main reasons for this maturity stall is that organizations are taking a piecemeal approach to investment – they are mostly making tactical improvements rather than investing in strategic, long-term solutions. This piecemeal approach has led to certain areas – such as exit planning and geopolitical and concentration risk – being neglected, and some organizations not doing core basic tasks well, such as understanding the nature of third-party relationships and related contractual terms.

See also: The Globalization of Risk Management  

Leadership: Boards and senior executives are championing an inside-out approach to EERM, which includes better engagement and coordination and smarter use of data. The survey reveals that boards and executive leadership continue to retain ultimate responsibility for EERM in the majority of organizations. Better engagement and coordination across internal EERM stakeholders is a top priority for boards and senior leaders. Boards are moving away from using periodically generated data to more succinct and real-time, actionable intelligence, generated online. But who has ultimate responsibility for third-party risk management? According to the survey results, 24% indicated the chief risk officer, 19% indicated other board members and 17% indicated the CEO.

Operating models: Federated structures are the most dominant operating model for EERM, underpinned by centers of excellence and shared services. More than two-thirds, 69%, of respondent organizations say they adopt a federated model, and only 11% of organizations are now highly centralized, which is down from 17% last year. Investments in shared assessments and utilities, and managed services models, are also increasing. Furthermore, co-ownership of EERM budgets is also emerging as a trend. Robust central oversight, policies, standards, services and technologies, combined with accountability by business unit and geographical leaders, is a pragmatic way to proceed.

Technology: Organizations are streamlining and standardizing EERM technology across diverse operating units. The survey confirms Deloitte’s prediction last year that a three-tiered approach for third-party risk management will continue. Smartly coordinated investments in third-party risk management technology across three tiers can drive efficiency, reduce costs, improve service levels, increase return on equity and create a more sustainable operating model. More specifically, 59% of the respondents adopted tier one, 75% adopted tier two and tier three continues to grow.

Affiliate and subcontractor risk: Organizations have poor oversight of the risks posed by their third parties’ subcontractors and affiliates. The lack of appropriate oversight of subcontractors is making it difficult for organizations to determine their strategy and approach to the management of subcontractor risk. Only 2% of survey respondents identify and monitor all subcontractors engaged by their third parties. And a further 8% only do so for their most critical relationships. Leading organizations are starting to address these blind spots through “illumination” initiatives to discover and understand these “networks within networks.” Less than 32% of organizations evaluate and monitor affiliate risks with the same rigor as they do other third parties. As affiliates are typically part of the same group, organizations are likely to have a higher level of risk intelligence on them than other third parties.

See also: Is There No Such Thing as a Bad Risk?  

For more information on Deloitte’s “2019 Extended Enterprise Risk Management Survey,” or to download a copy, please visit their website here. You can find the full report here.

Risk Culture Revisited: A Case In Point

True, a great deal has been written about the importance of inculcating a positive risk culture if an organization is serious about managing its enterprise risk. Yet, when it comes to discussions about organizational culture, many executives’ eyes glaze over because the topic is too nebulous or because they have no idea how to influence or develop a particular type of culture. Underwriters, considering an application from a commercial customer, generally do not look too deeply into the company’s risk culture. Given that risk is growing in magnitude and variety and with increasing speed of onset, it behooves leaders to take concrete actions to establish a sound risk culture or to maintain one if it already exists. And underwriters should also be interested in the risk culture of accounts they write for the same reasons.

Often, I am inspired to write about something because of news I hear or read. In this case, something on the Law360 website caught my attention: A woman slipped and fell near a collapsed “wet floor” sign at a casino. This person, Ms. Sadowski, suffered serious injuries and was awarded $3 million by an Ohio jury.

“The sign lay flat on the floor that day in September 2016, and a Jack Cincinnati Casino employee even walked around it but did not pick it up,” Sadowski’s attorney, Matt Nakajima, said, according to the Cincinnati Enquirer. He said that, moments later, Sadowski tripped over it and broke one of her knee caps. There were no safety measures in place for floor inspections or fall prevention, he said, and the employee who walked around the collapsed sign was not reprimanded. So, despite the use of “wet floor” signs, other aspects of risk management were purportedly absent.

It seems the jury believed Nakajima’s description. If the description is accurate, the part about an employee walking around a collapsed “wet floor” sign is very troubling, as is the fact that there were no consequences for the employee. These kinds of actions point to a lack of a risk-aware culture at various levels.

See also: Building a Risk Culture Is Simple–Really  

So, how do leaders build a risk culture, and how do underwriters probe to see what kind of risk culture exists in their prospective insureds’ organizations?

Three Basic Steps to Build Risk Culture

  • Articulate the organization’s position on managing risk at key communication junctures and through different media with employees: 1) hiring interview, 2) orientation, 3) staff meetings, 4) webcasts, newsletters, bulletin boards.
  • Include a risk culture criterion in all performance reviews; e.g., does the employee perform duties safely and address or report hazards/risks when they are identified? Evaluate positively or negatively, as warranted. Celebrate exemplary cases of risk awareness or risk mitigation.
  • Ensure that policies, procedures and work instructions all describe what is expected in terms of safety, precaution and risk reporting.

Three Basic Data Points for Underwriters to Ascertain

  • Does the organization have any losses in the loss history that show an egregious lack of risk awareness?
  • Does the organization practice ERM or, at least, have policies around required safety measures, risk/hazard reporting, training on avoiding cyber and other risks, etc.?
  • Does the organization discuss or evaluate risk awareness as part of normal performance management?

At a time when every insurer is streamlining the information it requests from potential insureds, adding more requests for data seems antithetical. However, in light of the thousands of ways that employees can create, increase or decrease risk in an organization, the culture they embrace is very important. For example, an HR staffer who delays inputting an employee termination to the appropriate systems can create huge data and physical security risks. Likewise, a factory worker who leaves equipment running while going on break, when it should be turned off, can create safety and property risk. Or, consider a finance employee who thinks a spoofed email is actually from the CEO and sends a payroll check to the hacker’s account because there was no secondary control or it was not adhered to. The questions above will help underwriters to get a glimpse of the risk culture at the company they are evaluating.

See also: Thinking Differently: Building a Risk Culture  

A risk-aware culture plays a role regardless of the category of risk: financial, operational, legal, cyber, human resource, strategic, etc. Everyone from the top to the bottom of the organization needs to have an automatic and quick-fire gut check regarding their actions – am I creating a risk by taking this action; have I recognized the risks in the situation that is leading me to action; do I need to vet a recognized risk with others? When an organization reaches the point where this type of thinking is natural, and almost universal, then it can be said that a positive risk culture has been embedded.


Integrating Cyber Risk in ERM Framework

Enterprise risk management (ERM) is often viewed as a bureaucratic and unnecessary process, subtly or overtly motivated by regulation, accompanied by internal risk leadership kingdom building and suggesting an unclear value proposition. Occasionally, these perceptions are correct, and ERM fails. Yet, there is hope for a successful ERM approach with the right motivations and when designed and implemented with the real business goals and culture of the organization in mind. This is when ERM becomes an invaluable approach to learning about and managing truly destructive risks. A successful ERM approach also creates a clearer lens for seeing and responding to emerging risks, including potential impacts, and helping to prioritize the more valuable solutions. The resulting ERM processes are, however, often fraught with hurdles, preventing many organizations from achieving a level of risk astuteness and maturity beyond ad-hoc decision making.

Few risks affect organizations with the diversity, impact and pervasiveness of cyber. As we are now a truly internet-connected and -dependent world, few organizations escape material exposure to this ever-evolving risk and its wide range of impacts; fewer still seem to have effective plans for cyber risk mitigation or an ability to calculate the value “in play” gained, or not, from their cybersecurity strategies. This is not to say many organizations haven’t addressed or aren’t trying to address cyber risk. Beyond regulatory requirements, no effective governance structure today would allow management to ignore or fail to actively investigate this increasingly complex enterprise-wide risk. Even so, why would cybersecurity become a clarion call for ERM? What role does ERM play in helping to solve the cyber dilemma and assess this critical cross-enterprise risk? We are glad you asked.

Every organization should approach risk management in a way that is effective for itself and its key stakeholders, both internal and external. This sounds good but, as mentioned, is hard to accomplish. ERM often means something much less than a comprehensive, multi-step framework and numerous processes addressing a full gamut of ERM components. ERM should at least mean, however, that those elements that most meaningfully contribute to solving the problem (i.e. understanding and controlling the risk) are employed. Certainly, at a minimum, this means identifying and valuing the significance of the exposure, treating it appropriately and then monitoring its status until it is no longer a significant threat. However, is it necessary to first build a risk culture, create a risk appetite, implement a risk tolerance strategy, appoint risk liaisons across the business, establish ERM committees and invest in sophisticated risk modeling?

Likely not, unless your key stakeholders suggest or regulation requires otherwise. ERM processes can easily become overly complicated and burdensome, often working to slow or complicate risk identification and mitigating responses and unnecessarily constraining the business. Further, many ERM processes focus repetitively on risks with a potential for the most obvious and severe impacts (larger inherent risks), sacrificing an ability to otherwise tease out emerging risks and those subtle, often related, frequency risk impacts (lower-level risks), which may be slowly (or rapidly) correlating across the business. ERM frameworks primarily focused on a severity approach, unfortunately, result in a blurry ERM lens and may inadvertently expose the organization to emerging and systemic risk blind-spots. A good example of an emerging risk blind-spot is the various risks found today within a category of risks associated with information security (i.e. cyber risks).

See also: Why Risk Management Is a Leadership Issue

Cyber risks are a notably different type, when compared with the types of risks historically addressed within an enterprise-wide risk management framework. Why? Cyber risk management is analogous to identifying and responding to risk impacts from multiple, simultaneous “smart tornadoes” (e.g., advanced persistent threats).

For example, consider these two facts: 1) cyber risk can be high-frequency and low-severity, or high-frequency and high-severity, at the same time; and 2) cyber risk “impacts” vary widely depending on complexity of known and unknown harm administered, success rate of harm administered and internal acceleration of any such harm (dwell time, lateral movement, then organizational detection and response). These variables create an infinite number of impacts and costs, matrixed across a business.
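
To see how quickly those variables multiply into a wide range of outcomes, consider a toy simulation. Every distribution and parameter below is invented for illustration; this is not an actuarial or threat model.

```python
# Toy Monte Carlo sketch (all parameters invented) showing how incident
# frequency, dwell time and detection speed spread cyber impacts widely.
import random

random.seed(7)

def simulate_year() -> float:
    """Total annual cyber loss from a random number of incidents."""
    incidents = random.randint(0, 12)                 # frequency can be high or low
    total = 0.0
    for _ in range(incidents):
        base_harm = random.lognormvariate(10, 1.5)    # severity varies by orders of magnitude
        dwell_days = random.expovariate(1 / 30)       # longer dwell time ...
        lateral_factor = 1 + dwell_days / 30          # ... allows more lateral movement
        detected_early = random.random() < 0.6        # detection and response cut the impact
        response_factor = 0.4 if detected_early else 1.0
        total += base_harm * lateral_factor * response_factor
    return total

losses = [simulate_year() for _ in range(10_000)]
losses.sort()
print(f"median annual loss: {losses[5000]:,.0f}")
print(f"99th percentile:    {losses[9900]:,.0f}")
```

Even this crude sketch produces annual outcomes spanning several orders of magnitude, which is the point: frequency and severity interact with dwell time and response speed rather than sitting neatly in one cell of a risk matrix.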

This is an unusual risk behavior, to say the least, and today’s dynamic cyber risk ecosystem creates a delicate challenge for many in the information security profession. When a person proclaims (or attests, or suggests) “don’t worry, we have cyber risk covered” (e.g., managed or otherwise solved for), then she is suggesting an ability to see the future. In other words, she is implying that she generally knows how those smart cyber tornadoes are going to behave outside, inside and throughout the business, every day.

Admittedly, for most, it is difficult to acknowledge what we do not know and, especially, the vulnerability we may have in facing a first-of-its-kind risk management challenge – with various risks we are unlikely to completely mitigate. However, as more and more businesses engage cloud service providers and increase use cases for Internet of Things (IoT) endpoints, organizational key stakeholders, such as boards of directors, regulators and rating agencies, are becoming increasingly concerned about how organizations are identifying gaps in cybersecurity efforts. There is movement by these stakeholders to test and confirm that risk management processes are in effect and that the enterprise is identifying and responding to risks associated with those smart cyber tornadoes.

It is important to understand that even if an organization believes it “has cyber risk covered” by virtue of its current information security (‘InfoSec’) approach, there is still, for many, a critical regulatory requirement to assess the cybersecurity risk itself. Failure to adequately identify, test, monitor, trend and report on enterprise-wide cyber risks creates significant financial, regulatory, reputational and operational exposure for the organization. Static reports that capture log data but are not otherwise normalized or matched to enterprise risk profiles and controls are arguably not offering complete or robust information to the enterprise, for either historical or prospective time periods. And, when we say a risk is managed, it is important to note we are applying a risk management term of art – regulators often have definitions and tests to demonstrate assurance.

Managing a risk means identifying, tracking, scoring and valuing, normalizing and trending risk performance, including the net impacts. These steps are performed in accordance with compliance standards and aligned with risk tolerance. Management also includes evaluating how the risk profile (e.g., an enterprise grouping of all defined cyber risks) is changing over time (and we know it is changing) and what key risk impacts the organization is facing from the portfolio of (cyber) risks. This is where the ERM framework and ERM processes can help.
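
A minimal sketch of what tracking and trending against a stated tolerance could look like in practice follows; the scores, dates and tolerance value are hypothetical, not a prescribed methodology.

```python
# Hypothetical sketch: trend a cyber risk's residual score over successive
# assessments and flag breaches of a stated risk tolerance.
from datetime import date

risk_tolerance = 12              # maximum acceptable residual score (impact x likelihood)

assessments = [                  # (assessment date, impact 1-5, likelihood 1-5)
    (date(2019, 1, 31), 4, 4),
    (date(2019, 4, 30), 4, 3),
    (date(2019, 7, 31), 3, 3),
]

for when, impact, likelihood in assessments:
    score = impact * likelihood
    status = "WITHIN tolerance" if score <= risk_tolerance else "BREACHES tolerance"
    print(f"{when}: residual score {score:2d} -> {status}")
```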

The existence of an ERM framework does not provide a carte blanche solution for cyber risk management or mitigation of undesirable cyber risk outcomes. Instead, consider ERM a distinct, enterprise-wide enabler for addressing cyber risk management. In many cases, in-force ERM processes and protocols provide the “plumbing” that InfoSec leaders can immediately access and rely on to deploy quick(er) cyber risk identification, monitor the effects of specific risk mitigation strategies and capture and analyze overall enterprise-wide cybersecurity results.

The interplay between ERM and InfoSec serves a critical function for the business. It helps to optimize risk management resources to ensure the InfoSec team is able to focus on the cybersecurity battle at hand. Hacker-driven intrusions and internal actors, along with many other threat vectors and attack surfaces, keep the InfoSec community scrambling for the best depth of defense and tactical offenses required to maintain uptime productivity, lower dwell times, accelerate responses and ensure overall data governance. Meanwhile, together with ERM, InfoSec faces global regulation of personal data actively shifting underfoot, resulting in increasing complexities and wider adoption of cybersecurity regulatory standards.

These newly enacted regulatory standards are providing regulators with an ability to dig deep and assess enterprise-wide cybersecurity risk management. For instance, the National Association of Insurance Commissioners recently said:

“State insurance regulators have undertaken a number of steps to enhance data security expectations to ensure these entities are adequately protecting this information. As part of these efforts, the NAIC developed Principles for Effective Cybersecurity that set forth the framework through which insurance regulators will evaluate efforts by insurers, producers, and other regulated entities to protect consumer information entrusted…(sic)”

Additionally, the New York Department of Financial Services recently said:

“Given the seriousness of the issue and the risk to all regulated entities, certain regulatory minimum standards are warranted, while not being overly prescriptive so that cybersecurity programs can match the relevant risks and keep pace with technological advances. Accordingly, this regulation is designed to promote the protection of customer information as well as the information technology systems of regulated entities. This regulation requires each company to assess its specific risk profile and design a program that addresses its risks in a robust fashion. Senior management must take this issue seriously and be responsible for the organization’s cybersecurity program and file an annual certification confirming compliance with these regulations. A regulated entity’s cybersecurity program must ensure the safety and soundness of the institution and protect its customers.”

It is important to note that both regulatory agencies are concerned with evaluating enterprise-wide cybersecurity risk – which, in turn, leads us back to the enterprise-wide risk management “plumbing” and risk governance processes and how the ERM-InfoSec interplay can be helpful in achieving organizational risk management objectives.

As an example, we can consider how to use the NIST CSF (National Institute of Standards and Technology Cybersecurity Framework) as a starting point for an enterprise-wide cyber risk identification exercise. The NIST framework offers a diagnostic approach for assessing an organization’s technical cyber risk profile (the current state) versus desired risk tolerance and outcomes (the target state).

Separately, using a similar approach, ERM can be assessed through commonly adopted risk maturity evaluative frameworks. One such framework is the RIMS Risk Maturity Model (RIMS RMM). This model shares several diagnostic themes with the NIST CSF, including evaluations of risk identification, risk culture, risk resiliency and risk governance. (National Association of Insurance Commissioners, 2014)

See also: How Insurtech Boosts Cyber Risk  

The common themes across several functional topics create an opportunity to explore the correspondences between the two frameworks. Scores can be mapped and linked, effectively creating an integrated overall score, by applying relativity factors that capture the directional relationships between the two frameworks. For instance, how might low technical cyber risk scores, such as weak data loss prevention (DLP) oversight, inform and potentially change the ERM score addressing risk (data) governance? When properly integrated, the NIST CSF and RIMS RMM provide a synchronized view of data governance, privacy and enterprise-wide cybersecurity performance.
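
A rough sketch of that kind of score mapping follows. The category names, scales, weights and “relativity factors” are invented for illustration and do not reflect the published NIST CSF or RIMS RMM scoring.

```python
# Illustrative sketch only: blend a technical NIST CSF-style score into a
# RIMS RMM-style maturity score using "relativity factors". All values,
# scales and category names are hypothetical.
nist_scores = {            # technical current-state scores, say on a 0-4 scale
    "data_loss_prevention": 1.0,
    "access_control": 3.0,
    "incident_response": 2.5,
}

# How strongly each technical area should pull on the ERM "risk (data)
# governance" maturity attribute (weights sum to 1).
relativity_factors = {
    "data_loss_prevention": 0.5,
    "access_control": 0.3,
    "incident_response": 0.2,
}

rmm_governance_score = 3.4     # stand-alone ERM maturity score, say on a 0-5 scale

technical_signal = sum(
    nist_scores[k] * relativity_factors[k] for k in nist_scores
) / 4 * 5                      # rescale the 0-4 technical blend onto the 0-5 maturity scale

integrated = 0.6 * rmm_governance_score + 0.4 * technical_signal
print(f"integrated data-governance score: {integrated:.2f} / 5")
```

In this toy example, a weak DLP score drags the integrated data-governance score well below the stand-alone ERM maturity rating, which is exactly the kind of directional signal the mapping is meant to surface.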

An integrated analysis, such as a combined NIST CSF plus RIMS RMM approach, helps an organization accelerate its ERM and InfoSec risk management performance and increases risk awareness. In turn, increased risk awareness leads to becoming more risk astute. When an organization is more risk astute, it is maturing in its risk management thinking, as evidenced by positive returns on risk investments and system-wide risk mitigation solutions prioritized and finely attuned to best support organizational growth and profitability. Most importantly, it is increasing its cyber resiliency while deploying strategic cyber risk management.

The company that successfully integrates a robust cyber risk management approach with its ERM framework is at a distinct competitive advantage. Not only is such an organization effectively managing its resources and expenses; it is linking cybersecurity to its business goals, enterprise risk profile and strategic vision.

How to Avoid Failed Catastrophe Models

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global (re)insurance industry. Underwriters depend on them to price risk, management uses them to set business strategies and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit-for-purpose one day might quickly become obsolete if it is not updated for changing business practices and advances in our understanding of natural and man-made events in a timely manner.

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region. In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, the existence of a previously unknown fault beneath Christchurch and the fact the city sits on an alluvial plain of damp soil created unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of underground garages and electrical infrastructure in New York City to storm surge, a secondary peril in wind models that did not consider the placement of these risks in pre-Sandy event sets.

Such surprises affect the bottom lines of (re)insurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model. However, there is a silver lining for (re)insurers. These events advance modeling capabilities by improving our understanding of the peril’s physics and damage potential. Users can then often incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s view of risk current – even if the vendor has not yet released its own updated version – and validate enterprise risk management decisions to important stakeholders.

See also: Catastrophe Models Allow Breakthroughs  

When creating a resilient internal modeling strategy, (re)insurers must weigh cost, data security, ease of use and dependability. Complementing a core commercial model with in-house data and analytics and standard formulas from regulators, and reconciling any material differences in hazard assumptions or modeled losses, can help companies of all sizes manage resources. Additionally, the work protects sensitive information, allows access to the latest technology and support networks and mitigates the impact of a crisis to vital assets – all while developing a unique risk profile.

To the extent resources allow, (re)insurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform. On the macro level, unless a company’s underwriting and claims data dominated the vendor’s development methodology, customization is almost always desirable, especially at the bottom of the loss curve, where there is more claims data. If a large insurer with robust exposure and claims data was heavily involved in the vendor’s product development, the model’s vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary. Either way, users should validate modeled losses against historical claims from both their own company and industry perspectives, taking care to adjust for inflation, exposure changes and non-modeled perils, to confirm the reasonability of return periods in portfolio and industry occurrence and aggregate exceedance-probability curves. Without this important step, insurers may find their modeled loss curves differ materially from observed historical results.
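
A simplified sketch of that validation step follows. The loss history, trend assumptions and plotting-position method are illustrative only and would need to reflect the company’s own data and actuarial standards.

```python
# Illustrative validation sketch: trend historical annual losses to current
# cost and exposure levels, derive empirical exceedance probabilities, and set
# them beside modeled occurrence-EP points. All figures are invented.
historical = [                 # (year, reported annual loss in millions)
    (2010, 12.0), (2011, 45.0), (2012, 8.0), (2013, 30.0),
    (2014, 5.0), (2015, 60.0), (2016, 18.0), (2017, 95.0),
]
inflation = 0.02               # assumed annual claims inflation
exposure_growth = 0.03         # assumed annual growth in exposure
current_year = 2018

trended = sorted(
    (loss * (1 + inflation + exposure_growth) ** (current_year - yr)
     for yr, loss in historical),
    reverse=True,
)

n = len(trended)
for rank, loss in enumerate(trended, start=1):
    ep = rank / (n + 1)                      # simple plotting-position estimate
    print(f"~{1 / ep:4.1f}-year return period: {loss:6.1f}m (empirical)")

# Compare the printed empirical points with the modeled occurrence-EP curve at
# the same return periods; material gaps warrant investigating model
# assumptions or non-modeled perils before relying on the curve.
```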

A micro-level review of model assumptions and shortcomings can further narrow the odds of a “shock” loss. As such, it is critical to precisely identify risks’ physical locations and characteristics, as loss estimates may vary widely within a short distance – especially for flood, where elevation is an important factor. When a model’s geocoding engine or a national address database cannot assign location, there are several disaggregation methodologies available, but each produces different loss estimates. European companies will need to be particularly careful regarding data quality and integrity as the new General Data Protection Regulation, which may mean less specific location data is collected, takes effect.

Equally important as location are a risk’s physical characteristics, as a model will estimate a range of possibilities without this information. If the assumption regarding year of construction, for example, differs materially from the insurer’s actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated. Consider an insurer’s actual data versus a model’s assumed year-of-construction distribution based on regional census data in Portugal: if the model assumes an older distribution than the actual data shows, losses on risks with unknown construction years may be overstated.
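
A toy comparison of that kind is sketched below; the age bands and shares are invented and are not the Portuguese data referenced above.

```python
# Hypothetical comparison of an insurer's actual year-of-construction mix with
# a model's assumed (census-based) distribution; all shares are invented.
actual = {"pre-1960": 0.10, "1960-1990": 0.35, "post-1990": 0.55}
assumed = {"pre-1960": 0.30, "1960-1990": 0.45, "post-1990": 0.25}

for band in actual:
    gap = actual[band] - assumed[band]
    direction = "model assumes too much" if gap < 0 else "model assumes too little"
    print(f"{band:>9}: actual {actual[band]:.0%} vs assumed {assumed[band]:.0%} "
          f"({direction}, {abs(gap):.0%})")

# If the model skews older than the book, losses coded with unknown
# construction years will tend to be overstated, as noted above.
```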

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.

See also: How to Vastly Improve Catastrophe Modeling  

Finally, companies must also adjust “off-the-shelf” models for missing components. Examples include overlooked exposures like a detached garage; new underwriting guidelines, policy wordings or regulations; or the treatment of sub-perils, such as a tsunami resulting from an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage – such as when adjusters cannot separate covered wind loss from excluded storm surge loss – can inflate results, and complex events can drive higher labor and material costs or unusual delays. Users must also consider the cascading impact of failed risk mitigation measures, such as the malfunction of cooling generators in the Fukushima nuclear power plant after the Tohoku earthquake.

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

You can find the article originally published on Brink.