
How AI Can Vanquish Bias

Insurance is the business of assessing risks and pricing policies to match. As no two people are entirely alike, that means treating different people differently. But how to segment people without discriminating unfairly?

Thankfully, no insurer will ever use membership in a “protected class” (race, gender, religion…) as a pricing factor. It’s illegal, unethical and unprofitable. But, while that sounds like the end of the matter, it’s not.

Take your garden-variety credit score. Credit scores are derived from objective data that don’t include race and are highly predictive of insurance losses. What’s not to like? Indeed, most regulators allow the use of credit-based insurance scores, and in the U.S. these can affect your premiums by up to 288%. But it turns out there is something not to like: Credit scores are also highly predictive of skin color, acting in effect as a proxy for race. For this reason, California, Massachusetts and Maryland don’t allow insurance pricing based on credit scores.

Reasonable people may disagree on whether credit scores discriminate fairly or unfairly—and we can have that debate because we can all get our heads around the question at hand. Credit scores are a three-digit number, derived from a static formula that weighs five self-explanatory factors.

But in the era of big data and artificial intelligence, all that could change. AI crushes humans at chess, for example, because it uses algorithms that no human could create, and none fully understand. The AI encodes its own fabulously intricate instructions, using billions of bits of data to train its machine learning engine. Every time it plays (and it plays millions of times a day), the machine learns, and the algorithm morphs.

What happens when those capabilities are harnessed for assessing risk and pricing insurance?

Many fear that such “black box” systems will make matters worse, producing the kind of proxies for race that credit scores do but without giving us the ability to scrutinize and regulate them. If five factors mimic race unwittingly, some say, imagine how much worse it will be in the era of big data!

But, while it’s easy to be alarmist, machine learning and big data are more likely to solve the credit score problem than to compound it. You see, problems that arise while using five factors aren’t multiplied by millions of bits of data—the problems are divided by them.

To understand why, let’s think about the process of using data to segment—or “discriminate”—as evolving in three phases.

Phase 1:

In Phase 1 all people are treated as though they are identical. Everyone represents the same risk and is therefore charged the same premium (per unit of coverage). This was commonplace in insurance until the 18th century.

Phase 1 avoids discriminating based on race, ethnicity, gender, religion or anything else for that matter, but that doesn’t make it fair, practical or even legal.

One problem with Phase 1 is that people who are more thoughtful and careful are made to subsidize those who are more thoughtless and careless. Externalizing the costs of risky behavior doesn’t make for good policy, and isn’t fair to those who are stuck with the bill.

Besides, people who are better-than-average risks will seek lower prices elsewhere, leaving the insurer with average premiums but riskier-than-average customers (a problem known as “adverse selection”). That doesn’t work.

Finally, best intentions notwithstanding, Phase 1 fits the legal textbook definition of “unfair discrimination.” The law mandates that, subject to “practical limitations,” a price is “unfairly discriminatory” if it “fails to reflect with reasonable accuracy the differences in expected losses.” In other words, within the confines of what’s practical, insurers must charge each person a rate that’s proportionate to the person’s risk.

Which brings us to Phase 2.

Phase 2:

Phase 2 sees the population divided into subgroups according to their risk profile. This process is data-driven and impartial, yet, as the data are relatively basic, the groupings are relatively crude. Phase 2—broadly speaking—reflects the state of the industry today, and it’s far from ideal.

Sorting with limited data generates relatively few, large groups—and two big problems.

The first is that the groups may serve as proxies for protected classes. Take gender. Imagine, if you will, that women are—on average—better risks than men (say the average risk score for a woman is 40, on a 1-100 scale, and 60 for a man). We’d still expect many individual women to be riskier than the overall average, and many individual men to be safer than it.

So while crude groupings may be statistically sound, Phase 2 might penalize low-risk men by tarring all men with the same brush.
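To get a feel for how much overlap such group averages can hide, here is a minimal simulation sketch in Python; the distributions (normal, with a standard deviation of 15) and sample sizes are assumptions invented for illustration, not figures from the example above.

```python
# Hypothetical sketch: how much individual variation hides behind group averages.
# The risk-score distributions (normal, sd=15, means of 40 and 60) are invented.
import random

random.seed(0)

def clamp(score, low=1, high=100):
    """Keep simulated scores on the 1-100 scale used in the example."""
    return max(low, min(high, score))

women = [clamp(random.gauss(40, 15)) for _ in range(100_000)]
men = [clamp(random.gauss(60, 15)) for _ in range(100_000)]

overall_mean = (sum(women) + sum(men)) / (len(women) + len(men))  # roughly 50

share_riskier_women = sum(s > overall_mean for s in women) / len(women)
share_safer_men = sum(s < overall_mean for s in men) / len(men)

print(f"overall average risk score: {overall_mean:.1f}")
print(f"women riskier than the overall average: {share_riskier_women:.0%}")
print(f"men safer than the overall average:     {share_safer_men:.0%}")
```

Under these made-up assumptions, roughly a quarter of each group lands on the “wrong” side of the overall average, and that is exactly the population a uniform, gender-wide rate would misprice.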

The second problem is that—even if the groups don’t represent protected classes—responsible members of the group are still made to pay more (per unit of risk) than their less responsible compatriots. That’s what happens when you impose a uniform rate on a nonuniform group. As we saw, this is the textbook definition of unfair discrimination, which we tolerate as a necessary evil, born of practical limitations. But the practical limitations of yesteryear are crumbling, and there’s a four-letter word for a “necessary evil” that is no longer necessary…

Which brings us to Phase 3.

Phase 3:

Phase 3 continues where Phase 2 ends: breaking monolithic groups into subgroups. Phase 3 does this on a massive scale, using orders of magnitude more data, which machine learning crunches to produce very complex multivariate risk scores. The upshot is that today’s coarse groupings are relentlessly shrunk, until—ultimately—each person is a group of one. A grouping that in Phase 2 might be a proxy for men, and scored as a 60, is now seen as a series of individuals, some with a risk score of 90, others of 30 and so forth. This series still averages a score of 60—but, while that average may be applied to all men in Phase 2, it’s applied to none of them in Phase 3.

In Phase 3, large groups crumble under the weight of the data and the crushing power of the machine. Insurance remains the business of pooling premiums to pay claims, but now each person contributes to the pool in direct proportion to the risk the person represents—rather than the risk represented by a large group of somewhat similar people. By charging every person the same, per unit of risk, we sidestep the inequity, illegality and moral hazard of charging the careful to pay for the careless, and of grouping people in ways that serve as a proxy for race, gender or religion. It’s like we said: Problems that arise while using five factors aren’t multiplied by millions of bits of data—the problems are divided by them.
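To make the arithmetic concrete, here is a minimal Python sketch contrasting Phase 2 group pricing with Phase 3 individual pricing; the risk scores and the rate per unit of risk are hypothetical numbers chosen to echo the example above, not anything prescribed by the article.

```python
# Hypothetical illustration of Phase 2 (group-average) vs. Phase 3 (individual) pricing.
# The risk scores and the rate per risk point are invented for the example.

RATE_PER_RISK_POINT = 10.0  # assumed: dollars of premium per point of risk

individual_risk_scores = [90, 30, 60]  # one "group" whose members differ widely

# Phase 2: everyone in the group pays the group-average rate.
group_average = sum(individual_risk_scores) / len(individual_risk_scores)  # 60
phase2_premiums = [group_average * RATE_PER_RISK_POINT for _ in individual_risk_scores]

# Phase 3: each person pays in proportion to their own risk.
phase3_premiums = [score * RATE_PER_RISK_POINT for score in individual_risk_scores]

for score, p2, p3 in zip(individual_risk_scores, phase2_premiums, phase3_premiums):
    print(f"risk score {score:3d}: Phase 2 premium ${p2:7.2f} | Phase 3 premium ${p3:7.2f}")

# The pool collects the same total either way; only the cross-subsidy disappears.
print(f"total collected: Phase 2 ${sum(phase2_premiums):.2f}, Phase 3 ${sum(phase3_premiums):.2f}")
```

The point of the sketch is the last line: the pool is funded either way, but in Phase 3 the 30-point individual no longer subsidizes the 90-point individual.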

Insurance Can Tame AI

It’s encouraging to know that Phase 3 has the potential to make insurance fairer, but how can we audit the algorithm to ensure it actually lives up to this promise? There’s been some progress toward “explainability” in machine learning, but, without true transparency into that black box, how are we to assess the impartiality of its outputs?

By their outcomes.

But we must tread gingerly and check our intuitions at the door. It’s tempting to say that an algorithm that charges women more than men, or black people more than white people, or Jews more than gentiles is discriminating unfairly. That’s the obvious conclusion, the traditional one, and—in Phase 3—it’s likely to be the wrong one.

Let’s say that I am Jewish (I am) and that part of my tradition involves lighting a bunch of candles throughout the year (it does). In our home, we light candles every Friday night and every holiday eve, and we’ll burn through about 200 candles over the eight nights of Hanukkah. It would not be surprising if I, and others like me, represented a higher risk of fire than the national average. So, if the AI charges Jews, on average, more than non-Jews for fire insurance, is that unfairly discriminatory?

It depends.

It would definitely be a problem if being Jewish, per se, resulted in higher premiums whether or not you’re the candle-lighting kind of Jew. Not all Jews are avid candle lighters, and an algorithm that treats all Jews like the “average Jew” would be despicable. That, though, is a Phase 2 problem.

A Phase 3 algorithm that identifies people’s proclivity for candle lighting, and charges them more for the risk that this penchant actually represents, is entirely fair. The fact that such a fondness for candles is unevenly distributed in the population, and more highly concentrated among Jews, means that, on average, Jews will pay more. It does not mean that people are charged more for being Jewish.

It’s hard to overstate the importance of this distinction. All cows have four legs, but not all things with four legs are cows.

The upshot is that the mere fact that an algorithm charges Jews—or women, or black people—more on average does not render it unfairly discriminatory. Phase 3 doesn’t do averages. Like Dr. Martin Luther King, we dream of living in a world where we are judged by the content of our character. We want to be assessed as individuals, not by reference to our racial, gender or religious markers. If the AI is treating us all that way, as individuals, then it is being fair. If I’m charged more for my candle-lighting habit, that’s as it should be, even if the behavior I’m being charged for is disproportionately common among Jews. The AI is responding to my fondness for candles (which is a real risk factor), not to my tribal affiliation (which is not).

So if differential pricing isn’t proof of unfair pricing, what is? What outcome is the telltale sign of unfair discrimination in Phase 3?

Differential loss ratios.

The “pure loss ratio” is the ratio of the dollars an insurance company pays out in claims to the dollars it collects in premiums. If an insurance company charges all customers a rate proportionate to the risk they pose, this ratio should be constant across its customer base. We’d expect to see fluctuations among individuals, sure, but once we aggregate people into sizable groupings—say by gender, ethnicity or religion—the law of large numbers should kick in, and we should see a consistent loss ratio across such cohorts. If that’s the case, it would suggest that even if certain groups—on average—are paying more, those higher rates are fair, because they represent commensurately higher claim payouts. A system is fair—by law—if each of us is paying in direct proportion to the risk we represent.

This is what the proposed Uniform Loss Ratio (ULR) test tests. It puts insurance in the enviable position of being able to keep AI honest with a simple, objective and easily administered test.
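As a rough sketch of how such a check could be run (the cohort labels, policy figures and tolerance below are hypothetical, and a real test would use statistical significance rather than a fixed threshold):

```python
# Minimal sketch of a Uniform Loss Ratio (ULR) style check.
# The policy records, cohort labels and tolerance are hypothetical.
from collections import defaultdict

# Each record: (cohort label, premium collected, claims paid out)
policies = [
    ("cohort_a", 1200.0, 700.0),
    ("cohort_a",  900.0, 500.0),
    ("cohort_b", 1500.0, 880.0),
    ("cohort_b", 1100.0, 640.0),
]

premiums = defaultdict(float)
claims = defaultdict(float)
for cohort, premium, paid in policies:
    premiums[cohort] += premium
    claims[cohort] += paid

overall_ratio = sum(claims.values()) / sum(premiums.values())
TOLERANCE = 0.05  # assumed: allowable drift of a cohort's loss ratio from the overall ratio

for cohort in sorted(premiums):
    ratio = claims[cohort] / premiums[cohort]
    verdict = "uniform" if abs(ratio - overall_ratio) <= TOLERANCE else "REVIEW: possible unfair pricing"
    print(f"{cohort}: loss ratio {ratio:.3f} vs. overall {overall_ratio:.3f} -> {verdict}")
```

A cohort whose loss ratio sits persistently below everyone else’s is paying in more per dollar of claims than the rest of the book, which is the telltale sign discussed next.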

It is possible, of course, for an insurance company to charge a fair premium but then have a bias when it comes to paying claims. The beauty of the ULR test is that such a bias would be readily exposed. Simply put, if certain groups have a lower loss ratio than the population at large, that would signal that they are being treated unfairly. Their rates are too high, relative to the payout they are receiving.

ULR helps us overcome another major concern with AI. Even though machines do not have inherent biases, they can inherit biases. Imagine that the machine finds that people who are arrested are also more likely to be robbed. I have no idea whether this is the case, but it wouldn’t be a shocking discovery. Prior run-ins with the police would, in this hypothetical, become a legitimate factor in assessing property-insurance premiums. So far, so objective.

The problem arises if some of the arresting officers are themselves biased, leading—for example—to an elevated rate of black people being arrested for no good reason. If that were the case, the rating algorithm would inherit the humans’ racial bias: A person wouldn’t pay more insurance premiums for being black, per se, but the person would pay more for being arrested—and the likelihood of that happening would be heightened for black people.

While my example is hypothetical, the problem is very real. Worried about AI-inherited biases, many people are understandably sounding the retreat. The better response, though, is to sound the advance.

You see, machines can overcome the biases that contaminate their training data if they can continuously calibrate their algorithms against unbiased data. In insurance, ULR provides such a true north. Applying the ULR test, the AI would quickly determine that having been arrested isn’t equally predictive of claims across the population. As data accumulate, the “been arrested” group would subdivide, because the AI would detect that for certain people being arrested is less predictive of future claims than it is for others. The algorithm would self-correct, adjusting the weighting of this datum to compensate for human bias.
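Here is a minimal sketch of that feedback loop; the subgroup labels, loss ratios, surcharge and update rule are all invented to illustrate the idea of recalibrating one rating factor against a loss-ratio benchmark, not an actual production algorithm.

```python
# Hypothetical sketch of ULR-style self-correction for a single rating factor.
# Subgroup labels, loss ratios, the surcharge and the update rule are invented.

target_loss_ratio = 0.60  # the insurer's overall loss ratio (assumed)

# Observed loss ratios inside the "been arrested" rating group, split by a
# protected attribute that the calibration step is allowed to see.
subgroup_loss_ratios = {"subgroup_x": 0.48, "subgroup_y": 0.62}

current_surcharge = 1.25   # premium multiplier currently applied to the whole group
LEARNING_RATE = 0.5        # assumed: how aggressively to correct per calibration cycle

for subgroup, observed in subgroup_loss_ratios.items():
    # A loss ratio below the target means this subgroup is overcharged relative to
    # the claims it actually generates, so the surcharge applied to it should shrink.
    correction = observed / target_loss_ratio
    adjusted = 1.0 + (current_surcharge - 1.0) * (1.0 + LEARNING_RATE * (correction - 1.0))
    print(f"{subgroup}: surcharge {current_surcharge:.3f} -> {adjusted:.3f}")
```

Run repeatedly as data accumulate, a rule like this splits the single “been arrested” surcharge into subgroup-specific ones, pulling each toward the loss experience it actually produces.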

(When a system is accused of bias, the go-to defense runs something like: “But we don’t even collect information on gender, race, religion or sexual preference.” Such indignation is doubly misplaced. For one, as we’ve seen, systems can be prejudiced without direct knowledge of these factors. For another, the best way for ULR-calibrated systems to neutralize bias is to actually know these factors.)

Bottom line: Problems that arise while using five factors aren’t multiplied by millions of bits of data—the problems are divided by them.

The Machines Are Coming. Look Busy.

Phase 3 doesn’t exist yet, but it’s a future we should embrace and prepare for. That requires insurance companies to redesign their customer journey to be entirely digital and to reconstitute their systems and processes on an AI substrate. In many jurisdictions, the way insurance pricing is regulated must also be rethought. Adopting the ULR test would be a big step forward. In Europe, the regulatory framework could become Phase-3-ready with minor tweaks. In the U.S., filing rates as a simple, static multiplication chart for human review doesn’t scale as we move from Phase 2 to Phase 3. At a minimum, regulators should allow these lookup tables to include a column for a black-box “risk factor.” The ULR test would ensure this never causes more harm than good, while the additional pricing factor would enable emerging technologies to benefit insurers and insureds alike.

Nice to Meet You

When we meet someone for the first time, we tend to lump them with others with whom they share surface similarities. It’s human nature, and it can be unfair. Once we learn more about that individual, superficial judgments should give way to a merits-based assessment. It’s a welcome progression, and it’s powered by intelligence and data.

What intelligence and data have done for humanity throughout our history, artificial intelligence and big data can start to do for the insurance industry. This is not only increasingly possible as a matter of technology, it is also desirable as a matter of policy. Furthermore, as the change will represent a huge competitive advantage, it is also largely inevitable. Those who fail to embrace the precision underwriting and pricing of Phase 3 will ultimately be adversely selected out of business.

Insurance is the business of assessing risks, and pricing policies to match. As no two people are entirely alike, that means treating different people differently. For the first time in history, we’re on the cusp of being able to do precisely that.

Jurors and Questions on Insurance Coverage

For most potential jurors, questions of insurance coverage do not usually arise in everyday conversation. Seldom cut and dried, and usually subject to numerous definitions and intricacies, coverage issues can be boring and puzzling even for an experienced adjuster. Asking a lay person to classify an “occurrence” as defined by a policy, or to decide whether a third party is covered as an additional insured, may prompt, at best, glazed-over eyes or, even worse, negative commentary about insurance companies. While it may be best in some situations for a judge to determine the issue of insurance coverage, this is not always possible. Sometimes coverage questions arise in litigation, and the people interpreting policy language and determining the outcome are jurors. If jurors are deciding the issues, certain challenges arise, such as how to clarify policy language, present a clear and concise argument, and overcome negative preconceptions about the insurance industry.

Can the Judge Decide Coverage Issues?

In Louisiana, general rules regarding issues that are triable by a jury are set forth in Louisiana Code of Civil Procedure articles 1731 – 1736. These establish the general rule that a demand for a trial by jury will result in a trial by jury of all issues. However, exceptions to the general rule exist when: (a) the parties stipulate that the jury trial shall be as to certain issues only; (b) a party in his demand specifies the issues to be tried by a jury; or (c) the right to trial by jury as to certain issues does not exist. Where a jury trial has been demanded by one or both parties, the case must be tried by a jury unless both parties consent to trial without a jury or the trial court finds that a right to a trial by jury does not exist.

More particularly, La. C.C.P. art. 1562(D) specifically codified the general principle found in La. C.C.P. art. 1736, which requires a stipulation between the parties, or their consent, before the trial judge can order that insurance coverage issues be tried separately, with the “court alone” deciding the issue of insurance coverage.

La. C.C.P. art. 1562(D) states:

“If it would simplify the proceedings or would permit a more orderly disposition of the case or otherwise would be in the interest of justice, at any time prior to trial on the merits, the court may order, with the consent of all parties, a separate trial on the issue of insurance coverage, unless a factual dispute that is material to the insurance coverage issue duplicates an issue relative to liability or damages. The issue of insurance coverage shall be decided by the court alone, whether or not there is to be a jury trial on the issue of liability or damages.”

The leading case on the subject is Citgo Petroleum Corp. v. Yeargin, Inc., 95-1574 (La. App. 3 Cir. 7/3/96), 678 So.2d 936, writ granted, remanded, 96-2000 (La. 11/15/96), 682 So.2d 746 and 96-2007 (La. 11/15/96), 682 So.2d 747. There, the court stated that La. C.C.P. art. 1562(D) provided that, if principles of judicial efficiency or justice would be served, the court may order a separate trial on the issue of insurance coverage. However, the trial judge’s discretion is not unfettered. The judge’s ability to take the issue away from the jury is severely restricted because, under the article, all of the following conditions must be met: (1) the separate trial would simplify the proceedings, permit a more orderly disposition of the case, or be in the interest of justice; (2) all parties consent; (3) no factual dispute material to the coverage issue duplicates an issue relative to liability or damages; and (4) the order is rendered before trial on the merits.

Therefore, the requirements set forth in the article effectively leave the judge with no discretion, because the article requires the consent of all parties. The court further noted that, while the issue of coverage under an insurance policy is a narrow issue of law between the alleged insured and the insurer, a jury is not prohibited, by statute or otherwise, from deciding this issue. Further, there is no exception to the right to trial by jury for issues that the trial judge may think are too technical or too complex for the jury to understand. Even if the trial judge believes that he is more capable than the jury of deciding the issue of coverage, he cannot take the issue away from the jury once it is included within the scope of issues for which a jury trial was requested, unless the conditions of La. C.C.P. art. 1562(D) are met.

As such, if a trial by jury has been requested, but an insurer is presenting technical questions of coverage and believes that a judge would be best suited to decide the coverage issue, a stipulation or the consent of all parties would be necessary before the judge could take the coverage issue away from the jury. Unfortunately, often the consent of all parties to separately try the coverage issue cannot be obtained, and the insurer is left with a jury to decide intricate and potentially costly coverage issues.

Selecting the Best Jury for Your Coverage Case

If coverage issues must be decided by a jury, the persons who make up that jury can make a difference in the outcome of the case. Questioning prospective jurors in voir dire about their current insurance policies and other contracts can provide some insight into how they view insurance companies and the potential for coverage. People often believe that they are “fully covered” under their insurance policies, and that insurers are large, prosperous companies that should be able to “help out” individuals. However, further questioning can reveal that potential jurors do understand that there are limitations as to what is covered under certain policies and what has been negotiated.

Questioning a potential juror about a policy he may currently have in place, whether that policy has a limit and if he understands that the insurance company would not be required to pay more than that limit, can show that the potential juror does understand some limitations to coverage. Additional questions may involve who the current policies provide coverage to and the limitations on that coverage. Even simple, and almost obvious, questions can help illustrate a potential juror’s understanding of coverage limitations. For example, discussing how an automobile policy might provide coverage for certain damage to an owned vehicle but would not cover general maintenance, oil changes or a monthly car payment can help provide insight into whether an individual may be able to understand the issues and be a constructive juror.

Additionally, general questions about potential jurors’ opinions of insurance companies, their personal claims experiences, or the impressions of insurers they have formed from the media can provide insight into whether a potential juror might be favorable or undesirable from the insurer’s standpoint.

Presentation at Trial – Concise and Comprehensible

After a jury has been selected, helping jurors understand and follow the language and logic of the coverage argument is vital. The following tips may help simplify the coverage case and overcome obstacles when faced with presenting coverage issues to a jury.

1. Walk Jurors Through the Basics

Although often complex, insurance policies are simply contracts. They define a relationship between parties and outline who will do what, when and under what circumstances. Presenting the insurance policy as a simple contract, by identifying the promise between the parties and what each may receive in exchange for their promise, may help jurors be less apprehensive when approaching coverage issues.

A good place to start is with the basics of the policy and how it is structured. Discussing the declarations, insuring agreement, exclusions, definitions, conditions and endorsements allows jurors to get comfortable with the policy. After the policy and its purpose are explained, the specific provisions at issue can be addressed. An effective way to do this is by using demonstrative evidence, such as blowups of certain pages or PowerPoint presentations illustrating specific language and what it means. Presenting the policy through large exhibits helps break down its technicality for jurors and shows that it is a logical and consistent contract.

Further, preparing an exhibit that names the individuals involved in drafting the policy, investigating and adjusting the claim, and making the coverage decision, and that lists their experience, shows that real people put time and thought into creating a well-organized document and reaching a well-thought-out coverage decision.

2. Humanize the Issues

Jurors often bring their own experiences to the courtroom and, sometimes, a bad impression of insurance companies. Further, coverage disputes are often coupled with bad faith claims, exacerbating the notion that insurance companies are malicious. To overcome these preconceived notions and prejudices, it is key to humanize the insurer’s operations and show the jurors that real people drafted the policies and handled the claims. Showing that the insurer is not just a large, faceless corporation, but a group of individuals making decisions and doing their jobs, will help counter the insured’s portrayal of an uncaring, profit-seeking business entity. While testimony from a vice president may be impressive, the agent who issued the policy or the adjuster who handled the claim may put a more relatable face on the company.

Additionally, many insurers have adopted vision statements outlining a code of ethics or a commitment to the community. Using this at trial, and showing how the company is committed to its values or involved in the community, helps dispel negative ideas of an uncaring corporation.

Lastly, insurers should be careful about attacking the insured’s credibility or positions. While such an attack may be necessary, the way it is presented to the jury can have a big impact and can inadvertently reinforce negative ideas about the insurance company.

3. Show All Negotiations

Jurors will generally understand the concept of “you get what you pay for.” They know that if they contracted with their cable company and pay for only the basic channels, they do not get premium channels, such as HBO. It follows that jurors should understand that if underwriting documents or other evidence show what was discussed and understood between the parties, and this is reflected in the contract, this should be what governs. If evidence of negotiations is available, this should be presented to the jury. This concept may be particularly helpful in litigating commercial policies, where there is usually more negotiation, and in showing the application of policy exclusions.

4. Keep It Simple

As a general rule, the simpler the better. It is important to keep the insurance policy language from sounding too technical. Avoid overuse of legal terms and phrases, as this will only confuse jurors and may cause them to fall back on the generally accepted legal principle that “any ambiguity must be construed against the insurer.” A straightforward presentation, relying on only one or two strong coverage arguments, should be used. Presenting every argument possible is not always the best strategy, as this could bog down the jury and cause them to lose focus. When one or two key arguments are made, the case is tight and allows jurors to concentrate on the big picture, rather than trying to follow several moving parts.

Another tactic that may help bring the issues to a comfortable level is to compare the policy to other contracts jurors may have entered into. Outlining the limits and duties imposed by contracts that jurors may be more familiar with, such as a purchase agreement for a car, or a lease agreement for an apartment, may also help jurors realize that there are also limitations and duties imposed by insurance contracts, just like the contracts with which they are more familiar.

Additionally, working backward from the result being sought provides a road map for a streamlined argument and helps create a unifying theme throughout the litigation. Starting from the verdict form or jury instructions helps to keep concentration on the elements that need to be established or explained.

5. Apply Basic Jury Concepts

Basic concepts of persuasion, which apply to all jury litigation, can also be used effectively in a coverage case. Fairness must be stressed and run as a theme throughout the presentation of the coverage case. Jurors want to be fair and will try their best to do so. Additionally, any obvious weaknesses in the case should be addressed. Holes in the case, if not admitted to or explained, will create doubt.

Presenting a coverage case to a jury is sometimes unavoidable, but need not be too difficult or incomprehensible for jurors. Carefully questioning and selecting potential jurors, along with presenting a simple yet logical argument, while humanizing the insurance company, can help achieve a successful presentation of the case in the courtroom and, with that, a successful result.

Montana Clarifies Notice-Prejudice

On May 29, 2015, the Montana Supreme Court affirmed the application of the notice-prejudice rule in cases of third-party claims for damages. Atlantic Casualty Ins. Co. v. Greytak, 2015 MT 149, OP 14-0412 (Mt. 2015). The rule requires the insurer to establish prejudice as a condition to denying coverage when an insured fails to provide timely notice of a claim.

Background

This case arose from a lawsuit initiated by GTL Inc. against John P. Greytak and Tanglewood Investors Limited Partnership (collectively, Greytak), based on Greytak’s failure to pay GTL for obligations arising from a construction project. In response, Greytak filed construction defect counterclaims against GTL. Greytak and GTL later entered into a settlement whereby GTL would notify its insurer, Atlantic Casualty Insurance Co. (Atlantic), of Greytak’s claims. According to the agreement, if Atlantic did not defend GTL or initiate a declaratory judgment action regarding coverage, then GTL would allow a $624,685.14 judgment to be entered against it and Greytak would pursue Atlantic only for recovery of the judgment. GTL notified Atlantic of Greytak’s counterclaims approximately one month after the agreement with Greytak and approximately one year after GTL first received notice of Greytak’s potential counterclaims.

Atlantic initiated an action in the U.S. District Court for the District of Montana seeking a declaration as to whether it was required to defend or indemnify GTL. The District Court granted Atlantic’s motion for summary judgment and found that (a) Atlantic did not receive timely notice of Greytak’s claims against GTL and that (b) Montana law did not mandate Atlantic to demonstrate prejudice from GTL’s untimely notice. Greytak subsequently appealed to the U.S. Court of Appeals for the Ninth Circuit, which certified the question regarding the application of the notice-prejudice rule in the third-party liability context to the Montana Supreme Court.

Montana Supreme Court Decision

The Supreme Court followed the majority of jurisdictions, as well as its own ruling from a week earlier adopting the notice-prejudice rule in the first-party context, and held that prejudice must be demonstrated to deny coverage when an insured provides untimely notice of a claim. The court reasoned that the purpose of the notification requirement was to provide the insurer with the opportunity to “defend its interest and to prevent or mitigate adverse judgments.” Additionally, the court noted that Montana public policy required a narrow and strict interpretation of insurance coverage exclusions to accomplish the “fundamental protective purpose” of insurance.

Despite discussing the rationale of the rule, which includes mitigating adverse judgments, the court declined to address the merits of the insurer’s claims of prejudice, reasoning that such determination was outside the scope of the certified question. Significantly, however, two justices issued separate specially concurring opinions, which effectively concluded that when an insurer receives notice of a claim almost a year after the insured engaged in litigation, executed a settlement agreement without the insurer’s knowledge and deprived the insurer of any opportunity to defend its interest and to prevent or mitigate adverse judgments, prejudice is presumed as a matter of law. Moreover, in her special concurrence, Justice Laurie McKinnon proposed a limited exception to the notice-prejudice rule to provide that prejudice to the insurer would be presumed as a matter of law when an insured failed to notify the insurer of a pending lawsuit until after judgment has been entered. 

Implications of the Decision

As a result of the Montana Supreme Court’s holding, Montana courts affirmatively join the majority of jurisdictions holding that the notice provision of an insurance policy is essentially ineffective as a basis for denying coverage for late notice of a claim, unless the insurer can demonstrate that it was prejudiced by the untimely notice. Nevertheless, based on the Supreme Court’s analysis, if the insurer can establish that it was deprived of the opportunity to defend its interest and to prevent or mitigate adverse judgments, or that the delay was not merely technical, then there is a sufficient basis to deny coverage.

The court did not specifically state whether its holding was limited to occurrence-based policies, but it quoted the “as soon as practicable” notice language from a typical commercial general liability policy and noted in a footnote that this language does not impose a specific time within which the insured must provide notice. Thus, whether the court would apply the notice-prejudice rule to claims-made-and-reported policies remains an open question under Montana law; given the court’s footnote, however, it appears likely that the court would join the majority of jurisdictions that do not require an insurer to demonstrate prejudice resulting from late notice under a claims-made-and-reported policy.