How to Get Ahead of the Watchdogs

The compliance and ethics functions within insurance organizations face continued regulatory pressure. But, nowadays, they must also deal with new threat vectors that are shaping a higher-stakes global compliance environment. More and more, investigative journalists are analyzing big data to spot fraud as well as compliance violations. Third-party agencies are increasingly using technology to identify incidents and monitor corporate behavior. Enforcement agency whistleblower programs are motivating employees to speak out about perceived violations. And, rapidly escalating grassroots campaigns, such as the #metoo movement, are making strong corporate culture and rapid-response capabilities even more critical. When these watchdogs form the genesis of a complaint, social media channels and the round-the-clock news cycle can rapidly increase awareness of the incident – in some cases even before the company itself is aware.

Compliance functions need the agility to adjust to business changes and to the inevitable surprises inherent in a dynamic business climate. But, without a strong technological underpinning to help them operate efficiently in real time, it will be challenging, if not impossible, to get ahead of new threat sources and changing business dynamics. From dashboards for improved decision-making, to sophisticated tools for monitoring employee compliance, to training informed with data from compliance monitoring, technology-based capabilities are now cornerstones of effective compliance management. By using the best available tools and information to protect their organizations and to scan the horizon for new requirements, trends and risks, compliance functions can keep pace with their organizations’ changing compliance needs.

But as a group, insurance sector compliance functions have some work to do on the technology front. According to the PwC 2018 State of Compliance study, only 41% of insurance organizations use policy management technology within the compliance department (compared with 44% across industries and 54% in banking, for example). Just 47% use technology to monitor employees’ compliance with ethics and compliance-related policies and procedures (compared with 50% across industries and 52% in banking). While progress is being made, it lags that of certain other industries.

See also: How to Collaborate With Insurtechs  

However, our study identified 17% of insurance survey respondents as “Leaders,” where executives were very satisfied with the effectiveness of their organization’s compliance program. This is on par with other industries in the study. The study’s overall Leader group shares a common denominator: Leaders take a more comprehensive and current approach to compliance risk management as enabled by technology. Leaders differ substantially from their peers in many of the operational aspects of compliance risk management, including executing differently in four key ways.

Leaders invest in tech-enabled infrastructure to support a modern, data-driven compliance function. Technology helps organizations manage compliance in a dynamic and expansive risk universe. Leaders use data analytics tools, dashboards and continuous monitoring more often than their peers. More than half (54%) of Leaders in the study use data analysis tools, and nearly half have dashboards (49%) and engage in continuous compliance monitoring (48%). The effective use of cloud infrastructure, machine learning, advanced analytics and natural-language processing helps organizations quickly analyze vast amounts of data and gain insights into business and customer behaviors, assess potential compliance issues and cost-effectively meet risk and regulatory challenges.

Leaders increase compliance-monitoring effectiveness through the use of technology and analytics. Analytics, together with automation technologies, make the continuous monitoring of employee compliance across many areas of the business far more feasible. Two-thirds (66%) of Leaders use technology to monitor employees’ compliance with ethics- and compliance-related policies and procedures. And they more often use technology to monitor specific risk categories, such as fraud, gifts and entertainment, privacy, social media and trade compliance. Leaders are also gleaning more benefits from technology use in monitoring efforts – compared with their less effective peers, they are more responsive and even proactive in mitigating compliance issues.

Leaders streamline policy management to increase responsiveness and boost policy and procedure effectiveness. Leaders take several steps to strengthen their policy management. They more often keep their codes of conduct, policies and procedures current and make them easily accessible across the organization. They also more often enable this streamlining through policy management technology, such as GRC tools, and measure the effectiveness of policies and procedures more comprehensively. Nearly two-thirds use technology to facilitate the policy management process.

Leaders take advantage of information and technology to provide targeted, engaging and up-to-date compliance training. Leaders’ compliance training and communications are more comprehensive and current. They are often using multiple sources of information to inform and target their training and are thinking creatively about new ways to digitally engage employees in training activities. Leaders’ approaches to training positively affect their organizations’ overall risk profile as they aim to minimize activities that potentially place the organization at higher risk.

See also: Guide for Insurtech Work With Carriers  

Effective compliance risk management must be grounded in strategy and business engagement. Establishing the right tone at the top, assessing compliance and ethics risks and building governance structures that provide high levels of confidence in regulatory matters are all critical to effective compliance leadership. But operational aspects of compliance are where the rubber meets the road. With multiple new, highly motivated watchdogs now providing their own forms of oversight, the case for strengthening compliance risk management through technology is strong. Technology is more critical than ever in building programs that boost compliance program value, better manage risks and drive cost-effective compliance.

How to Get Insurance Viewed as Profession

As it is every year, March is Ethics Awareness Month for the insurance industry. Click here for my March 2017 article on this, though the poll I cite does not appear to have been updated for recent public opinion about the insurance industry. The CPCU Society usually leads the way on this, and you can get more information here. I wrote about the CPCU code of ethics in an article I called “The 7 Habits of Insurance Professionals.”

Those who view insurance as a career rather than a job probably think of themselves as professionals. As to what specifically constitutes a “professional,” here are some criteria from Ron Horn in an old CPCU text:

7 Characteristics of a Profession

  1. Commitment to high ethical standards
  2. Prevailing attitude of altruism
  3. Mandatory educational preparation
  4. Mandatory continuing education
  5. Formal association or society available
  6. Independence to make decisions
  7. Public recognition as a profession

Source: “On Professions, Professionals, and Professional Ethics” by Ronald C. Horn

Based on these criteria, someone CAN be a “professional” in the insurance industry. The biggest stumbling block might be #7 above. Does the typical consumer view, for example, the typical insurance agent as a “professional” akin to their perception of a doctor, attorney, accountant or perhaps clergy member? The answer is almost certainly a resounding “No”… until their insurance claim is denied. At that time, the plaintiff will almost assuredly try to convince a judge or jury that the agent owed a higher standard of care as a professional in his or her field.

See also: Will Insurance Ever See a ‘Killer App’?  

So, how do we begin the process of changing this unprofessional view of our industry, aside from voicing our displeasure with the incessant price-focused shilling that passes for advertising in much of the media?

The above was largely excerpted from my forthcoming book “When Words Collide: Resolving Insurance Coverage and Claims Disputes.”

Could AI Transform Insurance Ethics?

Could AI be used by regulators to test how committed insurance executives are to building trust with policyholders? Artificial intelligence is transforming the relationship between insurer and insured. And it’s now being used in ways that could transform the relationship between insurer and regulator. It has implications for public trust and executive careers.

It has emerged that a large investment management firm has been using an AI-based form of voice analysis to test the confidence of the chief executives of the companies in which it holds significant stakes. Called “affect analysis,” it’s being used to detect any disconnects between what a chief executive is saying and the level of confidence with which she’s saying it. The feedback could be used to pinpoint weaknesses around which further questions are raised, or simply to automatically adjust an investment or research recommendation.

See also: Strategist’s Guide to Artificial Intelligence

The idea behind this approach should not be new to insurers. They’ve been using it for some time to analyze how claimants describe the circumstances of their loss, listening for vocal indicators of a potential fraudster. I experienced such analysis in 2016 while making a claim for a lightning strike on my home.

So what has this to do with the ethics of insurance, then? Well, if an investment manager can analyze the voice of a senior executive in this way, why shouldn’t the regulator do something similar with the same people? The regulator could ask senior executives to talk about their plans, activities and achievements relating to ethical issues like integrity and fairness.

Given that senior executives and key decision makers in U.K. insurance will soon be subject to new regulations that emphasize their individual accountability for ethical culture within their firms, this step would simply be taking an established practice within the sector and applying it to new ends.

A lot will, of course, depend on the questions you ask. If these focus on belief and commitment, then scores could be quite high, but if they focus on actions and outcomes, then some people might struggle.

And remember that U.K. insurers needn’t wait for the regulator on this. The Senior Managers and Certification Regime requires insurers to undertake their own integrity assessment of senior managers and key decision makers. Perhaps affect analysis could form part of that assessment? The results could then be used to configure personal performance plans and learning schedules.

I wrote back in 2015 about the rise of panoptic regulation, in which regulators access and analyze a continual stream of real-time decision data from insurers. Putting artificial intelligence to use in this way would be a small but significant part of that wider development, providing regulators with critical insight into the tone from the top in a particular firm.

See also: Why AI Will Eat Insurance  

Perhaps the biggest signal the insurance market could take from developments like this would be that of a regulator becoming more sophisticated, prepared to get more under the skin of those they’re dealing with. Just like insurers are, some might say, in their relationships with policyholders and claimants.

One word of warning, though. It is particularly important that the algorithms underlying this branch of artificial intelligence are properly trained. If that training has been carried out on the voices of the white, male executives who have largely dominated the board rooms of insurance firms to date, then this sort of AI-based analysis would turn into a barrier for the various diversity initiatives underway in the insurance sector at the moment.

20 Likely Changes in Ethics on Claims

Insurance is changing in ways that have profound implications for claims. Some claims practices will become redundant. Questions only occasionally raised before will now become common. New skills will have to be learned.

It’s all very exciting, but also a little daunting. Clearly, the way we think about claims will change, but, at the same time, certain constants will remain: settling claims honestly and fairly, for example.

So what are the changes that have implications for the ethics of insurance claims? I want to look at 20 changes that I think will be significant in terms of the ethical challenges facing claims people.

The “Ask It Never” Policy

As insurers turn from asking questions of the policyholder about the risk to be insured and instead obtain that information through big data, the time of “no questions at all” will approach. What will happen to claims then? If no questions are asked, then non-disclosure becomes obsolete, as does the whole idea of material facts. What will be left for the claims team to review or decide upon?

The Personalized Policy

A personalized policy will, by its very nature, mean that a claim made upon it results in an increase in premium. As the public increasingly senses this, how will it influence the way claimants approach their claims? Should claims people warn potential claimants that a claim will raise their premium? Some claimants will self-fund small, valid claims, although those spending patterns will then be picked up by insurers, which could move the premium anyway. Claims may well become more confrontational, as policyholders sold on the idea of personalization find the consequences unpalatable. What can claims people do to maintain trust in such circumstances?

See also: Most Controversial Claims Innovation  

Optimizing Claims Decisions

The trend toward claims settlements being optimized according to what a claimant may be prepared to accept in settlement fundamentally changes key concepts in insurance. What would be a fair claims settlement in such circumstances? And how would “fair” be determined, and by whom? Claims optimization pushes the claims specialist to the margins, although not out of the process altogether, for optimized settlements will raise questions. Someone may be hard up, but not stupid: They will want to know the basis upon which the settlement they’ve been offered has been calculated, and claims people will have to do the explaining.
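
As a concrete illustration of the tension, here is a toy sketch, with entirely hypothetical figures and a made-up “predicted acceptance” score, contrasting an indemnity-based settlement with an “optimized” offer pegged to what a claimant is predicted to accept:

```python
# Illustrative only: contrasts an indemnity-based settlement with an
# "optimized" offer. All names and numbers here are hypothetical.

def indemnity_settlement(assessed_loss: float) -> float:
    """Settle at the assessed value of the loss."""
    return assessed_loss

def optimized_settlement(assessed_loss: float, predicted_acceptance: float) -> float:
    """Offer the lower of the assessed loss and the amount a model
    predicts this particular claimant would accept."""
    return min(assessed_loss, predicted_acceptance)

assessed = 10_000.0
predicted = 8_200.0  # a model scores this claimant as likely to take less

fair = indemnity_settlement(assessed)
optimized = optimized_settlement(assessed, predicted)
shortfall = fair - optimized
print(f"Indemnity basis: {fair:.0f}, optimized offer: {optimized:.0f}, "
      f"shortfall: {shortfall:.0f}")
```

The shortfall is exactly the amount a claims person would have to explain when asked how the offer was calculated.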

Correlation and Causation

Insurers are using big data to make decisions about individual claims and claimants. Yet big data analysis relies on identifying significant correlated patterns of loss, while individual claims rely on identifying the causation of a loss. That difference is important, for correlation and causation are not the same. You can’t replace a “one to one” technique like causation with a “one to many” technique like correlation. It would be akin to saying that because your claim is like all those others (which were turned down), then we’re going to turn down your claim, too. Hardly a recipe for fairness. So as the tools of artificial intelligence are increasingly applied to claims processes, the extent to which the decisions being made remain fair will have to be closely monitored, both in terms of inputs and outcomes. How will this be done?

Reasonable Expectations

As data streams all around us (both policyholder and insurer), our ability to understand more about what is happening around us increases. This raises the question of the extent to which a claimant could reasonably have been expected to be aware of something. If big data knows something, should individual policyholders be expected to know it, too? How will insurers judge whether a claimant took sufficient notice of something that subsequently influenced the claim?

The Sensor Balance

As homes, offices and factories become covered in sensors, telling you all sorts of things about the property that you were only vaguely aware of before, the number of decisions you’ll be called upon to make will increase. There could be some maintenance required on your roof or drains, and unless it’s done soon, your insurance could be affected. Or perhaps some machinery has been running longer than usual to meet new orders, but the sensors are telling you to shut it down for servicing. That knowledge is being recorded and stored, along with the decisions you take in relation to it, all ready for your insurer to tap into should there be a claim. Insurers will now have the information to apply traditional policy clauses relating to maintenance with new vigor. How will this play out?

The 3-Second Repudiation

The 3-second claims settlement made news for Lemonade, but so will the 3-second claims repudiation. After all, giving people what they want as quickly as possible is a quite different experience to giving people what they don’t want as quickly as possible. How will such repudiations be managed, and how might claimants react to an almost instant dismissal of their claim?

A Smart Contract Just for You

Big data, smart contracts and personalized policies that ask no questions of the policyholder all point to a level of individualization that will baffle the typical claimant. A loss covered last time might not be covered next time. A neighbor’s loss may be covered in a quite different way to yours. How do you explain such situations to a claimant whose knowledge of insurtech is zero? If everything is so variable, might communication turn out to be the claims person’s key skill?

The Automation of Fairness

As claims processes become increasingly automated, insurers will have to take care not to lose sight of their obligations in terms of the fairness of the decisions being made. Some insurers struggle with this even in today’s relatively straightforward workflow processes, so how they will cope with something like artificial intelligence is a concern. Experience points to this being harder as systems become more complex. A lot will depend on the extent to which those in oversight roles bring challenge and critical thinking to the implementation of such projects.

The Right to Know

As claims processes become increasingly automated, should the claimant have the right to be told about this? There’s talk of news written by artificial intelligence ‘bots’ soon having to be flagged as ‘artificial news’. Might the same soon apply to individual decisions on things like claims? If so, then from a European perspective, a claimant’s ‘right to know’ might soon become a more complicated request to fulfill.

Upholding Supplier Standards

The consensus is that a typical claims function’s supply chain network will continue to grow for some time. Bringing in all of these exciting new capabilities is fine, so long as everyone is singing the same tune. Insurers have to abide by the ethics of insurance claims, such as those set out in rules on fairness, honesty and integrity. So how can a claims director convince her board of fellow directors that the firm’s ethical obligations are being met every bit as confidently as in more analogue times? Has her due diligence taken account of not just the intelligence and energy of those providers of artificial intelligence solutions, but their integrity as well? It’s a challenge best met early on.

Instantaneous Claims

The breed of policies described as ‘mobile, micro and moment’ is all about instant cover for just what you want, when you want it, for as long as you want it, arranged with a few clicks on your phone. Turn those conveniences around and you have the potential for the instantaneous claim, perhaps only moments after inception: “I bought cover for a bike, got on it, went outside and crashed it.” Such claims have usually been looked upon with suspicion by claims people, on the basis that so quick a loss could not be fortuitous. Yet if you provide cover in this way, why shouldn’t some claims happen in much the same way? This is a change of mindset needed throughout an organization, not just in underwriting.

Managing Complexity

As more cogs, and more complicated cogs, are added to the overall claims process, the greater the challenge of delivering on the promises made at the planning stage. This is an existing problem for claims people in the UK, who have acknowledged that the multiplying layers of many claims systems aren’t delivering the expected results. The answer will not come from artificial intelligence working it out for itself: AI has to be trained on historical data. So claims people need to understand complexity and how to manage it.

Challenging the Decision

Research by one leading insurer in the UK market found that policyholders are less likely to trust an automated decision than one involving a human. So as claims become more automated, insurers could face an increasing number of challenges from individual claimants asking how the decision on their claim was reached. How will they explain an output from an increasingly ‘black box’ process? They may be tempted to rely on generalized responses, but that isn’t going to work when the claimant appeals to an adjudication service like the UK’s Financial Ombudsman Service (FOS). Organisations like FOS should be working now on how they can get inside that automation and assess the fairness of the outcomes it has been designed to produce. Will they perhaps look to accredit the overall automation, or rely on case-by-case use of techniques like fairness data mining?

Another factor insurers need to take into account is claimants turning to the EU’s General Data Protection Regulation and enforcing their right to access the data upon which the decision on their claim was made. Insurers will need to prepare for this, both in terms of the volume of such requests and the complexity of responding to them. Again, the ability of claims people to communicate complex things will become a key skill.
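
One simple flavor of such a fairness check can be sketched in a few lines, using a hypothetical decision log and made-up claimant groups: compare approval rates across groups and flag large gaps for human review.

```python
# A minimal sketch of one "fairness data mining" style check an
# adjudicator could run over automated claim outcomes. The groups,
# decisions and 0.2 threshold are all hypothetical.
from collections import defaultdict

# (claimant_group, claim_approved) pairs from an imagined decision log.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals = defaultdict(int)
approved = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates: {rates}, gap: {gap:.2f}")

# A large gap doesn't prove unfairness by itself, but it flags where a
# human reviewer should look inside the "black box".
flag_for_review = gap > 0.2
```

A gap alone is only a signal: the reviewer still has to establish whether the disparity reflects legitimate differences in the claims or a bias designed into the automation.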

Provenance of Data

As insurers bring more and more data into their claims processes, especially unstructured data drawn from sources like social media, they will need to be prepared to demonstrate the provenance of that data. In other words, they need to be able to answer questions like “Where did you get that piece of data that seems to have been a big influence on my claims decision?” or “That piece of data is wrong, so you need to change your decision.” When you use data outside the context in which it was first disclosed, the error rate shoots up. Just because a piece of data resides within a system doesn’t establish it as a fact.

Significance in Algorithms

Pulling all sorts of data together is one thing, but the value claims people draw from all that data comes from the algorithms that weigh up its significance. Where the various thresholds of significance are set will be hugely important for the outcomes that claimants experience. These settings introduce options that require judgments, and such judgments need to take overt account of ethical values like fairness and respect.

See also: How AI Will Transform Insurance Claims  

Segmentation of Claimants

As claims processes become more automated, so claims people are presented with the opportunity to segment the experience of the various claimant types they engage with. Many insurers currently use software to assess claimants at the ‘first notification of loss’ stage and vary the type of experience they receive. At the moment, this is being used to address claims fraud, but it is unlikely to end there. Artificial intelligence coupled with audio and text analysis will allow insurers to segment non-fraud claimants for all sorts of purposes. The challenge for claims people is just how acceptable some of those purposes might be. For example, what if claimants are segmented according to the amount they are prepared to accept as a claims settlement? All of these new technology platforms introduce options, but just because you have the option to do something doesn’t mean that it’s a good thing.

Warnings Ahead

New ways of communicating with policyholders offer up the possibility of advance warnings being given of storms, floods and the like. That brings many benefits to both insurer and policyholder, but it also raises the prospect of those warnings having conditions attached. Rather than advice, they could include requirements linked to continuation of certain elements of cover. If the policyholder doesn’t (for whatever reason) respond to those communications, this then introduces possible conflict zones for subsequent claims.

The Convenience of Clicking

The ease with which cover can be incepted using mobile devices is a great convenience to policyholders at the outset of a policy, but it could turn into a great inconvenience when making a claim. Research shows that we invariably do not read the terms and conditions presented to us when buying a mobile-based product or service: it’s just too easy to click “accept,” especially when the fine print looks even finer on a small screen. So claims people need to be prepared for many more people than at present not knowing about the cover they’ve signed up to, beyond what is indicated by a few well-designed icons on a screen.

The Language of Claims

A subtle change of language has emerged in claims circles in recent years. The service element of what’s on offer is being stressed more than the insurance element. While it’s great to see insurers now paying attention to risk management in their personal lines portfolios, this shouldn’t be at the cost of what is at the heart of an insurance product, which is risk transfer. The danger is that this slow and subtle change will not be picked up by customers until they find out when trying to claim that what they’ve bought is largely a service and not insurance.

To conclude: it’s a great time to be in insurance, and I would say even more so in respect of claims, for that is where all the promises inherent in the insurance purchase are fulfilled. Those who recognize the ethics of insurance claims and rise to the challenges outlined above will be those who are trusted in the digital market.

Which Rules Should Insurtech Break?

There’s a lot of attention being given at the moment to the startup firms that are entering the insurance market in the hope of grabbing attention and business by disrupting the established ways of doing things. And some of these insurtech startups are indeed introducing new and exciting ideas to the market. Disruptive thinking has its upside, and customers will benefit from it. Does it have a downside as well, though?

There’s a view that, to be successful, disruptors need to “delight in breaking rules, but not rules that matter.” This view can lend startups a certain piratical air, yet it can also cause them to see the rules that get in their way as the rules that don’t matter. That’s why we’ve seen some high profile insurtech startups crashing into regulatory brick walls: Zenefits is a classic example of this.

Now, I’m not saying that startups shouldn’t hit problems, even regulatory ones. But I am saying that they should at least get the basics right, even if the basics are themselves disruptive to the work of disruptors. The U.K.’s Information Commissioner made this clear to the insurance industry in 2015 when he pointed out that “big data is not a game played by different rules.”

See also: An Eruption in Disruptive InsurTech?  

I’m also not asking for insurtech startups to occupy the high moral ground, but I am saying that they cannot reinvent “doing business” in ways that sidestep the ethical values that consumers expect firms to uphold. Nailing business values like “innovative” and “disruptive” to your piratical mast won’t stop inconvenient winds like “honesty” and “fairness” from pushing your exciting voyage toward the hard rocks of reality.

It is with terms such as honesty and fairness that customers often describe what a “good financial services firm” feels like. Yet insurtech start-ups are often being urged to disrupt customer expectations, seeing them as a quaint left-over from an old way of doing things. The future is instead said to lie in insurance providers getting closer to their customers in all sorts of ways. Yet isn’t business success more reliant on customers wanting to get closer to firms? It’s the latter that leads to the former, not the other way around.

The danger is that disruptors’ natural and essential super-confidence in themselves is translated into overconfidence in the ethical correctness of their decisions and judgments. And there’s then the tendency for them to believe that other people think the same way as they do. Both are fairly normal traits that we all exhibit in some form or other in our everyday lives. I certainly do, and my daughters have pulled me up short with one or two of the decisions I’ve made.

See also: The State of Ethics in Insurance  

And that sort of challenge, that sort of “knowing you but through different eyes,” is vital for insurtech startups. While insurance needs disruptive startups, they in turn need disruptors of groupthink and of the wrong sorts of overconfidence. The folklore of startups is filling with tales of disruptors being told they’re not confident enough in their business plans. So let’s put out a marker of hope for 2017: that it will see tales of disruptors being told they’re not ethical enough in their business plans, that they’re not doing enough to earn the trust of consumers. It’s very possible, if the market and those advising them want it.