Insurance is changing in ways that have profound implications for claims. Some claims practices will become redundant. Questions only occasionally raised before will now become common. New skills will have to be learned.
It’s all very exciting, but also a little daunting. Clearly, the way we think about claims will change, but, at the same time, certain constants will remain: settling claims honestly and fairly, for example.
So what are the changes that have implications for the ethics of insurance claims? I want to look at 20 changes that I think will be significant in terms of the ethical challenges facing claims people.
The “Ask It Never” Policy
As insurers turn from asking questions of the policyholder about the risk to be insured and instead obtain that information through big data, the time of “no questions at all” will approach. What will happen to claims then? If no questions are asked, then non-disclosure becomes obsolete, as does the whole idea of material facts. What will be left for the claims team to review or decide upon?
The Personalized Policy
A personalized policy will, by its very nature, mean that a claim made upon it will result in an increase in premium. As the public comes to increasingly sense this, how will it influence the way in which claimants approach their claim? Should claims people warn potential claimants that their claim will result in an increased premium? Some claimants will self-fund small, valid claims, although those spending patterns will then be picked up by insurers, which could move the premium anyway. Claims may well become more confrontational, as policyholders sold on the idea of personalization find the consequences unpalatable. What can claims people do to maintain trust in such circumstances?
See also: Most Controversial Claims Innovation
Optimizing Claims Decisions
The trend toward claims settlements being optimized according to what a claimant may be prepared to accept in settlement fundamentally changes key concepts in insurance. What would be a fair claims settlement in such circumstances? And how would “fair” be determined, and by whom? Claims optimization pushes the claims specialist to the margins, although not out of the process altogether, for optimized settlements will raise questions. Someone may be hard up, but not stupid: They will want to know the basis upon which the settlement they’ve been offered has been calculated, and claims people will have to do the explaining.
Correlation and Causation
Insurers are using big data to make decisions about individual claims and claimants. Yet big data analysis relies on identifying significant correlated patterns of loss, while individual claims rely on identifying the causation of a loss. That difference is important, for correlation and causation are not the same. You can’t replace a “one to one” technique like causation with a “one to many” technique like correlation. It would be akin to saying that because your claim is like all those others (which were turned down), then we’re going to turn down your claim, too. Hardly a recipe for fairness. So as the tools of artificial intelligence are increasingly applied to claims processes, the extent to which the decisions being made remain fair will have to be closely monitored, both in terms of inputs and outcomes. How will this be done?
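To make the "one to many" problem concrete, here is a minimal Python sketch. The cluster labels, historical records and threshold are entirely invented for illustration; the point is only to show how a rule built on group-level correlation denies an individual claim regardless of its actual cause.

```python
# Hypothetical illustration: a correlation-based rule applied to an individual claim.
# All data and thresholds here are invented for the sketch.

historical_claims = [
    # (claimant_cluster, was_fraudulent) -- past claims the rule was built on
    ("X", True), ("X", True), ("X", True), ("X", False),
    ("Y", False), ("Y", False), ("Y", True), ("Y", False),
]

def fraud_rate(cluster):
    """Fraction of past claims in this cluster that were fraudulent."""
    outcomes = [fraud for c, fraud in historical_claims if c == cluster]
    return sum(outcomes) / len(outcomes)

def correlation_rule(cluster, threshold=0.5):
    """Deny any claim from a cluster whose historical fraud rate exceeds the threshold."""
    return "deny" if fraud_rate(cluster) > threshold else "approve"

# A genuine, honest claim from cluster "X" is denied purely because of what
# *other* claimants in that cluster did -- correlation standing in for causation.
print(correlation_rule("X"))  # deny
print(correlation_rule("Y"))  # approve
```

Nothing in `correlation_rule` ever examines the cause of the individual loss, which is exactly the fairness gap the paragraph above describes.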
The Expectation of Awareness

As data streams all around us (for policyholder and insurer alike), our ability to understand what is happening around us increases. This raises the question of the extent to which a claimant could reasonably have been expected to be aware of something. If big data knows something, should individual policyholders be expected to know it too? How will insurers judge whether a claimant took sufficient notice of something that subsequently influenced the claim?
The Sensor Balance
As homes, offices and factories become covered in sensors, telling you all sorts of things about the property that you were only vaguely aware of before, the number of decisions you'll be called upon to make will increase. There could be some maintenance required to your roof or drains, and unless it's done soon, your insurance could be affected. Or perhaps some machinery has been running for longer than usual to meet new orders, but the sensors are telling you to shut it down for servicing. That knowledge is being recorded and stored, along with the decisions you take in relation to it, all ready for your insurer to tap into should there be a claim. Insurers will now have the information to apply those traditional policy clauses relating to maintenance with a new vigor. How will this play out?
The 3-Second Repudiation

The 3-second claims settlement made news for Lemonade, but so will the 3-second claims repudiation. After all, giving people what they want as quickly as possible is a quite different experience from giving people what they don't want as quickly as possible. How will such repudiations be managed, and how might claimants react to an almost instant dismissal of their claim?
A Smart Contract Just for You
Big data, smart contracts and personalized policies that ask no questions of the policyholder all point to a level of individualization that will baffle the typical claimant. A loss covered last time might not be covered next time. A neighbor's loss may be covered in a quite different way to yours. How do you explain such situations to a claimant whose knowledge of 'insurtech' is zero? If everything is so variable, then might communication turn out to be the claims person's key skill?
The Automation of Fairness
As claims processes become increasingly automated, insurers will have to take care not to lose sight of their obligations in terms of the fairness of the decisions being made. Some insurers struggle with this even in today’s relatively straightforward workflow processes, so how they will cope with something like artificial intelligence is a concern. Experience points to this being harder as systems become more complex. A lot will depend on the extent to which those in oversight roles bring challenge and critical thinking to the implementation of such projects.
The Right to Know

As claims processes become increasingly automated, should the claimant have the right to be told about this? There's talk of news written by artificial intelligence 'bots' soon having to be flagged as 'artificial news'. Might the same soon apply to individual decisions on things like claims? If so, then from a European perspective, a claimant's 'right to know' might soon become a more complicated request to fulfill.
Upholding Supplier Standards
The consensus is that a typical claims function's supply chain network will continue to grow for some time. Bringing in all of these exciting new capabilities is fine, so long as everyone is singing the same tune. Insurers have to abide by the ethics of insurance claims, such as those set out in rules on fairness, honesty and integrity. So how can a claims director convince her board of fellow directors that the firm's ethical obligations are being met every bit as confidently as in more analogue times? Has her due diligence taken account of not just the intelligence and energy of those providers of artificial intelligence solutions, but their integrity as well? It's a challenge best met early on.
The Instantaneous Claim

That breed of policies described as 'mobile, micro and moment' is all about instant cover for just what you want, when you want it, for as long as you want it, arranged with a few clicks on your phone. Turn those conveniences around and you have the potential for the instantaneous claim, perhaps only moments after inception: "I bought cover for a bike, went outside, got on it and crashed it." Such claims have usually been looked upon with suspicion by claims people, on the basis that such a quick loss could not be fortuitous. Yet if you provide cover in this way, why shouldn't some claims happen in much the same way? This is a change of mindset needed throughout an organization, not just in underwriting.
Managing Complexity

As more cogs, and more complicated cogs, are added to the overall claims process, the harder it becomes for that process to deliver on the promises made at the planning stage. This is an existing problem for claims people in the UK, who have acknowledged that the multiplying layers of many claims systems aren't delivering the expected results. The answer will not come from artificial intelligence working it out for itself: after all, AI needs to be trained on historical data. So claims people need to understand complexity and how to manage it.
Challenging the Decision
Research by one leading insurer in the UK market found that policyholders are less likely to trust an automated decision than one involving a human. So as claims become more automated, insurers could face an increasing number of challenges from individual claimants, asking how the decision on their claim was reached. How will they explain an output from an increasingly 'black box' process? They may be tempted to rely on generalized responses, but that isn't going to work when the claimant appeals to an adjudication service like the UK's Financial Ombudsman Service (FOS). Organizations like FOS should be working now on how they can get inside that automation and assess the fairness of the outcomes it has been designed to produce. Will they perhaps look to accredit the overall automation, or rely on case-by-case use of techniques like fairness data mining? Another factor insurers need to take into account will be claimants turning to the EU's General Data Protection Regulation and enforcing their right to access the data upon which the decision on their claim was made. Insurers will need to prepare for this, both in terms of the volume of such requests and the complexity of responding to them. Again, the ability of claims people to communicate complex things will become a key skill.
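One way an adjudicator might "get inside" the automation is an outcome audit: comparing decision rates across claimant segments rather than inspecting the model itself. The sketch below is a hypothetical Python illustration with invented segments and figures; a large gap between segments is a flag for investigation, not proof of unfairness on its own.

```python
# Hypothetical outcome audit over automated claims decisions.
# Segments and decisions are invented for illustration.

decisions = [
    ("segment_a", "approve"), ("segment_a", "approve"),
    ("segment_a", "approve"), ("segment_a", "deny"),
    ("segment_b", "deny"), ("segment_b", "deny"),
    ("segment_b", "deny"), ("segment_b", "approve"),
]

def approval_rate(segment):
    """Share of this segment's claims that the automated process approved."""
    outcomes = [d for s, d in decisions if s == segment]
    return sum(1 for d in outcomes if d == "approve") / len(outcomes)

def disparity(seg1, seg2):
    """Gap in approval rates between two segments."""
    return abs(approval_rate(seg1) - approval_rate(seg2))

print(round(disparity("segment_a", "segment_b"), 2))  # 0.5
```

A check like this looks only at outcomes, so it works even when the decision process itself is a black box, which is what makes it attractive to an external adjudicator.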
Provenance of Data
As insurers bring more and more data into their claims processes, especially unstructured data drawn from sources like social media, they will need to be prepared to demonstrate the provenance of that data. In other words, they need to be able to answer questions like "where did you get that piece of data that seems to have been a big influence on my claims decision?" Or "that piece of data is wrong, so you need to change your decision." If you use data outside the context in which it was first disclosed, the error rate shoots up. Just because a piece of data resides within a system doesn't establish it as a fact.
Significance in Algorithms
Pulling all sorts of data together is one thing, but the value claims people draw from all that data comes from the algorithms that weigh up its significance. Where the various thresholds of significance are set will be hugely important for the outcomes that claimants experience. These settings introduce options that require judgments, and such judgments need to take overt account of ethical values like fairness and respect.
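How much hangs on those threshold judgments can be shown in a few lines. In this hypothetical Python sketch, the model score for a claim is fixed and only the referral threshold changes; the score and both thresholds are invented for the example.

```python
# Hypothetical: the same claim, the same model score, two different thresholds.
# The score and threshold values are invented for illustration.

def decide(risk_score, referral_threshold):
    """Route a claim to human review when its model score crosses the threshold."""
    return "refer to handler" if risk_score >= referral_threshold else "auto-settle"

claim_score = 0.62  # model output for one claim

# The claimant's experience depends entirely on where the line was drawn.
print(decide(claim_score, referral_threshold=0.7))  # auto-settle
print(decide(claim_score, referral_threshold=0.6))  # refer to handler
```

The data and the model are identical in both calls; only the judgment about significance differs, which is why such settings need overt ethical scrutiny rather than being treated as a purely technical tuning exercise.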
See also: How AI Will Transform Insurance Claims
Segmentation of Claimants
As claims processes become more automated, so claims people are presented with the opportunity to segment the experience of the various claimant types they engage with. Many insurers currently use software to assess claimants at the ‘first notification of loss’ stage and vary the type of experience they receive. At the moment, this is being used to address claims fraud, but it is unlikely to end there. Artificial intelligence coupled with audio and text analysis will allow insurers to segment non-fraud claimants for all sorts of purposes. The challenge for claims people is just how acceptable some of those purposes might be. For example, what if claimants are segmented according to the amount they are prepared to accept as a claims settlement? All of these new technology platforms introduce options, but just because you have the option to do something doesn’t mean that it’s a good thing.
Warnings With Conditions

New ways of communicating with policyholders offer up the possibility of advance warnings being given of storms, floods and the like. That brings many benefits to both insurer and policyholder, but it also raises the prospect of those warnings having conditions attached. Rather than advice, they could include requirements linked to the continuation of certain elements of cover. If the policyholder doesn't (for whatever reason) respond to those communications, possible conflict zones open up for subsequent claims.
The Convenience of Clicking
The ease with which cover can be incepted using mobile devices is a great convenience to policyholders at the outset of a policy, but it could turn into a great inconvenience when making a claim. Research shows that we invariably do not read the terms and conditions presented to us when buying a mobile-based product or service: it's just too easy to click 'accept', especially when the fine print looks even finer on a small screen. So claims people need to be prepared for many more people than at present not knowing about the cover they've signed up to, beyond what is indicated by a few well-designed icons on a screen.
The Language of Claims
A subtle change of language has emerged in claims circles in recent years. The service element of what’s on offer is being stressed more than the insurance element. While it’s great to see insurers now paying attention to risk management in their personal lines portfolios, this shouldn’t be at the cost of what is at the heart of an insurance product, which is risk transfer. The danger is that this slow and subtle change will not be picked up by customers until they find out when trying to claim that what they’ve bought is largely a service and not insurance.
To conclude: it's a great time to be in insurance, and even more so in claims, for that is where all the promises inherent in the insurance purchase are fulfilled. Those who recognize the ethics of insurance claims and rise to the challenges outlined above will be the ones trusted in the digital market.