August 16, 2019
A Scary Future for Life Insurance?
Social media, currently checked to find falsehoods in applications, could be used in ways that customers might consider far more invasive.
Web users, especially business owners, already have plenty of good reasons to be careful about what they put online. Shifts in public perception, the growing threat of data leaks and continual attempts at identity theft might be reason enough. However, new state rules for New York’s insurance companies highlight another worrying trend: what you post could affect your premiums.
It’s already legal for insurance companies, including life insurance and business protection insurance providers, to use public data to decide what you pay. From credit scores and court records to your Twitter feed, they can use nearly anything they want to set insurance prices.
Now, however, New York is taking a bold step, becoming the first state to codify the practice. Discrimination by race, sexual orientation, faith and other protected classes remains illegal, but many worry that other states will follow New York in sanctioning the use of personal data to inform insurance decisions.
See also: New Efficiencies in Life Insurance
Your data is just another way for insurance companies to measure your risk and make more efficient decisions. Regulations are designed to balance the needs of companies and their customers, but many are concerned that the rules simply give providers license to be more invasive when setting premium rates. Your rates aren’t decided only by the information you fill out; examinations reach further and deeper into our data than ever.
Automation is making it easier for the industry to collect and collate data from many sources, but a human is always involved in the judgment, and many are concerned that business protection and life insurance providers are given access to too much.
Social media use in setting insurance premiums isn’t commonplace yet. Only one of 160 insurers in New York uses it, but “big data” is spreading across industries, demonstrating the power of drawing on diverse sources. At the moment, social media is checked to find falsehoods in applications, but there’s no reason it can’t be used in ways that customers might consider far more invasive. And while discrimination is prohibited, some fear there’s nothing to stop providers from doing deeper dives. In many cases, the deeper you look into anyone, the more likely you are to uncover something that could be used to raise their premiums.
Algorithms may seem impartial, but they are designed by humans with all of their biases. One textbook example is COMPAS, a tool used in the U.S. criminal justice system to predict defendants’ risk of reoffending. The tool vastly overestimated rates of recidivism for black defendants while underestimating the same risk for white defendants.
This trend of using social media data might not be widespread just yet, but there are justified fears that social media surveillance and investigation will become more common as reliance on the technology spreads. As such, it may be even harder for customers to see what affects their premiums, as much of it could be determined by big data gathering information from dozens of sources and obscure algorithms used to highlight risk factors.
The risk of surveillance, even if it never materializes, affects how we use the internet. A trend toward “deleting Facebook” arose shortly after the platform’s sizable data breach last year. Concerns over data-sharing by sites and businesses of all kinds have sent use of virtual private networks (VPNs) skyrocketing. This might seem prudent at first, but if our social media use is so closely monitored, we’re less likely to use those platforms to talk and associate freely.
The issue isn’t just the data we share, but also the data we consume. If a business protection insurance provider looks at who you follow on Instagram, what’s to stop it from deciding premiums based on whether you follow high-risk individuals, even if you are not a high-risk individual yourself? The same goes for health and life insurance companies, which could raise premiums for someone deemed higher risk simply for belonging to suicide prevention groups on Facebook.
Businesses are already under great scrutiny for their social media, mostly from customers, which is justifiable. However, when it comes to business protection insurance and key man insurance, the premiums for protecting the people and assets most important to your business’s growth could rise for reasons more obscure than most will be able to work out. We don’t know how far back into your posting history insurance providers can go in their search for data, so it’s best to create a strong social media policy as soon as possible.
The law is always slow to catch up with technology. While many fear that the wheels may not turn in time for smart, context-driven regulation, other solutions are being explored. Some want broad restrictions on insurance providers’ ability to use public information, while others are fighting for greater transparency. Some consider it of the utmost importance that insurance companies be clear about what data drives their premium setting, as well as when new algorithms and data sources are used to adjust premiums.
See also: How to Resuscitate Life Insurance
However, insurance companies have a vested interest in protecting their algorithms and how, exactly, they set their premiums. Protecting trade secrets and other intellectual property is part of what keeps them competitive. Furthermore, if the widespread ignoring of terms and conditions on the internet shows anything, it’s that notices of new algorithms may not register with most customers. Most people simply don’t understand the technology that could be used against them.
More detailed regulations, such as a requirement for algorithmic impact assessments, are seen as another potential solution. Requiring providers to disclose what data they use, why they use it, what they test and whether they have checked their systems for bias could stop discrimination in its tracks. The insurance industry and its customers rely on the ability to use available data to set premiums based on risk level. However, the threat of discrimination is driving concerns.