
Challenges Remain on Use of Data, Analytics

As insurance companies look to optimize performance, mitigate risk and meet rising consumer expectations, they still face a plethora of challenges when it comes to data and analytics. Companies continue to aggregate more and more data – but the way they do so is not necessarily efficient. Some 40% to 50% of analysts spend their time wrangling data rather than finding meaningful insights.

To address these operational inefficiencies, TransUnion commissioned Aite Group to conduct a study of insurance and financial services professionals. The findings from this study outline how companies can stay competitive in the insurance industry while adapting to the evolving world of data and analytics.

Like most established financial institutions, insurance companies have multiple data repositories across the organization. Individual business units own their respective processes for capturing and managing data and, more often than not, manage it at the product level rather than at the customer level. This often leads to inconsistencies, with no shared definition of key terms such as “customer.” As a result, information and insights are isolated in silos – by line of business or by product – creating barriers to seamless data integration.
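To make the silo problem concrete, here is a minimal sketch (with entirely hypothetical tables, field names and a made-up email-based customer key) of what consolidating product-level records into a single customer-level view might look like. The point is the shared definition of “customer,” not the specific tooling; pandas is assumed here.

```python
# A minimal illustration on hypothetical data: product-level records held by
# separate business units are normalized onto one shared customer key so that
# insights can be drawn at the customer level rather than the product level.
import pandas as pd

# Each business unit keeps its own product-level repository with its own naming.
auto_policies = pd.DataFrame({
    "cust_email": ["Jane.Doe@example.com", "sam.lee@example.com"],
    "auto_premium": [1200, 950],
})
home_policies = pd.DataFrame({
    "customer_email": ["jane.doe@example.com ", "pat.kim@example.com"],
    "home_premium": [800, 1100],
})

def customer_key(email: pd.Series) -> pd.Series:
    """Apply one shared definition of 'customer' across silos."""
    return email.str.strip().str.lower()

auto_policies["customer_id"] = customer_key(auto_policies["cust_email"])
home_policies["customer_id"] = customer_key(home_policies["customer_email"])

# Outer-join the silos into a single customer-level view.
customer_view = auto_policies[["customer_id", "auto_premium"]].merge(
    home_policies[["customer_id", "home_premium"]],
    on="customer_id",
    how="outer",
)
print(customer_view)
```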

To maintain a competitive edge, insurance companies recognize the need for new data sources. More than half of the study’s respondents plan to increase spending on most types of data sources, especially newer ones, such as mobile. However, as big data gets even bigger, it becomes increasingly difficult for analytics executives to find valuable insights. Addressing the challenges that arise from big data volumes requires an enterprise data management strategy as well as an investment in the proper analytics tools and platforms for processing and analyzing the data for meaningful insights.  

The majority of these institutions are grappling with fractured data and legacy systems, which prevent them from extracting value and making the data actionable. Some 70% of those surveyed indicated that a single analytics platform, one that coordinates and connects internal and third-party systems, is a major differentiator. However, only about two in 10 respondents indicated that their current solutions have these capabilities.

This highlights the need for a coherent enterprise data and analytics strategy and a common platform to hold and integrate existing and new data sources, as well as analytical tools. The platform needs to be flexible to support different skill sets, react to changing market conditions and have the ability to integrate alternative sources of data.

See also: Why to Refocus on Data and Analytics  

In addition to leveraging the right tools, sourcing the right talent remains a key challenge for executives. Nearly half (45%) of insurance professionals indicate that having the right talent greatly improves their ability to underwrite profitable policies. However, due to a lack of bandwidth, insurance companies often do not have the resources to let their analytics teams stretch their creativity.

These operational challenges can result in a significant amount of time being dedicated to cleansing and prepping data – preventing analytical teams from performing more valuable activities such as model development. They also create an obstacle to retaining talent, as these sought-after data scientists are instead assigned to trivial work. Some 42% of the insurance professionals surveyed indicated that it is also challenging to find qualified data scientists in the first place.

As the use of descriptive, predictive and prescriptive analytics gains traction, it is imperative that executives recognize the challenges and explore solutions. By overcoming these barriers, the industry will be better prepared to embark on the next frontier of data and analytics.

For more information about the TransUnion/Aite Group study, please visit the “Drowning in Data: Thirsty for Insights” landing page.

3 Big Challenges on the Way to Nirvana

We hear almost daily how insurtech is disrupting the once-staid insurance industry. The main ingredients are big data, artificial intelligence, social media, chatbots, the Internet of Things and wearables. The industry is responding to changing markets, technology, legislation and new insurance regulation.

I believe insurtech is more collaborative than disruptive. There are many ways insurance technology can streamline and improve current processes through digital transformation. Cognitive computing, a technology designed to mimic human intelligence, will have an immense impact. The 2016 IBM Institute for Business Value survey found that 90% of outperforming insurers believe cognitive technologies will have a big effect on their revenue models.

The ability of cognitive technologies, including artificial intelligence, to handle structured and unstructured data in meaningful ways will create entirely new business processes and operations. Already, chatbots like Alegeus’s “Emma,” a virtual assistant that can answer questions about FSAs, HSAs and HRAs, and USAA’s “Nina” are at work helping policyholders. These technologies aim to promote, not hamper, progress, but strategies for assimilating these new “employees” into operations will be essential to their success.

Managing the flood of data is another major challenge. Using all sorts of data in new, creative ways underlies insurtech. Big data is enormous and growing every day. Wearables, for instance, are providing health insurers with valuable data. Insurers will need to adopt best practices to use data for quoting individual and group policies, setting premiums, reducing fraud and targeting key markets.

See also: Has a New Insurtech Theme Emerged?  

Innovative ways to use data are already transforming the way carriers do business. One example is how blocks of group insurance business are rated. Normally, census data for each employee group must be imported by the insurer to rate and quote, but that’s changing. Now, groups of clients can be blocked together based on shared business factors and then rated and quoted on the block’s combined experience, for more accurate and flexible rating – as sketched below.
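Here is a rough sketch of that blocking idea on made-up numbers; the business factors, the 65% target loss ratio and the adjustment rule are hypothetical illustrations (pandas assumed), not any carrier’s actual rating method.

```python
# A simplified sketch on hypothetical data: small groups that share business
# factors are blocked together, and each block's combined claims experience
# drives one rate adjustment applied to every group in the block.
import pandas as pd

groups = pd.DataFrame({
    "group_id":        ["G1", "G2", "G3", "G4"],
    "industry":        ["retail", "retail", "tech", "tech"],
    "region":          ["midwest", "midwest", "west", "west"],
    "earned_premium":  [200_000, 150_000, 300_000, 250_000],
    "incurred_claims": [130_000, 120_000, 140_000, 150_000],
})

# Block groups by shared business factors rather than rating each group alone.
blocks = groups.groupby(["industry", "region"], as_index=False).agg(
    premium=("earned_premium", "sum"),
    claims=("incurred_claims", "sum"),
)

# Rate each block on its combined loss ratio against a target (hypothetical 65%).
TARGET_LOSS_RATIO = 0.65
blocks["loss_ratio"] = blocks["claims"] / blocks["premium"]
blocks["rate_adjustment"] = blocks["loss_ratio"] / TARGET_LOSS_RATIO

# Every group inherits its block's adjustment, for more consistent rating.
rated = groups.merge(blocks[["industry", "region", "rate_adjustment"]],
                     on=["industry", "region"])
print(rated[["group_id", "rate_adjustment"]])
```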

Cognitive computing can also make big data manageable. Ensuring IT goals link back to business strategy will help keep projects focused. But simply getting started is probably the most important thing.

With cognitive computing, systems require time to build their capacity to handle scenarios and situations. In essence, systems will have to evolve through learning to a level of intelligence that will support more complex business functions.

Establishing effective data exchange standards also remains a big challenge. Data exchange standards should encompass data aggregation, format, translation and frequency of delivery.

Without standards, chaos can develop, and costs can ratchet up. Although there has been traction in the property and casualty industry with ACORD standards, data exchange standards for group insurance have not become universal.
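As an illustration only, the sketch below shows one way the elements of such a standard (aggregation level, file format, field translation and delivery frequency) could be captured in code. The field names and values are hypothetical and are not drawn from ACORD or any other published standard.

```python
# A hypothetical, minimal sketch of a data exchange agreement between a carrier
# and a partner: aggregation level, file format, field translation and delivery
# frequency. Illustrative only; not based on any published standard.
from dataclasses import dataclass, field

@dataclass
class ExchangeSpec:
    aggregation: str                     # e.g. "per-certificate" or "per-group"
    file_format: str                     # e.g. "csv", "xml", "json"
    delivery_frequency: str              # e.g. "daily", "weekly", "monthly"
    field_map: dict = field(default_factory=dict)  # partner field -> carrier field

    def translate(self, record: dict) -> dict:
        """Rename partner fields to the carrier's canonical names."""
        return {self.field_map.get(k, k): v for k, v in record.items()}

spec = ExchangeSpec(
    aggregation="per-group",
    file_format="csv",
    delivery_frequency="weekly",
    field_map={"grp_no": "group_id", "prem_amt": "earned_premium"},
)
print(spec.translate({"grp_no": "G1", "prem_amt": 200_000}))
```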

See also: Insurtech’s Approach to the Gig Economy  

The future is bright for insurers that place value on innovating with digital technologies and define best practices around their use. It’s no longer a matter of when insurance carriers will begin to use cognitive computing, big data and data standards, but how.

A Scary Future for Life Insurance?

Web users, especially business owners, already have plenty of good reasons to be careful about what they put online. Shifts in public perception, the increasing threat of data leaks and continual attempts to steal your identity might seem reason enough. However, new state rules for New York’s insurance companies highlight another worrying trend: What you post could affect your premiums.

It’s already legal for insurance companies, including life insurance and business protection insurance providers, to use public data to decide what you pay. From credit scores to court records and now including your Twitter feed, they can effectively use nearly anything they want to set insurance prices.

Now, however, New York has taken a bold step as the first state to codify the practice. Discrimination based on race, sexual orientation, faith and other protected characteristics is still illegal, but many worry that other states will follow New York in sanctioning the use of personal data to inform insurance decisions.

See also: New Efficiencies in Life Insurance  

Your data is just another way for insurance companies to measure your risk and make more efficient decisions. Regulations are designed to balance the needs of companies and their customers, but many are concerned that the rules simply give providers license to be more invasive when setting premium rates. Your rates aren’t decided only by the information you fill out; examinations are reaching further and deeper into your data than ever.

The automation of the industry is making it easier to collect and collate data from many sources, but there’s always a human involved in the judgment, and many are concerned that business protection and life insurance providers expose too much.

Social media use in setting insurance premiums isn’t commonplace, yet. Only one of 160 insurers in New York uses it, but “big data” is spreading across industries, showing the power of using data from diverse sources. At the moment, social media is used to detect falsehoods in applications, but there’s no reason it can’t be used in ways that customers might consider more invasive. And while discrimination is prohibited, some fear there’s nothing to stop providers from doing deeper dives. In many cases, the deeper you look into anyone, the more likely you are to uncover something that could be used to raise their premiums.

Algorithms may seem impartial, but they are designed by humans, with all of their biases. One textbook example is COMPAS, a tool used in the U.S. criminal justice system to predict defendants’ risk of reoffending. Analyses found that the tool vastly overestimated rates of recidivism for black defendants while underestimating the same risk for white defendants.
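To make that kind of disparity concrete, here is a minimal illustration on entirely made-up data of one check such analyses perform: comparing false positive rates (people flagged as high-risk who did not go on to reoffend) across two groups. Pandas is assumed.

```python
# An illustrative bias check on made-up data: compare false positive rates
# (flagged high-risk but did not reoffend) across two groups. A large gap like
# the one below is the kind of disparity reported in the COMPAS analyses.
import pandas as pd

scores = pd.DataFrame({
    "group":      ["A"] * 4 + ["B"] * 4,
    "high_risk":  [1, 1, 0, 1, 0, 1, 0, 0],  # model's prediction
    "reoffended": [0, 1, 0, 0, 0, 1, 0, 0],  # observed outcome
})

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of non-reoffenders the model still flagged as high-risk."""
    negatives = df[df["reoffended"] == 0]
    return float((negatives["high_risk"] == 1).mean())

for name, grp in scores.groupby("group"):
    print(name, round(false_positive_rate(grp), 2))
```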

This trend of using social media data might not be widespread just yet, but there are justified fears that social media surveillance and investigation will become more common as reliance on the technology spreads. As such, it may be even harder for customers to see what affects their premiums, as much of it could be determined by big data gathering information from dozens of sources and obscure algorithms used to highlight risk factors.

This risk of surveillance, even if it has no application in reality, affects how we use the internet. A trend toward “deleting Facebook” arose shortly after its sizable data breach last year. Data-sharing from sites and businesses of all kinds has seen use of virtual private networks (VPNs) skyrocketing. This might seem prudent, at first, but if our social media use is being so closely monitored, then we’re less likely to use those platforms to talk and associate freely.

The issue isn’t just the data we share, but also the data we consume. If a business protection insurance provider looks at who you follow on Instagram, what’s to stop it from deciding premiums based on whether you follow high-risk individuals, even if you are not a high-risk individual yourself? The same goes for health and life insurance companies, which could raise premiums because someone is seen as a higher risk simply for being part of suicide prevention groups on Facebook.

Businesses are already under great scrutiny for their social media, mostly from customers, which is justifiable. However, when it comes to business protection insurance and key man insurance, the premiums for protecting the people and assets most important to your business’s growth could rise for reasons more obscure than most will be able to work out. We don’t know how far into your posting history insurance providers can go in their search for data, so it’s best to create a strong social media policy as soon as possible.

The law is always slow to catch up with technology. While many fear that the wheels may not turn in time for smart, context-driven regulation, other solutions are being explored. Some want broad restrictions on the ability of insurance providers to use public information, while others are fighting for greater transparency. Some consider it of utmost importance that insurance companies be clear about what data drives their premium setting, as well as when new algorithms and data sources are used to adjust it.

See also: How to Resuscitate Life Insurance 

However, insurance companies have a vested interest in protecting their algorithms and how, exactly, they arrive at their premiums. Protection of trade secrets and other intellectual property is part of what keeps them competitive. Furthermore, if the widespread ignoring of terms and conditions on the internet shows anything, it’s that notices of new algorithms may not register with the majority of customers. Most people simply don’t understand the technology that could be used against them.

More detailed regulations, such as a requirement for algorithmic impact assessments, are seen as another potential solution. Requiring providers to answer questions about what data they use, why they use it, what they test and whether they have tested their systems for bias could halt discrimination in its tracks. The insurance industry and its customers rely on the ability to use available data to set premiums based on risk level. However, the threat of discrimination is driving concerns.

Setting Goals for Analytics Leaders

For the last couple of years, I’ve shared a post recommending a system for setting goals and achieving them. However, a few conversations with insight leaders have reminded me that such advice remains generic. What about which goals to set?

As this blog aims to support customer insight leaders, I want to also offer more specific advice.

Given the context of common challenges and potential future trends, which goals would I advise? Well, far be it from me to second-guess your priorities and specific context, but I hope these thoughts help. They are simply intended to act as a checklist, to prompt your own thinking.

Topics for your specific goals

Business priorities

My first encouragement is to be guided by your context. Do not start with fashionable technology trends or the most passionate speaker at that conference. What does your business need? What do your customers want?

Start by taking some time out to consider the most important challenges for your business. Here are a few potential issues to seed your review:

Identifying the highest business priority that customer insight can guide is a great place to start. As I advised when sharing my experience of how to influence “top table” executive committees, start with their needs. Even if other improvements are possible and more interesting, start with how analytics or research can help the wider business.

That will build the firmest foundation for influence.

Having said that, many of today’s insight leaders have to build a capability, whether it be improved data usage, analytics or data science. So, which goals make sense for them?

Capability building goals

Data Management

First, because almost no business has yet achieved full compliance, I must stress the importance of GDPR compliance.

To help identify which specific goals you need to set regarding GDPR, a review of these previous posts should help identify gaps:

See also: How to Keep Goals From Blowing Up  

For now, I would suggest that goals around using more big data (unless it is to improve your data quality) be postponed. Until you clearly understand how you will achieve compliance with GDPR and can evidence a plan, that should be your data priority, not least because, once you fully understand your responsibilities, less may be more when it comes to data usage.

Data science capability

The single most popular capability that today’s leaders are piloting is data science (including AI). That makes sense, as even the more advanced leaders are still exploring potential applications.

Some new products and services have been developed. Existing processes have been refined and automated. But the business case for most organizations is still far from proven.

My personal view is that most companies do not yet need data scientists; rather, better analytics would add more value. However, as coding languages become simpler and the most popular algorithms prove their relevance, that may change.

So, even if you are not a tech disruptor, if you can secure sufficient budget, now is a good time to experiment. I would simply caution you to set a goal of proving business applications and ROI, at low cost and low risk, for now.

Here are some posts to help guide where you might focus a goal to pilot a data science capability in your business:

Analytics capability

For many businesses, the capability with greatest potential to change how they operate is analytics. Unfortunately, the term has too often been misunderstood and either watered down or hijacked.

By watered down, I mean conflating business intelligence (BI) with analytics. Because of the widespread, vague use of the term, I come across many businesses that believe they have an analytics team. Upon closer inspection, I find the team is only skilled in producing BI reporting.

If educating your business on the difference between analytics and BI is one of your challenges, consider presenting a continuum. I’ve used a number of infographics over the years to show a maturity journey from simple data reporting through to data science. This can help show where descriptive, predictive and prescriptive analytics sit along that journey, compared with basic reporting. Do you need to set a goal to expand your analytics capability toolkit?
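If a concrete contrast helps that education effort, here is a tiny example on made-up data (assuming pandas and scikit-learn are available) of the difference between a descriptive, BI-style summary of what already happened and a simple predictive model estimating what is likely to happen.

```python
# An illustrative contrast on made-up data: a descriptive, BI-style summary
# (what happened) versus a simple predictive model (what is likely to happen).
import pandas as pd
from sklearn.linear_model import LogisticRegression

policies = pd.DataFrame({
    "age":              [25, 34, 45, 52, 61, 29, 48, 57],
    "claims_last_year": [0, 1, 0, 2, 1, 0, 1, 2],
    "lapsed":           [0, 0, 0, 1, 1, 0, 0, 1],
})

# Descriptive (BI reporting): summarize what has already happened.
print(policies.groupby("claims_last_year")["lapsed"].mean())

# Predictive (analytics): estimate the probability that a new policyholder lapses.
model = LogisticRegression().fit(
    policies[["age", "claims_last_year"]], policies["lapsed"]
)
new_policyholder = pd.DataFrame({"age": [50], "claims_last_year": [2]})
print(model.predict_proba(new_policyholder)[:, 1])
```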

With so much hyperbole surrounding data science, it is all too often allowed to subsume all analytics. I’ve met a number of leaders who assume any statistical modeling is now part of a data science capability.

For one goal, I’d suggest identifying where your analytics capability can most rapidly improve ROI. These posts may help guide your goal setting:

People capability

People are key. Often, the biggest predictor of impact is not the sophistication or even the relevance of analytics work, but the analyst.

All too often, I find that analysts lack any training beyond technical skills. It is as if they are simply to be programmed with coding/software/stats skills and left to get on with it. Given what a difference strong softer skills can make to individual analysts and teams, this is a real missed opportunity.

So, I encourage you, consider the people skills that you should target with relevant goals.

For developing individual analysts, I suggest considering:

For designing and developing better teams, I suggest:

Leadership capability

Last, but definitely not least, don’t neglect yourself as a leader. Rather than letting a personal development plan be a burden or afterthought, how about seeing it as a chance to invest in yourself?

I’ve written previously about the need for improved leadership capability among insight leaders. More organizations are waking up to this development need.

See also: 3 Steps to Succeed at Open Innovation  

Two regular conversations remind me of the continued importance of setting goals in this area. First, I meet (and sometimes coach) leaders who have technical expertise but lack experience operating at a senior level. Second, busy insight leaders tell me they cannot spare the time for coaching or mentoring despite obvious challenges.

If you recognize that you’d benefit from more investment in your leadership development this year, here are some posts on leadership development that should prompt your thinking, to craft a goal that is right for you:

What will your specific goals be?

Did you find those suggestions useful? Which were relevant to the goals you need to set? What specific goals are your top priorities?

Understanding New Generations of Data

Effectively acquiring customers, offering personalized products and providing seamless service requires careful analysis of data from which insights can be drawn. Yet executives cite data quality (or lack thereof) as the chief challenge to their effective use of analytics (Insurance Nexus’ Advanced Analytics and AI survey).

This may, in part, be due to the evolving nature of data and our understanding of how its changing qualities affect how we use it — as technology changes and different data sources emerge, the characteristics of data evolve.

More data is all well and good, but more isn’t simply…more. As new and more contextual streams of data have become available to insurance organizations, more robust and potent analytical insights can be drawn, carrying with them huge implications for insurance as a whole.

See also: Data, Analytics and the Next Generation of Underwriting  

Insurance Nexus spoke to three insurance data experts, Aviad Pinkovezky (head of product, Hippo Insurance), Jerry Gupta (director of group strategy, Swiss Re Management (US)) and Eugene Wen (vice president, group advanced analytics, Manulife), for their perspectives on what each generation of data means for the insurance organization of today, and how subsequent generations will affect the industry tomorrow.

See full whitepaper here.

While there is some disagreement over which generational bucket a given data type should fall into, current categorizations appear to be largely aligned. Internal, proprietary data is generally agreed to form first-generation data, with the second generation comprising telematics and tracking-device data. There is some contention over the categorization of third-party data, but these are largely academic distinctions.

Experts agree that we are witnessing the arrival of a new classification of data: third-generation. As Internet of Things (IoT) data becomes more commonplace, its incorporation with structured and unstructured data from social media, connected devices, web and mobile will constitute a potentially far more insightful kind of data.

While this is certainly on the horizon, and has been successfully deployed with vehicular telematics, “IoT, including wearables, in the personal lines space [and elsewhere], is still not widely adopted,” says Jerry Gupta, senior vice president, digital catalyst, Swiss Re. Yet he is confident that third-generation data will “be the next wave of really big data that we will see. Wearables will have a particular relevance to life and health products as one could collect [a] lot of health-related data.”

Download the full whitepaper to get more insights.

Despite this promise, there are significant roadblocks to effectively leveraging third-generation data. According to Aviad Pinkovezky, head of product at Hippo Insurance, the chief problem is one of vastly increased complexity: “This sort of data is created on demand and is based on the analysis of millions of different data points…algorithms aren’t just generating more data streams, they are taking new data, making decisions and applying them.” Clearly, this requires a change in how data is handled, stored and analyzed. Most significantly, third-generation data has the potential to change the nature of insurance.

See also: 10 Trends on Big Data, Advanced Analytics  

Given that data is no longer the limiting factor for insurance organizations, our research suggested five areas on which insurance carriers should focus to turn data into real-time, data-driven segmentation and personalization: cost, technical ability, compliance, legacy systems and strategic vision.

A challenge, certainly, but the potential rewards to both insurance carrier and insureds are hugely promising, especially the change in relationship between carrier and insured. The potential to not only predict, but mitigate, risk has huge implications for insurance.

Efficient, accurate and automated data gathering is a clear benefit for insurance carriers, and the potential to provide value-added services (by mitigating risk altogether) greatly enhances their role in the eyes of the customer. Measures that reduce risk to the insured increase trust and strengthen the bond between the carrier and the insured. Customers are less likely to view insurance as a service they hope to never use but, rather, a valuable partner in keeping themselves secure, both materially and financially.

The whitepaper, “Building the Customer-Focused Carrier of the Future with Next-Generation Data,” was created in association with Insurance Nexus’ sixth annual Insurance AI and Analytics USA Summit, taking place May 2-3, 2019, at the Renaissance Downtown Hotel in Chicago. Expecting more than 450 senior attendees from across analytics and business leadership teams, the event will explore how insurance carriers can harness AI and advanced analytics to meet increasing customer demands, optimize operations and improve profitability. For more information, please visit the website.