Why Insurtech Funding Dried Up

InsTech CEO Matthew Grant says insurtechs' ideas were often "No. 11 on a board's list of top 10 priorities," and "change is hard" in insurance.

Matthew Grant has been building and advising companies on the innovative use of technology, data and analytics for over 30 years. With a degree in mechanical engineering, Matthew has been involved in the entire cycle of risk management, insurance and capital management. He was responsible for launching catastrophe modeling into Europe in 1992 and was on the executive team of Risk Management Solutions (RMS) for eight years. Over his career at RMS from 1996 to 2016, his roles included running global sales and marketing, leading the product team and developing emerging market solutions, including catastrophe bonds and parametric products.

Matthew joined InsTech in 2016 and is the CEO and co-owner. InsTech currently works with over 170 companies in insurance and technology that are supporting innovation, and the InsTech team provides advice, events, content and introductions. InsTech engaged with over 100,000 people around the world in 2023.

Matthew hosts the weekly InsTech podcasts and writes regularly on topics and themes relevant to insurance and risk management; parametric insurance is a major theme at this time. He speaks frequently at InsTech events and other industry conferences and is an angel investor in U.K.-based technology companies supporting insurance and financial services.


Paul Carroll

When insurtech became a trend, people initially talked about widespread disruption. “Amazon is going to come in and kill all the insurers”—that sort of thing. People seem to have moved past that, and we’re now in Insurtech 2.0 or even 3.0. To start us off, how would you broadly characterize the sweep of insurtech over the last 10 years or so?

Matthew Grant

It has occurred to me that innovation is what you do when nothing else works.

People can't help thinking incrementally. And unless you've got some big external force driving change, it's almost impossible to come along and say, "Here's a new idea."

We're not going to change the world if people don't have a compelling need to change.

I am careful about using the term “insurtech” because it’s imprecise, and it can exclude companies that have been around for more than a decade. Many well-established technology companies are still innovating and successfully releasing new products.

Paul Carroll

Where do we go with funding from here? Everything was up, up, up. And then for the last year and a half or more, funding has trailed off. Do you see the big funding days ever coming back? Or do you see us moving into a new phase in terms of how people fund innovation?

Matthew Grant

If you look at the unicorns, what drove the crazy valuations was this rush to IPOs [initial public offerings], a long way from any conventional earnings multiples. What we’re seeing now is much more of a return to a traditional EBITDA model. We are also seeing more backing by private equity that needs to make short-term returns to fund its acquisition debt.

Things are slightly different in the U.S. and Europe, because Europe is always a little bit more traditional. The survivors among the start-ups are typically companies started by people in the industry, who take a more old-fashioned approach. They get customers, generate revenue, start to make a profit and THEN go out and look for investment. At this later stage, they can bring in a decent investment, and they use that to grow the business.

But it’s very difficult now to get investment based on a valuation of the multiples of earnings, or even revenue, that were common a couple of years ago. VCs [venture capitalists] and PE [private equity] still have to deploy their funds somewhere, so it’s not drying up totally. But high interest rates complicate things, and we’re coming back to a greater level of scrutiny.

Paul Carroll

Yeah, I've seen this before, in particular in the internet days. People get excited by the possibilities of growth and convince themselves that trees can grow to the sky. Then reality sets in, and investors start to demand profits and cash flow.

Matthew Grant

I also think there’s a key difference between insurtech and the rest of fintech. In the banking world, we make decisions much more frequently than we do in insurance. In banking, someone might make a decision a few times a day. If you’re a trader, actions happen in milliseconds. But insurance contracts come once a year. So you can live with a lot more of a clunky process.

When you look at the thousands of insurtech companies that have started up, most of them address problems that are No. 11 on a board’s list of top 10 priorities. The insurtechs have nice ideas, and you can see how they’d make some money, but is a company really going to make the effort to bring in the insurtech’s product or service if there isn’t a burning problem for them that demands change now?

Paul Carroll

Back in the early internet days, I, and others, wrote favorably about what we called the arms suppliers. Lots of companies tried to rewrite the rules of commerce, and they generally failed. But a company like Sun, which sold lots and lots of servers to the companies trying to rewrite the rules of commerce, thrived. Do you see that sort of trend with insurtechs?

Matthew Grant

If you look at the technology adoption curve, there's only ever going to be a small number of insurance companies that are willing to work with new technology and new companies and be comfortable with the pain of onboarding. The industry is still working through that stage of “early adopters” before we move to the “early majority.”

What’s really interesting here in the U.K.—I think more than in the U.S.—is that companies that started up eight to 10 years ago have proved themselves to some insurance companies and are being trusted to do more across their client companies to make other processes smoother. And start-ups now don’t just collaborate with the insurer; they collaborate with each other, too. You get this collaboration of the willing, where they cross-fertilize each other, and you then get a multiplier effect of benefit to all.

Some established companies like Guidewire and RMS are starting to open up their environments to anyone they think is strong enough technically to connect in.

Paul Carroll

This has been great. Any final thoughts?

Matthew Grant

If you look at Lloyd’s, which is intended to be the insurer of last resort, there are a few isolated cases of people doing interesting things with unusual risks, but there’s not this wholesale adoption of new ideas. It's more about following the business where rates have gone up. If rates are up 10% for reinsurance, they will want to go and write more reinsurance. Rates are going up for cyber, so write more cyber.

I think the insurtech vision dried up because there wasn't a big enough appetite on the receiving end, and change is hard. Parametric insurance is an example of a good idea for changing how insurance is offered, and of a solution for risks where traditional insurance isn’t available. There are an increasing number of companies in this space, but only a few have been very successful.

Whether that’s a warning or not, we can't just blame the disruptors for being overly enthusiastic. From a commercial point of view, the insurance market has to overcome a lot of friction to do things differently.

Paul Carroll

Thanks, Matthew. I always feel smarter after we talk.


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

4 Challenges in Adopting Broker Technology

Change management will become a critical capability for insurance agencies, as resisting technological innovation can cost companies millions

KEY TAKEAWAYS:

--The four main challenges agencies face in getting new technology adopted are: lack of incentive and motivation; too little time and capacity; a feeling by agents that they are unready; and not enough emotional and informational support.

--Three steps can overcome the obstacles: Identify a champion, define the value and measure the results.

----------

With more insurance agencies ready to lean into this digital transformation era, brokers need to embrace new technology to succeed.  

However, change is hard: so hard that the same framework that models the five stages of grief is used by modern change management professionals to facilitate organizational change. The path to accepting change is represented perfectly by the Kübler-Ross change curve, which sets out the stages: shock, denial, frustration, depression, experimentation, decision and integration. While the stages aren't linear, they show how any significant and disruptive alteration will always rock the boat—but needn't be the end of the world.

These recognized stages also mirror the four main challenges I’ve experienced as an insurtech owner and change advocate. Change management will become a critical capability for insurance agencies, as resisting technological innovation can cost companies millions, so let's look at the four main challenges to technology adoption and see how agencies can overcome them. 

The four challenges

1. Lack of incentive and motivation 

It's unlikely that brokers will feel motivated if the benefits of technological change aren't personally relevant to them. If brokers don’t internalize how new technology will simplify their day—reducing manual processing, say, or making it easier to delight their clients and so hit their bonuses—they may struggle to see the transition's value.

The fear of the unknown can also be a powerful demotivator; brokers will likely hesitate to embrace new technologies if they're uncertain about how these tools will affect their daily tasks, client relationships and overall job satisfaction.

If brokers hit a rough patch in technology adoption, they’ll likely want to throw the towel in—who can blame them?

2. Reduced time and capacity 

All good things take time. However, the multifaceted nature of a broker's role involves balancing many priorities simultaneously, so it's common for brokers to struggle to make time for learning new technology.

Furthermore, integrating a new technology can disrupt established workflows and operational processes. In the beginning, brokers will be adding another system to an already complex workflow; their understanding of the desired outcome will keep the business running as usual, but doing things in a new way can be unsettling at first. This type of change and uncertainty often leaves little time for in-depth exploration and integration of a new technology, leading to discomfort and a perpetual cycle of deferred adoption.

Put simply, brokers need explicit time set aside to learn the new technology, because they already have a day job, which is their priority. Otherwise, learning something new has to fit into whatever gaps remain in their schedule.

See also: The Next Generation of Talent

3. Feeling unready and unprepared 

The speed at which technology advances can leave brokers feeling like they’re in a constant game of catch-up. New tools, software and platforms continually emerge, creating a dynamic landscape that demands adaptation. It's no wonder this can make brokers feel unprepared. 

Although some insurance executives may disagree, before you throw brokers in at the deep end, you need to teach them how to swim. Brokers may not have received adequate training on the latest digital tools, analytics platforms or AI applications, leaving them ill-equipped to navigate the evolving technological terrain. 

4. Lack of emotional and informational support 

In a nutshell, brokers need somewhere to ask questions (or vent) and get feedback when they're asked to adapt to technological change.

There exists a palpable fear among some brokers that embracing technology might make certain aspects of their roles redundant, evoking concerns about job security. In addition, when a solution automates a multi-step process that a broker could previously validate along the way, there is an inevitable sense of uncertainty about the quality of the output. Without emotional support from leadership and colleagues, this fear can escalate, leading to decreased morale and productivity. 

So, as you can imagine, when brokers don't know what to expect, the natural reaction is to push back against the change.

Three steps to aid technology adoption

As the line often attributed to Albert Einstein goes, "The measure of intelligence is the ability to change." In other words, only forward-thinking agencies will succeed in this competitive market.

But adopting a transparent change management system might be all you need to overcome the four hurdles listed above, and it can be as simple as following these three steps.

1. Identify a champion

Every story needs a hero, and that's no different when it comes to change. Insurance agencies need to find someone internally to be accountable for the rollout. This person understands and cares about the problem and wants to advocate for the proposed technology adoption.

Agencies need to let everyone know who is leading the charge. But they also need to provide incentives for their champion: Determine what they want, and give it to them if they do well.  

Gartner found that when employees primarily own implementation planning, the success of the change increases by 24%. So identifying a primary candidate who can coordinate with colleagues and departments, ensuring everyone is heard and catered to, can massively increase the adoption's chances of success.

Simply giving the technology change a human face helps those who need extra support: They know who is spearheading the charge and can request assistance.

See also: Why Brokers Should Embrace AI

2. Define the value

From the start, it needs to be clear: What benefit does the technology give each individual broker? What advantage does it provide the company? And how will it help the organization’s bigger picture? 

To answer these questions, management must decide what metrics to use to determine if the technology delivers value. 

Defining the value can also help companies decide if they want to roll out their technology change across all departments or if phasing it in one by one is more suitable. Clarifying this can help overcome the hurdles of motivation and readiness, as brokers clearly understand how this new technology will benefit them and how it will be measured.

The ideal champion is someone who cares about the company and understands the benefit of the technology to both the individual and the company.

3. Measure the results

It's essential to start with the end in mind and ensure you have an established method to measure the expected value. One essential metric to measure is employee adoption, as knowing this helps predict the change's potential longevity. 

Whether it’s policy processing time, renewal rate, employee adoption or agent productivity, by measuring key metrics regularly, brokers can keep their change management plan on track. It is necessary to check in often with the champion and employees to ensure that measurements are correct and that the technology adoption is successful and aiding brokers in their daily lives.  

Wrapping up

Nobody ever said change was easy, but that doesn't mean it needs to be a traumatic uphill battle. By recognizing broker pain points and using the three steps above to overcome them when implementing a tech-related change, agencies can help brokers feel understood and heard while they come to accept and appreciate how the new technology serves them.


Jason Keck

Jason Keck is the founder and CEO of Broker Buddha, which transforms the application and renewal process to make agencies far more efficient and profitable.

He is a seasoned technology entrepreneur and brings 20 years of experience across digital and mobile platforms to the insurance industry. Before founding Broker Buddha, Keck led business development teams at industry unicorns, including Shazam and Tumblr.

A Harvard graduate with a degree in computer science, Keck also worked at Accenture and Nextel.

What’s Climate Change Got to Do With Life Insurance?

Climate change can cause extreme temperatures, air pollution, infectious diseases and mental health issues, all of which endanger lives. 

KEY TAKEAWAY:

--There are four key elements to consider when evaluating and predicting how climate change affects life and health insurance: types of insurance product, demographics of insureds, their location and their ability to adapt.

----------

2023 was a turbulent year for us in the insurance industry in many respects. Risks continued to manifest worldwide, especially those arising from the geopolitical, economic and natural catastrophe landscape. On many fronts, this trend is expected to continue in 2024, especially for risks related to climate change.

In the U.S., observers point to effects from extreme heat, drought, flooding and catastrophic wildfires striking throughout the nation, widely and deeply affecting lives and livelihoods.

Life and health insurers have a naturally long-term perspective, and their interests are aligned with those of the people they protect, with a common interest in everyone living a long and healthy life. It is from this viewpoint that they need to understand the relevance of climate change to their insureds’ mortality and morbidity.

Compared with a decade ago, more insurers are taking this subject into account and have new knowledge and tools to help transform gained insights into strategic decisions.

See also: Making Life Settlements More Transparent

Major routes of impact 

We have identified the following four relevant consequences of climate change on human health, which are expected to have significant impacts on the life and health insurance industry:  

  1. Extreme temperatures. Unusually high and low temperatures, exacerbated by climate change, disrupt ecosystems and challenge the bodies of humans and livestock. Extreme temperatures, for example, increase the risk of cardiovascular, respiratory and cerebrovascular diseases and worsen chronic conditions such as diabetes and kidney diseases. Whether the expected gains from fewer cold-related deaths will balance the losses from more heat-related fatalities and medical conditions depends on the geographic location under consideration. Adaptation will be the largest variable determining the long-term impact.
  2. Air pollution. Climate change — along with industrial activity, transportation and traffic — causes air pollution. Wildfires and dust storms expose people to additional loads of particulate matter, exacerbating asthma and chronic obstructive pulmonary disease. Increased pollen exposure driven by climate change also triggers allergic reactions and lung diseases. Likewise, indoor air quality is a major health issue and is influenced, for instance, by heavy rainfall and floods.
  3. Infectious diseases. Climate change leads to shifts in ecological conditions that, in turn, trigger changes in the spatiotemporal (location and time) distribution of pathogens, parasites and diseases. Many of those are vector-borne, that is, carried by intermediate hosts such as ticks or mosquitos. Among the transmitted diseases are Lyme disease, malaria, Zika virus fever and dengue fever, which all surged in recent years. The story is complicated, as overall warming is only one influence among many. Others include rainfall patterns, urbanization and human mobility, as well as increased interactions between humans and wildlife.
  4. Mental health. Climate change not only affects human bodies but also our minds. Coping with the consequences of changes in our environment takes its toll: Studies have shown that hotter temperatures are associated with mood disorders, substance use disorders, anxiety and stress disorders, schizophrenia and self-harm. Living through a natural catastrophe or extreme weather event can cause post-traumatic stress disorder. In addition, poor mental health can harm the immune system. Although the magnitude of the impact of climate change-related mental health conditions is yet to be determined, it could become a major concern for both mortality and morbidity if not properly addressed. 

See also: How Millennials Revolutionized Life Insurance

Relevance of climate change for life and health insurance 

The impact of climate change on people’s lives is highly complex and depends heavily on certain factors. The following are four key elements to consider when evaluating and predicting how climate change affects life and health insurance: 

  1. Types of insurance products. The degree to which climate change affects insurers’ life and health insurance business depends on their product portfolio and associated risks. Health insurance, which carries morbidity risk, for example, may be exposed to more direct and short-term impacts compared with life insurance. But life insurance, particularly permanent life, could face long-term and possibly larger mortality risks. Annuity, critical illness or long-term care products also have their own types of risk exposure.
  2. Demographics of insureds. Demographic and sociodemographic elements, including age, occupation, income and education, heavily influence people’s vulnerability and resilience to climate change risks. Studies find that those in lower socioeconomic groups, as well as children and older people, tend to be overexposed to the impacts of climate change. Regarding the insured population, medical underwriting, during which some applicants have to be turned down due to preexisting health conditions, acts as a filter that leads to an insured sub-population with better average health compared with the general population.
  3. Geographical consideration. Insureds’ geographical factors naturally influence their climate risk exposure, such as extreme temperatures, humidity, air quality, catastrophic events and ecological conditions. The fatality rate caused by many diseases is also influenced by the quality and availability of access to local healthcare, which is linked to people’s places of residence.
  4. Adaptation. The long-term trajectory of response to climate change’s impacts on biometric factors is highly dependent on the speed, size and quality of adaptation. It has been shown, for instance, that the death toll of heat waves is not driven by absolute temperatures but by relative increases against what people are used to. Even small changes in behavior, such as checking for ticks after a walk or draining stagnant water pools, and preventive measures such as ensuring hydration and rest during peak exposures, can move the needle a lot when implemented broadly and consistently.   

Outlook

Insurers can gain significant benefits from understanding climate change and its most relevant aspects and from partnering with stakeholders, including the medical community, researchers, reinsurers and climate experts. The aim is to better understand the impact of climate change on the mortality and morbidity of the insured population and, ultimately, on the insurers’ business. The common goal will always be to support individuals in living long and healthy lives, now and in the future.

For more in-depth information, see the multi-part article, “The relevance of climate change for life insurance,” published by SCOR.


Irene Merk

Irene Merk is an actuary and emerging risks ambassador for SCOR, a global reinsurance company providing services and solutions in property and casualty, life and health, and investments. It serves clients in more than 160 countries from its 35 offices worldwide.

 

Data Science Is Transforming Public Health

Key developments include improvements in disease prevention, health surveillance and delivery models.

According to the Centers for Disease Control and Prevention (CDC), here are the numbers of annual deaths in the U.S. from leading causes: 

  • Heart disease: 695,547 
  • Cancer: 605,213
  • Accidents (unintentional injuries): 224,935
  • Stroke (cerebrovascular diseases): 162,890
  • Chronic lower respiratory diseases: 142,342
  • Alzheimer’s disease: 119,399 
  • Diabetes: 103,294 
  • Chronic liver disease and cirrhosis: 56,585
  • Nephritis, nephrotic syndrome and nephrosis: 54,358 

While we cannot prevent death itself, we can certainly make strides in reducing the mortality rate for many of these leading causes of death by harnessing the power of data science. The intersection of data science and public health is catalyzing transformative advancements, reshaping the healthcare landscape.

Key developments include improvements in disease prevention, health surveillance and delivery models. Data science's multifaceted impact on public health ranges from evolving methodologies and integration of diverse data sources to improved decision-making processes. Data science can highlight areas for intervention at a population level — such as identifying regions with high prevalence of a disease or areas lacking in healthcare resources. This can lead to more targeted prevention campaigns and healthcare strategies. 

The advent of machine learning, artificial intelligence and other data science methodologies has revolutionized public health by enabling analysis of complex, large-scale data sets. These techniques extract meaningful insights from sources ranging from electronic health records, genomics and wearables to social media, enhancing our ability to predict diseases, detect them early and provide personalized medicine.

Predictive models, clustering algorithms and natural language processing have accelerated research and facilitated precise, tailored public health interventions. Applying machine learning algorithms to patient profiles can help make predictions about patients' health trends, automate routine tasks and even provide diagnosis based on the profile inputs, enhancing the overall interaction and engagement with patients. While it’s unlikely we will fully eradicate all these causes of death, leveraging data science can help us make significant strides in understanding, preventing, and effectively treating these diseases, ultimately improving quality of life and extending longevity. 
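To make that concrete, here is a minimal sketch, in Python, of the kind of predictive modeling described above. Everything in it is synthetic: the feature names (age, BMI, blood pressure, smoking status) and the outcome rule are invented for illustration, so this is a toy workflow, not a clinical model.

    # Toy sketch: predicting a health outcome from patient-profile features.
    # All data is synthetic; fields are illustrative, not from any real EHR.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Columns: age, BMI, systolic blood pressure, smoker flag (0/1).
    X = np.column_stack([
        rng.integers(20, 85, n),
        rng.normal(27, 5, n),
        rng.normal(125, 15, n),
        rng.integers(0, 2, n),
    ])

    # Synthetic outcome whose risk rises with age, BMI and smoking.
    logit = 0.04 * (X[:, 0] - 50) + 0.10 * (X[:, 1] - 27) + 0.8 * X[:, 3] - 1.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

Real systems add data governance, clinical validation and calibration, but the shape is the same: profile features in, a risk score out.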

See also: Maximizing AI's Impact in Group Insurance

Benefits 

Patient Engagement: Data science tools can monitor a patient’s health in real time, allowing healthcare providers to engage patients by providing feedback and health tips. This helps improve health outcomes and patient satisfaction. 

Improved Communication: Patient interaction can be significantly improved with email reminders for appointments or regular check-ups, notifications for medication schedules and detailed explanation of their treatment plans. 

Automation of Routine Tasks: AI can automate routine administrative tasks such as data entry, appointment scheduling and billing, which frees healthcare professionals to focus on patient care. This results in cost savings in terms of labor hours and reduces human error. 

Predictive Analytics: AI can analyze patient data to predict health trends, prevent disease onset and determine the most effective treatments. This not only improves patient outcomes but can also decrease costs associated with unnecessary treatments or late interventions. 

Improved Diagnostics: AI can be used to analyze images and diagnostic tests more quickly and accurately than humans can. This speeds up the diagnostic process, decreases the need for repeat tests and can get treatments started earlier, all of which lead to cost savings. 

Patient Monitoring: AI systems can monitor patient conditions in real time, reducing the need for hospital admissions or lengthy hospital stays and minimizing the risk of readmissions, creating considerable savings. 

Drug Discovery: AI can speed the process of drug development and clinical trials, which are typically costly and time-consuming. By being able to analyze vast amounts of data quickly, AI can identify potential treatments faster and cheaper. 

Precision Medicine: AI can help create treatment plans customized to each patient's specific needs based on their genetics, lifestyle and other factors. This results in more effective care and can prevent the use of expensive, unnecessary treatments. 

Supply Chain Management: In healthcare facilities, AI can predict the need for resources such as hospital beds, medical equipment or even staffing. This allows for better usage and planning, reducing waste and associated costs. 

Telemedicine: AI-driven telemedicine platforms can decrease unnecessary hospital visits by providing virtual consults for non-emergency cases or regular follow-ups. 

See also: Why Becoming Data-Driven Is Crucial

Data Integration

Data science has greatly improved our ability to integrate diverse data sources, leading to more comprehensive insights into health determinants and outcomes. 

The use of data science in public health also favors the integration of traditionally siloed data sources, ranging from clinical data to socioeconomic and behavioral data. 

By combining clinical, environmental, socioeconomic and behavioral data, researchers can take a holistic approach to public health. As a result, professionals can identify sociodemographic disparities, spot environmental risk factors faster and evaluate intervention effectiveness more accurately. Through real-time monitoring and analytics, health practitioners can respond promptly to health threats and implement evidence-based interventions more effectively. 
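As a small sketch of that kind of integration, the Python snippet below joins clinical, environmental and socioeconomic tables on a shared region key; every table, column name and number here is invented for illustration.

    # Sketch: integrating siloed data sources on a shared key (all data invented).
    import pandas as pd

    clinical = pd.DataFrame({"region": ["A", "A", "B", "C"],
                             "asthma_rate": [0.12, 0.11, 0.08, 0.15]})
    environment = pd.DataFrame({"region": ["A", "B", "C"],
                                "pm25_ugm3": [14.2, 8.1, 19.5]})
    socioeconomic = pd.DataFrame({"region": ["A", "B", "C"],
                                  "median_income": [41000, 58000, 33000]})

    merged = (clinical.groupby("region", as_index=False).mean()
              .merge(environment, on="region")
              .merge(socioeconomic, on="region"))

    # Once combined, disparities and risk factors can be examined together.
    print(merged)
    print(merged.corr(numeric_only=True)["asthma_rate"])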

Despite these advantages, data science's use in public health does present ethical considerations and challenges. Safeguarding privacy, ensuring data security and addressing algorithmic bias need careful consideration to ensure the equitable distribution of data science benefits across diverse populations. Establishing strong ethical frameworks, using data responsibly and communicating transparently are important for fostering public trust and upholding ethical principles. 

Data science plays a crucial role in shaping public health policies and strategies. Decision support systems powered by data science aid in evidence-based policy formulation, optimized resource allocation and streamlined healthcare delivery. Examination of case studies where data-driven insights informed successful public health interventions highlights the effectiveness of this approach. Integrating data science into public health governance opens avenues for innovative approaches to tackle challenging health issues. 

Collaboration and interdisciplinary partnerships emerge as pivotal themes, with the need for synergies among data scientists, epidemiologists, healthcare professionals, policymakers and communities being emphasized. Successful collaborations have led to the development of robust data infrastructures and empowered communities through participatory approaches. Democratized access to data science tools and knowledge is key to building a resilient public health ecosystem. 

In conclusion, it's clear that data science tools are critical in public health, revolutionizing research methodologies, enhancing decision-making processes and informing policy formulation. This article serves as a road map for researchers, practitioners and policymakers to harness the full potential of data science for global health betterment.


Mandhir

Mandhir is a software development senior engineering lead at Elevance Health.

He has two decades of experience specializing in software product development for healthcare, focusing on data science and analytics solution engineering, architectural design, data integration and reporting technologies.

How to Help Retailers on Cyber Risk

Retailers need to integrate application security posture management (ASPM) into their cyber risk management strategy.

In recent years, retail companies have increasingly moved their applications and infrastructure to the cloud to take advantage of its scalability, flexibility and cost-effectiveness. However, this shift to the cloud has also introduced new security challenges, particularly in the realm of application security. Attackers are constantly looking for ways to exploit vulnerabilities in retail applications to gain access to sensitive data or disrupt business operations. To mitigate these risks, retailers need to adopt a comprehensive security posture management approach that covers both cloud security posture management (CSPM) and application security posture management (ASPM).

While CSPM solutions focus on monitoring and securing the cloud infrastructure itself, it's the ASPM solutions that secure the retail applications running on that infrastructure. ASPM is a holistic approach to application security that involves continuous discovery and monitoring, assessment, detection of business logic exploitation and remediation of applications and their vulnerabilities across the entire software development lifecycle. It helps organizations identify and prioritize security issues and provides guidance and tools to help them mitigate and remediate vulnerabilities -- protecting firms from unauthorized data access; interception; manipulation; regulatory violations (including the payment card industry data security standard, or PCI DSS, and the General Data Protection Regulation, or GDPR); fraud; and disruption of services.

By integrating ASPM into their security posture management strategy, retail organizations can discover APIs in use they may not have known about, identify vulnerabilities in their applications, prioritize remediation efforts and ultimately reduce their overall security risk. Furthermore, by filling coverage gaps in CSPM, ASPM can help retail companies save money by avoiding costly security breaches, financial losses, compliance issues, reputation damage and downtime.

See also: Tackling the Surge in Cyber Premiums

To leverage ASPM to save costs and fill coverage gaps found in CSPM, follow these best practices:

1. Discover and prioritize critical applications - One of the biggest challenges for CSPM is discovering and determining which applications and services are most critical to the organization. ASPM can help by discovering all APIs in use, mapping those APIs to specific web and mobile applications, providing visibility into the security posture of all applications and identifying which ones have the most sensitive data. This information can help retailers prioritize their security efforts and allocate resources more effectively.

By focusing on the most critical APIs and applications first, organizations can save costs and reduce their overall risk exposure, particularly because they deal with so much sensitive customer information -- including financial transactions, addresses, purchase history and account details. They can also ensure that their security efforts are aligned with their business goals and objectives.
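A trivial sketch of that prioritization step, with invented application records and an arbitrary scoring rule, might look like this in Python:

    # Sketch: rank discovered applications by a simple priority score.
    # Fields and weights are invented; real ASPM tools use far richer signals.
    apps = [
        {"name": "checkout-api",  "handles_pci": True,  "records": 2_000_000, "open_vulns": 4},
        {"name": "loyalty-api",   "handles_pci": False, "records": 800_000,   "open_vulns": 7},
        {"name": "store-locator", "handles_pci": False, "records": 0,         "open_vulns": 1},
    ]

    def priority(app):
        score = app["open_vulns"]                 # outstanding vulnerabilities
        score += 10 if app["handles_pci"] else 0  # payment data raises the stakes
        score += app["records"] / 500_000         # weight by sensitive-record volume
        return score

    for app in sorted(apps, key=priority, reverse=True):
        print(f"{app['name']:<14} priority={priority(app):.1f}")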

2. Automate security testing and compliance checks - Another way that ASPM can save costs and fill coverage gaps is by automating security testing and compliance checks. With the increasing complexity of cloud environments, manual testing and compliance checks can be time-consuming and error-prone. Automating these processes can help retail firms identify vulnerabilities and non-compliant configurations more quickly and accurately, helping to protect their reputation and consumers' private data, and build trust with customers.

By automating security testing and compliance checks, organizations can save costs on manual testing and reduce the risk of human error. They can also ensure that their security efforts eliminate regressions as features are added to cloud-native applications in today's dynamic environments.

3. Integrate security into the development process - ASPM can also help retail companies fill coverage gaps by integrating security into the software development process. By incorporating security scans into this process, retailers can ensure that security is built into the application from the ground up. This can help reduce the number of vulnerabilities that need to be remediated later.

4. Monitor application behavior in real time - Another key aspect of ASPM is monitoring application behavior in real time. This involves using runtime tools that can detect and alert on suspicious activity, such as unauthorized access attempts or data exfiltration. By monitoring application behavior in real time, retail organizations can quickly detect and respond to security incidents, minimizing the potential impact on the business. Anomaly detection based on machine learning has become more mainstream in recent years, addressing these types of API- and application-centric attacks.
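As a minimal illustration of runtime monitoring, the sketch below flags bursts of failed authorization attempts per client. The event fields and thresholds are invented, and a production system would add the ML-based detection mentioned above.

    # Sketch: flag a client that accumulates too many 401/403s within a window.
    from collections import defaultdict, deque

    WINDOW_SECONDS, MAX_FAILURES = 60, 5
    failures = defaultdict(deque)  # client_ip -> recent failure timestamps

    def observe(ts, client_ip, status):
        if status not in (401, 403):
            return None
        q = failures[client_ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW_SECONDS:  # drop aged-out failures
            q.popleft()
        if len(q) > MAX_FAILURES:
            return f"ALERT: {client_ip} had {len(q)} auth failures in {WINDOW_SECONDS}s"
        return None

    # A burst of failed attempts from one address trips the alert.
    for i in range(7):
        alert = observe(1000 + i, "203.0.113.9", 403)
        if alert:
            print(alert)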

5. Use automation to streamline remediation efforts - Remediating vulnerabilities can be a time-consuming and resource-intensive process. However, by using automation tools to streamline the process, retailers can reduce the time and effort required to fix vulnerabilities in application code, infrastructure-as-code (IaC), and cloud services. For example, some ASPM solutions can automatically provide Terraform and CloudFormation scripts to auto-remediate application- and API-layer exploits by hardening runtime production configurations. By using these tools to automate the remediation process, organizations can save time and reduce their overall security risk.

See also: Why Becoming Data-Driven Is Crucial

Integrate ASPM with CSPM

To get the most out of their security posture management efforts, retail companies should integrate ASPM with CSPM. By doing so, they can fill coverage gaps in CSPM -- including API discovery and vulnerability checks -- to identify and address vulnerabilities in their applications that cannot be detected by CSPM alone. This integration can also help organizations save costs by avoiding security breaches, compliance issues, fines and downtime caused by application vulnerabilities. Unlike CSPM, ASPM enables organizations to continuously monitor the security posture of applications and services so they can identify areas for improvement, remediate vulnerabilities and reduce risks.

Overall, ASPM is a powerful tool. By discovering all APIs, identifying and prioritizing critical applications, prioritizing remediation efforts, automating security testing and compliance checks, integrating security into the development process, using risk-based prioritization and monitoring for continuous improvement and auto-remediation, retail companies can reduce their overall risk exposure and ensure that their applications and data are secure.

The Threat From Quantum Computing

The urgency to offer post-quantum cryptographic and quantum resilient network solutions to customers has never been greater.

In today's digital age, the demand for secure data storage and transmission has never been greater. With the advent of quantum computing, the traditional cryptographic methods we've relied on for decades are becoming increasingly vulnerable. To address this emerging threat, businesses are seeking robust solutions that can help safeguard their sensitive information and protect that data from steal-now-decrypt-later (SNDL) attacks.

Hosting service and colocation providers offering robust network services are in a prime position to capitalize on this opportunity by partnering with leading post-quantum cryptography (PQC) companies to deploy PQC as a service to their customers. This article explores the immense business growth potential for these providers in the rapidly evolving world of data security and cryptographic agility and the huge upgrade to public key infrastructure (PKI) that is ahead.

See also: Unauthorized Use of Auto Claims Data

The Quantum Threat and Available Solutions

Before delving into the growth opportunities, it's crucial to understand the quantum threat. Quantum computers, with their unprecedented processing power, will have the potential to crack current cryptographic algorithms with ease. This has severe implications for organizations in industries such as financial services, insurance, healthcare and the public sector, as confidential information, financial transactions and communications could be compromised. The urgency to offer post-quantum cryptographic and quantum resilient network solutions to hosting customers has never been greater.

With this rapidly emerging critical threat creating a massive business opportunity, companies have emerged to help drive the PQC and cryptographic orchestration evolution. By leveraging innovative research and technological advancements, leaders in this space have developed a suite of crypto-agile post-quantum cryptographic solutions that are both highly secure and efficient based on proposed National Institute of Standards and Technology (NIST) quantum-resistant public key algorithms. These solutions are poised to protect enterprises from both quantum and classical threats and to ensure the continued confidentiality of their data.
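What "crypto-agile" means in practice can be sketched in a few lines: application code depends on a key-establishment interface rather than on a concrete algorithm, so a NIST post-quantum KEM can be swapped in, or run in hybrid with a classical exchange, without touching callers. In the Python sketch below, the X25519 exchange is real (from the widely used cryptography package), while the post-quantum class is a deliberately hypothetical placeholder, since PQC bindings vary by vendor.

    # Sketch of cryptographic agility: callers depend on KeyExchange, not on a
    # concrete algorithm, so implementations can be swapped as standards evolve.
    # X25519 is real (pip install cryptography); the PQC class is a stub.
    from abc import ABC, abstractmethod
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    class KeyExchange(ABC):
        @abstractmethod
        def shared_secret(self) -> bytes: ...

    class ClassicalX25519(KeyExchange):
        def shared_secret(self) -> bytes:
            # Both sides generated locally purely for demonstration.
            ours, theirs = X25519PrivateKey.generate(), X25519PrivateKey.generate()
            return ours.exchange(theirs.public_key())

    class HypotheticalPQKem(KeyExchange):
        """Placeholder for a NIST PQC KEM (e.g., ML-KEM); real bindings vary by vendor."""
        def shared_secret(self) -> bytes:
            raise NotImplementedError("swap in a vendor's PQC library here")

    def establish_session(kex: KeyExchange) -> bytes:
        return kex.shared_secret()

    print(len(establish_session(ClassicalX25519())), "byte shared secret")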

Growth Opportunities for Hosting Service and Colocation Providers

By partnering and offering this technology, hosting service and colocation providers can differentiate themselves from competitors and capture part of this massive upgrade cycle. Security-conscious customers are increasingly looking for partners that can offer advanced data protection against quantum threats. This unique selling point can attract a broader customer base, including businesses that handle extremely sensitive information such as healthcare providers, financial institutions and government agencies.

Offering PQC as a service enables hosting and colocation providers to expand their network services portfolio in this greenfield market. This diversification not only caters to existing customers but also opens doors to new markets. With the addition of PQC services, providers can address the needs of businesses that previously may not have considered them as workable options.

One of the most compelling reasons for hosting service and colocation providers to offer these solutions is the potential for substantial revenue growth. As the demand for PQC grows, so does the revenue potential for those who can provide it. Moreover, offering cryptographic services can lead to more extended and profitable customer relationships. Hosting service and colocation providers can become the trusted provider of choice by protecting organizations' data from the oncoming quantum threat.

In addition, on Dec. 21, 2022, President Biden signed into law H.R.7535 - The Quantum Computing Cybersecurity Preparedness Act, encouraging federal government agencies to adopt technology that will protect against quantum computing attacks. This marks a milestone in the global effort to develop and deploy quantum-resilient cybersecurity. So compliance is now driving the entire federal government on an upgrade path to PQC.

One of the primary aims of hosting service and colocation providers is to safeguard their customers' data. By deploying PQC as a service or as a single-tenant capability, these providers offer a heightened level of security that can help retain and attract clients. Customers will appreciate the commitment to their data security, creating a compelling reason to choose providers that offer PQC in their mix of solutions.

Keeping up with the rapid pace of technological advancements in the field of cybersecurity can be daunting. Partnering with leading PQC vendors allows hosting service and colocation providers to tap into cryptographic expertise and stay at the forefront of PQC developments. Partnerships of this kind ensure access to the latest security measures, enhancing the providers' competitive edge.

See also: Top 10 Challenges for Data Security

Challenges and Considerations

While the growth opportunities are enticing, hosting service and colocation providers must also consider challenges associated with deploying PQC as a service. These include:

1. Investment and Infrastructure: Implementing PQC solutions may require years for some networks and a significant investment in hardware, software and expertise. Providers need to be prepared for these upfront costs and timetable requirements. However, these added costs and expanded timeframes can be mitigated by partnering with leading PQC providers.

2. Training and Expertise: Staff members will need training to manage and support PQC systems effectively. Ensuring that the workforce is equipped with the necessary skills is vital. This is why partnering with leading PQC solution providers is so important.

3. Customer Education: Hosting and colocation providers may need to educate their customers about the importance of PQC and the added value of the service. PQC partners that offer training and support services can best equip partners to meet these client needs.

4. Competition: As the market for PQC services grows, competition will intensify. Providers should enter the market early to gain a brand and market timing advantage and must be prepared to differentiate themselves and offer compelling value to customers. It's clear that partnering with a leading PQC company will give providers a significant advantage over their competition.

The increasing threat of quantum computing to traditional cryptography has opened a unique and lucrative growth opportunity for hosting service and colocation providers. Partnering to deploy advanced PQC as a service to their customers allows these providers to differentiate themselves in a crowded market, expand their service portfolio and tap into substantial revenue potential. Beyond financial benefits, this partnership offers enhanced data security for customers, access to cutting-edge technology, regulatory compliance and long-lasting customer relationships.

While challenges such as investment and training exist, the potential rewards far outweigh these obstacles, and partnering with the right PQC vendor can help alleviate these issues. Hosting service and colocation providers that embrace the quantum threat and take steps to protect their customers' data with PQC are well-positioned to thrive in the evolving landscape of data security.

In a world where data breaches and cyberattacks are becoming increasingly sophisticated, staying ahead of the curve is not just an opportunity but a necessity. Hosting service and colocation providers have the chance to be leaders in PQC data security, protect their customers and ultimately secure their own future in the rapidly changing digital ecosystem.


Stuart Oliver

Stuart Oliver is the head of worldwide partner GTM and programs at QuSecure.

He has over 25 years' experience in global technology leadership roles spanning partner strategy and GTM, cloud, executive IT management, product management and product marketing. He graduated from the Northern Alberta Institute of Technology.

Top 10 Challenges for Data Security

Data security is no longer a simple IT task and can't be solved with one tool. It's a strategic imperative that touches every level of an organization.

In the wake of widespread cloud adoption, organizations are grappling with massive data volumes and the consequent complexity of safeguarding this data. Data protection is a significant challenge, as more information is processed and stored in more locations than ever before.

For organizations, operationalizing data security is no longer a simple IT task and can't be solved with one tool or solution. It's a strategic imperative that impacts every level of an organization. From diverse data sources and evolving threat landscapes to the nuances of compliance and the human element of security, the challenges are multifaceted.

While technology offers advanced tools and solutions to boost defenses, the key challenge lies in seamlessly integrating these tools into an organization's operations. Essentially, it's about striking a balance between robust security and operational efficiency -- and ensuring that protective measures enhance rather than hinder business processes. A holistic approach that encompasses technology, processes and people is crucial for success.

There are numerous operationalization challenges for organizations, but there is one common thread: Before overcoming these hurdles, organizations must understand where data is located, the context of the data and if it is at risk.

Let's explore the top 10 operationalization challenges for organizations and how they can be addressed.

1. Resource Constraints

Implementing robust security measures often requires a large financial investment as well as dedicated time and expertise. Hiring skilled cybersecurity personnel is expensive, assuming you can even find the right personnel, and continual training is essential. The deployment of advanced security tools and infrastructure places an additional strain on an organization's budget.

Data protection solutions with a streamlined implementation process eliminate the need for extensive resources. Agentless, API-based solutions are easy to deploy and can deliver value in days, without any upfront work required. As an example, today's managed data security posture management (DSPM) solutions enable organizations of any size to streamline cybersecurity operations and significantly reduce the burden on in-house IT teams.

See also: Risks, Trends, Challenges for Cyber Insurance

2. Diverse Data Sources

Data is everywhere, and organizations use a plethora of platforms and services -- from cloud storage solutions like Gdrive and Box, to communication tools like Slack, and collaboration platforms like SharePoint. Even more concerning is that sensitive data is no longer just structured. At least 80% of an organization's data is unstructured, meaning it's embedded in millions of financial reports, corporate strategy documents, source code files and contracts created by CFOs, general managers, engineers, lawyers and others.

To address this challenge, today's DSPM solutions are designed to control information flows between departments and third parties, ensuring that data at risk is identified and sensitive data remains protected -- regardless of its location.

3. Data Classification

Data classification is the foundation upon which many security measures are built. By categorizing data based on its sensitivity and importance, organizations can apply appropriate protection measures. But the sheer volume of data generated and stored today makes manual classification a herculean, if not impossible, task, and continually updating classification criteria in response to an evolving data landscape is crucial.

To address this, best-of-breed AI-based classification solutions leverage sophisticated machine learning technologies to autonomously scan and categorize documents. With the latest AI models for fast and accurate data discovery and categorization, organizations can eliminate the need for manual classification, which has proven to be both inaccurate and inefficient.
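A toy version of such a classifier can be built in a few lines of Python. The training snippets and labels below are invented, and production classifiers are trained on far larger corpora with much richer models; the point is only the shape of the approach.

    # Toy sketch of ML-based classification: label documents "sensitive" or
    # "public" from their text. Examples are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = [
        "Q3 board financials and revenue forecast",
        "employee salary and compensation bands",
        "customer account numbers export",
        "cafeteria menu for next week",
        "office holiday party announcement",
        "press release draft for product launch",
    ]
    labels = ["sensitive", "sensitive", "sensitive", "public", "public", "public"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(docs, labels)

    for text in ["board revenue forecast draft", "holiday menu announcement"]:
        print(text, "->", clf.predict([text])[0])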

4. Access Governance

Some data is public, some is confidential and some is strictly on a need-to-know basis. Managing who has access to what data is a cornerstone of data security and requires the definition of access permissions and continually reviewing and updating them. Ensuring that permissions are always up-to-date and adhere to the principle of least privilege -- where individuals have only the access they need and nothing more -- is a constant challenge, especially in large, dynamic organizations.

Data access governance (DAG) establishes and enforces policies governing data access and usage and plays a key role in ensuring that only authorized individuals can access sensitive information. This process is enhanced by a deep contextual understanding of both structured and unstructured data, which helps in keeping access permissions current and aligned with the principle of least privilege. DAG solutions enable organizations to comply with access and activity regulations, demonstrate control to auditors and adopt zero-trust access practices.
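At its simplest, a least-privilege check compares what each user can actually access against what the role-based policy says they need. The sketch below uses invented roles, users and permissions:

    # Sketch: flag entitlements that exceed a user's role (all names invented).
    ROLE_POLICY = {
        "claims_adjuster": {"read:claims", "write:claims"},
        "underwriter":     {"read:claims", "read:policies", "write:policies"},
    }
    ACTUAL_GRANTS = {
        "alice": ("claims_adjuster", {"read:claims", "write:claims"}),
        "bob":   ("underwriter", {"read:claims", "write:policies", "read:hr_records"}),
    }

    for user, (role, grants) in ACTUAL_GRANTS.items():
        excess = grants - ROLE_POLICY[role]
        if excess:
            print(f"{user} ({role}) exceeds least privilege: {sorted(excess)}")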

See also: Data Breaches' Impact on Consumers

5. Rapid Remediation

Rapid remediation is crucial to minimizing damage and protecting sensitive data when a security risk or breach is identified. Remediation actions include revoking access permissions, isolating affected systems or notifying affected parties. But rapid remediation requires swift action, clear protocols and a well-coordinated response team. Organizations must have these protocols in place, understand what data is at risk and ensure that all stakeholders know their roles and responsibilities in the event of a security incident.

Advanced data security platforms are designed to discover and remediate risks efficiently. These solutions can pinpoint data at risk due to inappropriate classification, permissions, entitlements and sharing. According to Concentric AI's Data Risk Report, the average organization had 802,000 data files at risk due to oversharing. Autonomous remediation capabilities in these platforms ensure that access issues are quickly addressed.

6. Compliance and Regulations

Different industries operate under various regulatory frameworks, each with different sets of data protection and privacy mandates. Operationalizing data security in this context means not only protecting data but also ensuring that protection measures align with legal and regulatory requirements.

Data security solutions that assist organizations in meeting regulatory and security mandates, demonstrating control to auditors and implementing zero-trust access are important in addressing this challenge. By detecting and remedying risks, these solutions help businesses comply with various privacy regulations, including managing right-to-know, right-to-be-forgotten and breach notification requests.

7. Constantly Evolving Threat Landscape

Today, as soon as organizations bolster their defenses, malicious actors evolve their tactics. Ransomware attacks, phishing schemes and advanced persistent threats require businesses to try to stay a step ahead. Continuous monitoring, updates and adaptations are crucial to counteract new and emerging threats.

Modern data security approaches go beyond static rules or predefined policies. Innovative analysis methods continuously compare data against its peers to identify anomalies and potential risks. This stance ensures that as data changes, its protection mechanisms evolve accordingly. AI models that leverage continuous monitoring and can learn from the data landscape help organizations address new risks as they emerge.
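The "compare data against its peers" idea maps onto standard unsupervised anomaly detection. The sketch below uses scikit-learn's IsolationForest on synthetic per-file access features to surface the one file whose pattern deviates from the rest; the features and numbers are invented.

    # Sketch: flag files whose access pattern deviates from their peers.
    # Features per file: [distinct readers, reads per day, external shares].
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    normal = np.column_stack([rng.poisson(5, 500),
                              rng.poisson(20, 500),
                              rng.poisson(0.2, 500)])
    suspicious = np.array([[90, 400, 12]])  # one heavily overshared file
    X = np.vstack([normal, suspicious])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
    flags = detector.predict(X)             # -1 marks outliers
    print("flagged row indices:", np.where(flags == -1)[0])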

8. Complexity and Scope

Data security is a multifaceted domain that encompasses a myriad of components, from network security and access controls to encryption and authentication. Different data types, whether it's financial records, personal information or proprietary research, have unique security requirements. Coordinating these diverse components and tailoring security measures to different data types adds layers of complexity to the operationalization process.

Using advanced machine learning technologies, today's data security solutions autonomously scan and categorize data, adapting to its growing complexity and scope. They ensure protection for all data types and locations. Comprehensive analysis provides a complete view of data, ensuring protection for both structured and unstructured data, whether stored in the cloud or on-premises.

9. Monitoring and Auditing

Continuous monitoring is essential for keeping a vigilant eye on systems, data access patterns and user behaviors to detect anomalies or potential breaches. Regular audits are crucial to assess the effectiveness of security measures and identify areas for improvement. Conducting these audits, analyzing the results and implementing changes based on findings demand significant time and expertise.

Modern data security tools offer accurate data classification without manual rules or policies. With continuous monitoring, these tools quickly identify discrepancies or risks in data classification.
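
One simple form such monitoring can take is diffing classification snapshots over time and surfacing files whose sensitivity label changed, as in this illustrative sketch (paths and labels are invented):

```python
# Illustrative audit pass: diff two classification snapshots and report
# files whose sensitivity label drifted. Paths and labels are invented.
old_snapshot = {
    "/finance/q3-forecast.xlsx": "confidential",
    "/hr/handbook.pdf": "public",
    "/legal/nda-acme.docx": "confidential",
}
new_snapshot = {
    "/finance/q3-forecast.xlsx": "confidential",
    "/hr/handbook.pdf": "confidential",  # drifted: was public
    "/legal/nda-acme.docx": "internal",  # drifted: was confidential
}

for path, new_label in new_snapshot.items():
    old_label = old_snapshot.get(path)
    if old_label and old_label != new_label:
        print(f"Audit finding: {path} changed {old_label} -> {new_label}")
```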

10. Integration With Existing Systems

Most organizations have a patchwork of existing systems, tools and software in place. When a new data security solution is introduced, it's crucial that it integrates seamlessly with the existing infrastructure. Disruptions, compatibility issues or data silos can undermine the effectiveness of security measures and create vulnerabilities.

Today's data security solutions are designed to integrate smoothly with established frameworks, such as those for data classification and management. This integration ensures that data classification is in line with existing security protocols, boosting the overall data protection strategy.

While challenges abound, technology approaches exist that can help organizations along the path of operationalizing data security. DSPM enables organizations to gain a clear view of their sensitive data: where it is, who has access to it and how it has been used. Best-of-breed DSPM solutions can autonomously discover and categorize data and remediate risk, whether the data is structured or unstructured and stored in the cloud or on-premises.

Robust DSPM solutions develop a semantic understanding of data and provide a thematic, category-oriented view into all sensitive data. By investing in proper data management practices and leveraging the right tools and expertise, companies can go a long way toward operationalizing their data security. In doing so, they can accomplish the key goals: securing private data, making more informed decisions about data and threats, and mitigating risks.


Karthik Krishnan


Karthik Krishnan is founder and CEO at Concentric.

Prior to Concentric, he was VP, security products at Aruba/HPE and, before that, VP, products at Niara, a security analytics company.

He has a bachelor's degree in engineering from the Indian Institute of Technology and an MBA with distinction from the Kellogg School of Management, where he was an F.C. Austin scholar.

How Insurance Empowers Sustainable Winemaking

Shifting to more sustainable winemaking practices brings risks that require mitigation. Personalized beverage insurance can be the answer.


More and more people are coming to understand the importance of sustainability, and the wine industry is no exception. But shifting to more sustainable winemaking practices brings risks that require mitigation, and personalized beverage insurance can be the answer.

Sustainability in the wine industry

Experts in the wine industry have yet to reach a consensus about what sustainability in this field should look like. However, there is broad agreement that new technologies and green management strategies can substantially decrease the industry's carbon footprint. One promising direction is avoiding pesticides and growing ingredients organically, and some wineries are also moving away from the conventional glass bottle, which is responsible for most of this industry's negative impact on the planet.

Still, adopting new techniques or equipment might seem too risky to many wine companies. Shifting operations to a more sustainable footing means change, and human error can increase simply because equipment or procedures are different; running on autopilot suddenly means doing things wrong. After all, why risk spoiling or otherwise losing wine, along with the returns it would bring, when the traditional way has been working?

Yet sustainability is the wave of the future, and making the transition sooner rather than later will benefit not only the planet but also the company.

See also: Bringing Innovation From Australia to U.S.

Building resilient, sustainable wineries and vineyards

Because customers are increasingly making purchasing decisions with the planet in mind, going green is also the smart choice for wineries, vineyards and other beverage companies. Indeed, a 2023 YouGov poll found that 68% of customers consider environmental factors when deciding which products or services to buy. Winemakers can expand their market share by appealing to these environmentally conscious customers.

In addition, vineyards, wineries and other beverage companies rely on consistent weather patterns and stable soil conditions. Climate change, however, is making the weather increasingly unpredictable and unleashing a plethora of ills across the globe, from wildfires and floods to droughts, fierce winds, destructive storms and irrigation difficulties, as well as diseases that take out entire seasons of crops. As a result, some experts believe certain wine regions in France, Spain, Portugal, Australia, South Africa and California will stop being productive by 2050.

Wineries and vineyards can best prepare for these challenges through comprehensive risk management plans. Customized beverage insurance is a key component of these, helping ease the transition to resilient, sustainable technologies and procedures by protecting the business.

Insurance mitigates risks during the green transition

First and foremost, beverage insurance can cover the value of your harvested fruit, in-process wine, library offerings and other beverages. That way, if you can't bring your wine or other product to market because of spoilage or some unforeseen issue, you can still receive the return your hard labor is due. Beverage insurance can even cover your costs if you make a blending mistake.

Second, beverage insurance can cover vines, grapes, trellises and fences, wine caves and all relevant buildings, as well as your equipment. Accidents happen when people are unfamiliar with gear or new layouts, but with this coverage you can repair or replace what's damaged without having to reach into your own pocket a second time. These policies can also cover your wine in transit, including international air shipments.

In addition, beverage insurance can shield your enterprise from a host of other ills, from the damage caused by natural disasters to the liability issues involved in having employees serving alcohol.

See also: Climate Change and Product Liability

How to personalize your beverage insurance

To get the most out of your insurance policy, however, don’t settle for a general one. Instead, work with an agent who specializes in companies like yours, has a track record of filing successful claims, can ask questions about your specific business and demonstrates an in-depth understanding of your industry.

Additionally, make sure to build a partnership with your insurance agent. In particular, keep them abreast of the changes your company is making in its quest for greater sustainability. That’s the only way to ensure your policy is up to date and won’t develop vulnerabilities that show themselves at the worst possible moment. Every time your venture adds or changes equipment, procedures or locations, your insurance agent should be among the first to know.

Wineries and vineyards that make the transition to sustainable viticulture will reap the rewards. Position your company at the forefront of this movement with a comprehensive risk management plan that includes personalized beverage insurance.

Are P&C Insurers Ready for Generative AI?

The potential for advanced technology in claims management is there, but first you need a solid foundation. Here's how to build one. 


Generative AI will have profound implications for P&C, but right now, the industry, like many others, is doing the hard work of evaluating where to apply the technology and how to do so safely. The promise of generative AI’s abilities to improve claims management is enormous, and its impact will eventually be transformative and live up to today’s hype. But this will take time. 

It’s easy to see the potential of the technology. Looking at business processes, one could say, “We could really use some summarization here,” or “A chatbot could totally handle this.” But responsibly incorporating generative AI into workflows that, for example, support the treatment of injured employees and assist motorists injured in auto accidents is challenging, given some of the technology's inherent characteristics. Addressing those characteristics requires thorough experimentation and care.

In the short term, the P&C industry will spend more time preparing for this technology than it will implementing it. There’s no doubt, though, that generative AI is making its presence known in the industry. So as P&C companies evaluate and investigate how they can use this technology, they would also do well to begin laying the groundwork for success.

Clean up your data house

The first step is to get your company's data house in order. Generative AI requires massive amounts of data to train the model, understand the company's knowledge base and interact in a human-like manner. But simply having terabytes of data isn't enough; it needs to be clean and accessible. Many insurers, larger ones in particular, have data stored in multiple locations across many different systems and environments. The data is difficult to access, and, in many cases, IT doesn't even have a clear picture of what data the organization holds in the first place.

Additionally, it's important to have a data scientist on board to oversee the data cleanup and AI training. Even if your data is clean, formatted and accessible, you need to make sure you have the right kind of data for the intended use, or you could introduce biases that cause the generative AI model to produce bad output.
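
That first pass often looks something like the sketch below: a quick sanity check that no single class or region dominates the training set before it is used to tune a model. The field names, records and threshold are invented for illustration.

```python
# Hypothetical pre-training sanity check: warn if any single value of a
# field dominates the training set. Field names, records and the 60%
# threshold are invented for illustration.
from collections import Counter

def check_balance(records, field, max_share=0.6):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for value, n in counts.items():
        if n / total > max_share:
            print(f"Warning: {field}={value!r} is {n / total:.0%} of the data; "
                  "a model trained on it may learn that skew.")

claims = [
    {"state": "CA", "injury": "sprain"},
    {"state": "CA", "injury": "fracture"},
    {"state": "CA", "injury": "sprain"},
    {"state": "NY", "injury": "sprain"},
]
check_balance(claims, "state")   # flags CA at 75%
check_balance(claims, "injury")  # flags sprain at 75%
```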

See also: 5 Ways Generative AI Will Transform Claims

Work with technology providers who understand our industry

You’ll need to pay close attention to data privacy, because most P&C uses for generative AI are going to require customer data. You certainly don’t want the model to start spitting out personal information about your customers to people who have no right to see it, and you want to make sure that you’re following federal and state laws if you use such data to train your models. 

Next, you'll need to work with your technology providers to ensure that generative AI's hallucination problem doesn't cause harm. Generative AI's superpower is its ability to create entirely new content, but occasionally that content is completely made up. Sometimes this is merely annoying or, for those who rely on the technology too heavily, embarrassing, as when it cites publications that don't exist. Hallucinations occur when the technology generates incorrect or misleading results, driven by factors including insufficient training data, incorrect assumptions made by the model or biases in the data used to train it.

But in a healthcare situation, hallucinated information could seriously harm or even kill a patient. So it's imperative that P&C providers demand systems designed to monitor and detect when generative AI output is inaccurate or doesn't make sense. For this reason, it's important to partner with technology providers that deeply understand P&C. Claims professionals face specific challenges, business processes and regulatory requirements that, if not understood by the technology provider, will result in a generative AI deployment that, at best, adds no value and, at worst, causes serious, long-term harm to both the business and policyholders. It takes time to train these models, and the models need reliable data.
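
Monitoring can start with something as blunt as checking whether each generated sentence overlaps meaningfully with the source material it is supposed to summarize, as in this rough sketch; real guardrails rely on entailment models and human review, and the threshold here is an illustrative assumption.

```python
# Rough guardrail sketch: flag generated sentences with little lexical
# overlap with the source text they should be grounded in. Real systems
# use entailment models and human review; the threshold is an assumption.
import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def flag_unsupported(summary_sentences, source_text, min_overlap=0.5):
    source = tokens(source_text)
    return [s for s in summary_sentences
            if tokens(s) and len(tokens(s) & source) / len(tokens(s)) < min_overlap]

source = "Claimant reported a lower back strain lifting boxes on March 4."
summary = [
    "Claimant reported a lower back strain.",
    "An MRI confirmed a herniated disc.",  # unsupported by the source
]
print(flag_unsupported(summary, source))
```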

See also: A Reality Check for Generative AI

Start small

Once you have all of the above in place, you’re ready to deploy your first use case. And while it may feel anticlimactic to put in so much work preparing for only a small trial, if something goes awry, it’s easier to fix a small problem than a large one. Find a use case that will deliver concrete value to the business but also has a limited scope and exposes the organization to minimal risk. 

Success on a small scale will not only provide your team with important lessons about what works and what doesn’t, but it will also build confidence within the C-suite and the larger company, so when you do move on to bigger, more consequential uses, you’ll have earned a growing number of advocates supporting the project and pushing for its success.

Generative AI will become an increasingly important technology for P&C, and the benefits will be enormous. But they're not going to arrive overnight, and carriers that get out over their skis will likely pay a high price. Right now, the best move a carrier can make is to lay the groundwork for generative AI and, once the foundation is in place, test it with small, low-risk uses and build from there.


Mike Bishop


Mike Bishop is executive vice president, product and technology, at Enlyte, where he is responsible for technical product strategy, direction and execution. 

The Lawsuit That Had to Happen

A lawsuit should clarify a key issue in auto telematics... but perhaps at the cost of a class action and probes by the FTC and Congress.


When I drive my car, who owns the data about the trip? Me? The car manufacturer? My insurer? A data broker?

Even as I've welcomed the spread of telematics, which I believe will not only let insurers price risk more accurately but will lead to safer driving, that question about data ownership has hung over the issue for me. 

It would seem that the answer is obvious: I own the data. Car makers, insurers and data brokers need my consent to learn about when, where and how I drive. But many companies, in their thirst for data, seem to be hiding the request for consent deep in the fine print that no one reads when signing up for insurance or downloading an app from a car maker or insurer — and reporting suggests that many customers are signed up for tracking without their agreement. 

A lawsuit filed recently against General Motors and LexisNexis Risk Solutions is likely to finally bring the consent and data ownership issues to a head. The plaintiff seeks class action status for the suit, and there may well be investigations by the Federal Trade Commission and Congress — one senator has already raised the prospect. 

At the very least, I suspect that anyone wanting data from drivers will have to be completely up-front about requesting consent. No more hiding in the fine print. At worst... who knows? Class actions and federal investigations can take unexpected turns.

Push, meet shove. 

"Florida Man" stories about people doing wildly irrational things there have been a meme for more than a decade now. There's even a website where you can find a headline from your birthday that becomes your link to Florida Man history — mine is "Florida Man Bites Neighbor's Ear Off Over a Cigarette." So I don't expect to get more than a chuckle out of headline that begins with the term, but then I saw this one in the New York Times last week:

Florida Man Sues G.M. and LexisNexis Over Sale of His Cadillac Data

Romeo Chicco’s auto insurance rate doubled because of information about his speeding, braking and acceleration, according to his complaint.

The story says Chicco applied for auto insurance in December and was startled when seven companies rejected him. When he finally obtained insurance, it was at nearly twice the premium he had previously paid. He asked his agent for an explanation and was steered to his LexisNexis file, where he learned, as the story puts it, that "his 2021 Cadillac XT6 had been spying on him."

For those familiar with auto telematics, the "spying" was routine. General Motors, which makes Cadillacs, had reported to LexisNexis about 258 trips that Chicco had taken over the past six months — how far he traveled, when he traveled and whether he was speeding, hard braking or accelerating.

The issue is whether GM and LexisNexis had his permission to gather the data and make it available to insurers. Chicco says GM eventually told him he signed up for OnStar's Smart Driver program, whose contract says the company can share data with third parties, but Chicco denies doing so. GM provided general comments to the Times about the program but wouldn't comment on Chicco's specific complaints. LexisNexis declined to comment to the Times.

"'What no one can tell me is how I enrolled in it,' Chicco said. 'You can tell me how many times I hard-accelerated on Jan. 30 between 6 a.m. and 8 a.m., but you can’t tell me how I enrolled in this?'”

A second New York Times article last week — yes, this seems to be a theme the paper plans to pursue doggedly — expanded the concerns far beyond Chicco. This article, too, began with the story of a driver who saw his premium soar because of information he didn't know his car was sharing; in this case, the driver was a 65-year-old who had never been responsible for an accident. Then the article zoomed out:

"Car companies have established relationships with insurance companies, so that if drivers want to sign up for what’s called usage-based insurance...it’s easy to collect that data wirelessly from their cars.

"But in other instances, something much sneakier has happened.... In recent years, automakers, including G.M., Honda, Kia and Hyundai, have started offering optional features in their connected-car apps that rate people’s driving. Some drivers may not realize that, if they turn on these features, the car companies then give information about how they drive to data brokers like LexisNexis.

"Automakers and data brokers that have partnered to collect detailed driving data from millions of Americans say they have drivers’ permission to do so. But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read.

"Especially troubling is that some drivers with vehicles made by G.M. say they were tracked even when they did not turn on the feature — called OnStar Smart Driver — and that their insurance rates went up as a result."

The reporter added that she has a GM car and signed up for Smart Driver but received no notice that her driving data could be shared with insurers. A GM spokeswoman pointed to OnStar's privacy statement, but the reporter said the relevant section "does not mention Smart Driver. It names SiriusXM as a company G.M. might share data with, not LexisNexis Risk Solutions."

For this article, LexisNexis provided a statement that said the company has “strict privacy and security policies designed to ensure that data is not accessed or used impermissibly.”

To me, the key result of whatever lawsuits and investigations ensue will be that all the players in the telematics market will need to be much more forthcoming about what data they're collecting and how they're going to use it. And that's a good thing, even if the adjustment will be painful for some.

Telematics will continue to promote safety by better identifying bad drivers and assigning them higher premiums. Even if more dangerous drivers opt out of sharing data, they'll still see their rates rise as better drivers opt in and leave generally riskier drivers behind. 

But there will surely be some bumps in the road before telematics settles into that generally rosy future. Just look at the apocalyptic ending to that first New York Times piece: 

"David Vladeck, a Georgetown law professor who previously ran the bureau for consumer protection at the Federal Trade Commission, said... he would expect an investigation by the F.T.C., as well as lawsuits by consumers against the automakers and data brokers.

“'Just wait for the avalanche,' he said. 'It’s coming.'”

Cheers,

Paul