How AI, Tech Reduce Friction in Insurance

Insurers must embrace innovative technologies to meet elevated customer expectations and remain competitive in the digital age.

When we contemplate the term “innovation,” we typically envision disruptive innovation—technologies that significantly change the landscape of a service or product. But it is really sustaining innovation, or continual improvement of product suites, that typifies most industries. In recent years, sustaining innovation has given rise to powerful tools that insurance companies can use to minimize the “friction” that frustrates consumers and create new and more seamless customer experiences.

For insurers, the proliferation of friction-reduction technologies has become a double-edged sword. New solutions have the potential to eliminate or minimize service and performance issues that have caused headaches for customers for decades. At the same time, adjacent industries have already adopted new technologies that have resulted in elevated customer expectations. The result? Customers are likely to have less patience with insurers that are slow to address pain points in their customer experience, forms processes, and claims handling, among other processes.

Across all industries, 45% of consumers say they’ve stopped doing business with a company due to a poor experience, and 47% say they are willing to spend more money to receive a better experience or service. As the rise of artificial intelligence allows first-movers to rapidly elevate the quality of service they provide, insurance companies of all sizes should embrace technology to reduce friction and enhance experiences for customers as a core strategic priority.

A potent mix of technologies

Companies have always used technology to make both internal and customer-facing processes more efficient. What’s different now is that next-gen technology is allowing companies to eliminate friction to an extent never before possible.

This progress is not being driven by any single new solution. The industry has historically seen different waves of innovation that gradually displaced manual processes. Early on, tech stacks involving optical character recognition (OCR) and robotic process automation (RPA) were game-changers for automating repetitive processes. Later, cloud computing and API development allowed for increased interoperability and automation at scale. Now, companies are layering the next evolution of AI into this mix. It is now easier than ever for a disruptive innovation in one market to be applied to an adjacent industry. The interaction among all these solutions will hopefully propel the industry to a new era of frictionless processes.

Eliminating pain points for short-term gains

Removing friction from the customer experience will require both short-term and long-term strategies. In the short term, insurers should target the most prominent and persistent pain points that have plagued insurance company customers. For example, one of the biggest complaints from customers is that they are required to fill out the same information repeatedly in different forms.

Until recently, pre-populating forms with data already provided by the customer was a complicated task. Data is often stored in different formats in separate systems. Over the last few years, however, most large insurance companies have undertaken comprehensive data management overhauls designed to normalize and centralize data across the organization. Once companies have this “single source of truth,” they can tap into usable, reliable data for any and all uses—including pre-populating forms.

Today, companies are also harvesting data by feeding images of paper documents into natural language processing solutions and other AI applications that can parse non-harmonized data to identify and understand specific information, like pinpointing first, last, and middle names from various documents, and determining if a phone number is a cell phone or a home landline. Using these techniques to reduce the friction associated with duplicate paperwork can have a positive and immediate impact on the customer experience.
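For illustration, here is a minimal Python sketch of that kind of parsing: pulling a first, middle, and last name out of raw document text and classifying a phone number as mobile or landline. The field labels, patterns, and sample document are assumptions for the example; production systems would rely on trained NLP models and richer data sources.

```python
import re

# Illustrative only: naive extraction of a name and a phone number from raw text.
PHONE_RE = re.compile(r"(?:\+1[\s.-]?)?\(?(\d{3})\)?[\s.-]?(\d{3})[\s.-]?(\d{4})")

def extract_name(text: str) -> dict:
    """Assume the document contains a line like 'Name: Jane Q. Public'."""
    match = re.search(r"Name:[ \t]*([A-Za-z.'-]+(?:[ \t]+[A-Za-z.'-]+)+)", text)
    if not match:
        return {}
    parts = match.group(1).split()
    return {
        "first": parts[0],
        "middle": " ".join(parts[1:-1]) or None,
        "last": parts[-1],
    }

def extract_phone(text: str) -> dict:
    """Return the first phone number found, tagged as mobile or landline
    when the document labels it (e.g. 'Cell:' or 'Home:')."""
    match = PHONE_RE.search(text)
    if not match:
        return {}
    number = "-".join(match.groups())
    label_hint = text[max(0, match.start() - 20):match.start()].lower()
    line_type = ("mobile" if "cell" in label_hint or "mobile" in label_hint
                 else "landline" if "home" in label_hint
                 else "unknown")
    return {"number": number, "type": line_type}

if __name__ == "__main__":
    sample = "Name: Jane Q. Public\nCell: (614) 555-0172"
    print(extract_name(sample))   # {'first': 'Jane', 'middle': 'Q.', 'last': 'Public'}
    print(extract_phone(sample))  # {'number': '614-555-0172', 'type': 'mobile'}
```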

Long-term transformation

Insurers must also consider how AI and other technologies are transforming customer experiences over a longer horizon and set IT budgets and strategies to keep pace.

In general terms, companies should constantly be looking for ways to apply AI internally to reduce friction and make workflows more effective. For example, think of something as simple as a forgotten password. Most of us have had the experience of clicking a “Forgot Password” link and waiting for a reset email that never arrives. When that happens, I often abandon the site in frustration, as I suspect many others do. Even a function as basic as this can be rearchitected with AI.

Pattern-detection algorithms that compare a user’s expected behavior with their actual behavior, combined with downtime detection, should result in a better experience. Did the user abandon their journey in frustration? Platforms should be able to course-correct by detecting these behaviors and triggering the next-best action. Correcting these issues is just one example of how removing friction from back-end systems can improve both customer experiences and business results. By preserving customer satisfaction and minimizing client attrition, such operational IT investments can deliver big ROI in the long term.
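As a rough illustration of the idea, the sketch below compares a user’s actual journey against an expected flow and triggers a hypothetical next-best action when the follow-up step never arrives. The event names, time window, and action label are assumptions, not a real platform’s API.

```python
from datetime import datetime, timedelta

# Illustrative sketch: detect a stalled user journey and suggest a next-best action.
EXPECTED_NEXT = {
    "forgot_password_clicked": "reset_email_opened",
    "reset_email_opened": "password_reset_completed",
}
MAX_WAIT = timedelta(minutes=10)

def next_best_action(events: list[dict], now: datetime) -> str | None:
    """events: chronologically ordered dicts like {'step': str, 'at': datetime}."""
    if not events:
        return None
    last = events[-1]
    expected = EXPECTED_NEXT.get(last["step"])
    if expected and now - last["at"] > MAX_WAIT:
        # The expected follow-up never happened: the user is likely stuck.
        return f"proactive_outreach:{expected}"  # e.g., resend the email, offer live chat
    return None

# Example: the user clicked "Forgot Password" 15 minutes ago and nothing followed.
events = [{"step": "forgot_password_clicked", "at": datetime(2024, 9, 1, 9, 0)}]
print(next_best_action(events, datetime(2024, 9, 1, 9, 15)))
# -> proactive_outreach:reset_email_opened
```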

These initiatives will become even more important over time as insurance companies build out the omnichannel models needed to meet the expectations of today’s customer base. According to the results of Broadridge’s 2024 CX & Communication Consumer Insights, 90% of consumers want companies to honor their channel preferences, but only 31% think companies do a good job at that task. Allowing customers to interact with the company through their preferred channel of print or digital will require effective integration of all the company’s internal platforms.

The competition for customer attention is fiercer than ever. Consider today’s consumer: within two seconds, they decide whether or not to engage with a 15-second TikTok or Instagram video. In this world of split-second decisions, if content fails to hold a user’s attention, they immediately move on to other content. In such a setting, traditional long-form financial statements or insurance forms will not cut it. Going forward, borrowing from the social media industry, insurers will have to deliver content that is personalized, easily digestible, and engaging. Examples may include personalized videos explaining benefits or streamlined account statements that highlight the most important information.

Next-gen customer experience

Beyond these new types of content, the interaction of cloud computing, APIs, AI, and other innovations is unlocking new opportunities for insurers to revolutionize the way they interact with customers. These emergent technologies support what’s known in the industry as “data federation,” or the ability to pull data from separate providers, harmonize it, and integrate it for use. Going forward, insurance companies will use this capability to access both internal data warehouses and external companies and data sources for functions across the organization and business lines—from confirming data on a claims form to verifying physical addresses and driver’s licenses from policy applicants and even double-checking the certification and credentials of doctors and other providers.
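To make the idea concrete, here is a minimal sketch of federating and harmonizing two records about the same policyholder: one from a hypothetical internal claims system, one from an external source such as a motor vehicle registry. The schemas, field names, and verification rule are invented for illustration.

```python
# Illustrative "data federation" sketch: normalize two differently shaped records
# and merge them into one harmonized view, verifying the address along the way.

def normalize_internal(record: dict) -> dict:
    return {
        "policy_id": record["PolicyNumber"],
        "name": record["InsuredName"].title(),
        "address": record["Addr"].strip().upper(),
    }

def normalize_dmv(record: dict) -> dict:
    return {
        "license_no": record["lic"],
        "name": f"{record['first']} {record['last']}".title(),
        "address": record["residential_address"].strip().upper(),
    }

def federate(internal: dict, dmv: dict) -> dict:
    a, b = normalize_internal(internal), normalize_dmv(dmv)
    merged = {**a, **b}
    merged["address_verified"] = a["address"] == b["address"]
    return merged

claim_record = {"PolicyNumber": "P-1001", "InsuredName": "JANE PUBLIC",
                "Addr": "12 Main St, Columbus OH "}
dmv_record = {"lic": "OH1234567", "first": "jane", "last": "public",
              "residential_address": "12 MAIN ST, COLUMBUS OH"}
print(federate(claim_record, dmv_record))
# {'policy_id': 'P-1001', 'name': 'Jane Public', 'address': '12 MAIN ST, COLUMBUS OH',
#  'license_no': 'OH1234567', 'address_verified': True}
```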

As technology evolves, the lines between hardware and software are blurring, resulting in truly seamless experiences for customers. It is now typical for a user to replace their physical wallet with their phone; we live in a world of digital identity cards, mobile wallets, and keyless entry for vehicles. Adjacent industries have improved dramatically because of the ubiquity of these features in mobile devices. Extended to the automobile insurance industry, imagine a world in which your phone detects you’ve been in an accident, automatically contacts your insurance company, and provides data on where the crash occurred, which vehicle and driver were involved, and other pertinent information. Such a harrowing experience shouldn’t be further exacerbated by a painful insurance process. A future in which most of these interactions are completely automated is near.

Thoughtful deployment of AI

As companies use these technologies to upgrade to a next-gen customer experience, they must avoid creating new next-gen pain points. Chatbots are a prime example. Chatbots were one of the first AI applications employed by companies and used by consumers. The experience for customers has not been great. When asked to define the things that make up a great customer experience, 48% of consumers cite “making it easy to talk to a real person.”

In fact, the replacement of humans by chatbots and other voice and online AI-powered customer service applications is probably one of the reasons ratings for customer experience and service satisfaction have been falling recently, as opposed to increasing due to innovation. Across all industries, the percentage of consumers who believe the companies they do business with need to improve their CX has doubled, hitting 70% this year.

Theoretically, however, chatbots should be a win-win for both companies and customers. The applications can reduce costs, cut down on wait times, and deliver instant answers to customers. But the failure of the earliest version of this technology provides a valuable lesson: As you implement new technology, use the concept of friction reduction as your north star.

In the case of chatbots, a new solution aimed at making customer service more efficient and easier for users actually resulted in more friction and a worse experience for customers. Looking ahead, however, I’m confident that the integration of solutions and systems will eventually result in highly effective, AI-driven self-help solutions that significantly reduce friction and usher in an era of vastly improved service for customers.

That level of friction reduction will be replicated in scores of places throughout the customer journey, resulting in increased efficiency and lower costs for insurers, and an elevated experience for customers.


Aman Mundra

Aman Mundra is vice president and head of composition for Broadridge's Customer Communications business.

He focuses on delivering next-gen capabilities and investing in scalable, omni-channel solutions, as well as dual-sided network effect platforms, Web 3 enablement and blockchain.

Mundra is pursuing his MBA at Columbia Business School.

Embrace Automation to Eliminate 'Gen-AI-nxiety'

Embracing automation is key for underwriters to keep pace with the industry's growing demands for advanced risk assessment solutions.

“Brokers are constantly evaluating underwriters,” insurance veteran Tony Tarquini said in a recent webinar in which I participated. As a digital transformation expert, I’ve increasingly witnessed the broker-underwriter partnership being tested.

In the market these days, underwriters are pressed for higher quote volumes while needing to stay focused on quality risk selection, and brokers are pushing for faster turnaround times. With the staggering amount of data available to analyze, is it fair to expect underwriters to produce fast and accurate quotes using legacy systems?

In a fiercely competitive market, both underwriting delays and inaccuracies are unacceptable. While data is essential to gain real insights to assess risk accurately, sifting through large volumes in multiple documents and screens results in a massive slowdown. Hence, there is a sincere need for insight-driven data orchestration and enhanced efficiencies through automation across the risk life cycle. But are underwriters ready for the big shift?

In the recent webinar, “The Underwriting Maturity Framework,” my colleague Lloyd Peters and I discussed the current maturity level of underwriters, exchanged ideas on how to inspire change in the community, and discussed the value of having a road map to go from process-centric to data-centric underwriting at their own pace.

Here’s how underwriters can embrace automation and eliminate "Gen-AI-nxiety":

The change is inevitable

Today, insurers are looking to stay ahead and are seeking advanced risk assessment solutions. The focus has shifted from cost and efficiency to risk selection, pricing, customer-centricity, and AI enablement. Hence, underwriters must embrace technology to keep pace with the growing demands of the industry. The great news is underwriters now have access to emerging technologies and AI to support this transition.

See also: What to Understand About Gen Z

The first step: Breathe

The good news is that underwriters don’t need to attempt transformation overnight. To acknowledge the need for change, one must be convinced it’s the right thing to do. A great place to start is an underwriting maturity model, which benchmarks where you’re starting from: it provides a powerful visual of an underwriter’s current capabilities and helps identify both independent and interdependent business priorities.

The next step: Self-assessment

The underwriting maturity framework is a practical, easy-to-embrace mechanism to make the necessary shift from manual to smart underwriting. There are five stages to this model, and each stage signifies the current capabilities of the underwriter.

  • Stage 0: Manual Underwriting - Traditional manual, off-system underwriting with information sorted within documents and isolated Excel files. Work is allocated through email between teams.
  • Stage 1: Digital Underwriting - Enable existing underwriting flow with a structured workflow engine and document/data storage where teams can collaborate.
  • Stage 2: Connected Underwriting - Optimize the underwriting flow by leveraging technology to accelerate and automate key steps, allow data transfers, and replace manual steps across the submission-to-bind journey.
  • Stage 3: Augmented Underwriting - Harness the power of the business and operational data captured through digital underwriting to provide targeted insights to underwriters on portfolio position, market, and risk characteristics.
  • Stage 4: Smart Underwriting - Enable the system to make automatic, algorithmic underwriting decisions in certain scenarios and provide recommendations to underwriters in others.

The maturity framework allows underwriters to identify the capabilities that are key to executing their underwriting strategy, which will be specific to the line of business, geography, and complexity of risk that they write. What’s meaningful for one carrier may be less important for another, so having clarity in that strategy (growth, new products, efficiency, etc.) will help determine where you go first. The model also highlights how connected the components of the value chain are, and why it is important to understand your underwriting data needs and to work with operations, data, and IT to craft the right road map together.

The final step: Outline goals

Once you have clarity on your current state, plot deliverables around each phase that will support the identified goals. Start putting the building blocks around the desired underwriting capabilities across the next six, 12, or 18 months. The aim is to move the current operating state to a more data-driven one that advances the strategy and delivers benefits the collective teams see immediately: dramatically improved quote turnaround times and better clarity and consistency in risk selection, with more accurate pricing.

See also: What Does Gen Z Want?

Remember

It is important to keep in mind that technology is only part of the solution. Operations, data teams, application and infrastructure architects, and others must be fully aligned to see success in smart underwriting. The way the insurance industry works is changing. The underwriting evolution is here. The tools are available. The potential is significant. Now is the time. Embracing automation is one of the best decisions an underwriter can make in their career.


William Harnett

William Harnett is the head of business strategy and customer success at Send.

He spent 20 years at AXIS Capital, where he was the deputy chief operating officer and digital COO, working with AXIS’ underwriters to spearhead the business’s digital transformation.

Legacy Systems: Modernize or Overhaul?

There may be an alternative approach worth exploring: leveraging our current core systems while integrating modern technologies effectively. 

In today’s fast-paced insurance landscape, the allure of shiny new technology can be tempting. The promise of modernization often leads organizations to consider a complete overhaul of their core systems—claims, policy administration, and underwriting alike. 

But before diving headfirst into multimillion-dollar implementations that could take years to bear fruit, it’s worth pausing for thought. Are we sure that throwing out our existing systems is the best route? With digitalization on the rise and innovative solutions like robotic process automation and microservices making waves, there may be an alternative approach worth exploring: leveraging our current core systems while integrating modern technologies effectively. 

Let’s rethink this narrative together as we navigate through sales and distribution, claims management, policy workflows, and underwriting processes in a way that maximizes efficiency without sacrificing investment or time.

See also: 2-Speed Strategy: Optimize and Innovate

Resisting the urge to throw out your core claims, policy admin, underwriting systems

The instinct to replace legacy systems can be strong, especially when faced with the latest tech trends. However, core claims and policy administration systems often carry valuable data and institutional knowledge that shouldn’t be discarded lightly.

According to research conducted by Celent and reported in its "Dimensions: P&C Insurance IT Pressures & Priorities 2024: North American Edition," 91% of the carriers interviewed said growth and distribution are a significant (56%) or moderate (35%) priority for 2023, with 85% saying process optimization/operational efficiencies are significant. Celent stated, “Growth is the clear top priority for insurers in 2024, potentially indicating they are optimistic about the current climate and are taking an offensive tack. Operating cost reduction and process optimization remained top of mind for carriers. Notably, operating cost reduction is a much higher priority than IT cost reduction, indicating businesses are seeing the value of IT.”

Instead of a complete overhaul, consider an assessment of your existing infrastructure. Many organizations find that their current systems can integrate with new technologies, such as microservices or digital distribution channels. This approach allows for modernization without the full disruption of a re-implementation.

For example, an international insurer faced multiple business challenges, including difficulty creating products and the lack of a digital way to sell and distribute them. It had siloed systems and an on-premises core. By enabling the products, sales, and marketing teams to create insurance products and deliver them in weeks rather than months or even years, the insurer was able to provide API integration with its affinity partners and launch new B2B and B2C web portals. It leveraged its on-premises legacy claims, underwriting, and policy administration systems and modernized by adding orchestration and integration layers.

By keeping what works while introducing enhancements through automation and embedded insurance solutions, you can create a more agile environment. Small adjustments might yield significant improvements in efficiency and customer satisfaction without the costs associated with starting from scratch.

Reinvesting in your core doesn’t have to mean tearing everything down; it could simply mean building upon a solid foundation that's already in place.

Do you really want to reinvest in a multimillion-dollar implementation?

When determining whether to "rip and replace" your legacy systems or enhance them with a modernization layer, it's crucial to evaluate several factors related to customer experience, system capabilities, and future growth potential.

Instead of immediately opting for a complete overhaul, consider whether a modernization layer on top of your existing systems might be more effective. This approach can extend the life of your legacy systems while introducing new capabilities through microservices, digital distribution, and APIs.

To decide which path to take, ask yourself the following questions:

  1. Customer Experience: Are your external portals and internal systems user-friendly? Are customers and employees expecting more digital interactions and services? If your current systems fall short in delivering seamless digital experiences akin to those of leading e-commerce platforms, modernization may be necessary.
  2. Digital Capabilities: Can your current systems support "Amazon-like" digital experiences, such as online ordering, real-time inventory tracking, and customer self-service? If not, it might be time to consider either enhancing your systems with APIs and microservices or replacing them.
  3. Integration: Does your current process integrate seamlessly with other digital platforms (e.g., Insurtech leaders, e-commerce, ERP, CRM)? If your team is bogged down by manual workflows, a modernization approach that leverages microservices and APIs could streamline these processes.
  4. Scalability and Growth: Can your existing systems handle future growth and increased demand? If your current infrastructure lacks scalability, a complete system replacement or significant modernization might be necessary to support long-term business objectives.
  5. Digital Presence: How important is a strong digital presence for your long-term goals? If digital interactions and services are becoming more critical for your customers, modernizing your systems to improve these areas is essential.

See also: Time to Modernize Your Mainframe

By answering these questions, you can determine whether a "rip and replace" is necessary or if enhancing your current systems with modern tools is the better approach. The toolkit available—comprising microservices, digital distribution, and APIs—offers flexibility in modernizing your infrastructure without necessarily discarding legacy systems. This balanced approach ensures you meet both current operational needs and future business objectives.

Before embarking on a multimillion-dollar implementation, it’s also crucial to consider the resources required—not just financial investment, but also the time and personnel needed to manage such a project. The process of aligning stakeholders, engaging teams, and training on new systems is a massive undertaking that can disrupt daily operations and divert focus from other critical business activities.

The insurance industry is relentless, and every day counts. In a market where companies are continuously vying for customer attention and market share, can you afford to pause operations for another three to five years to implement a new system? During this time, your competitors won’t be standing still. They’ll be leveraging their core systems while innovating with digital solutions, potentially gaining an edge in the market.

Instead of committing to lengthy and resource-intensive re-implementations, consider modernizing what you already have. Think about integrating middle-layer solutions or microservices that enhance current capabilities, allowing you to retain the agility needed to adapt. Embracing digitalization today means focusing on efficiency while minimizing risk—a much smarter path forward in an ever-evolving market landscape.

How Telematics Can Keep Fleets Safe, Insurers Happy

Unified fleet management platforms with AI dash cams provide critical insights to improve safety, satisfy insurers, and optimize operations.

Juggling multiple software systems to manage a fleet is like trying to assemble a puzzle with missing pieces. You're constantly switching between platforms, searching for data, and hoping everything fits together. It's a time-consuming process that often leaves you with an incomplete picture of your fleet's health.

But the real problem with these data silos? They put your fleet at risk. Without a unified view of everything that’s happening, how can you identify trends or make data-driven decisions? Imagine guessing if a driver is speeding or if a truck is about to have a breakdown. Not exactly a recipe for safety or peace of mind.

Here’s a concern I recently heard from other fleet managers: Insurance companies are getting tougher. Without strong driver safety tech stacks that include telematics and AI-powered dash cameras, there’s talk about nonrenewal of policies or increased rates. Distracted driving accidents are on the rise, and insurers are starting to require proactive fleet safety programs.

A unified fleet management platform can serve as a command center for your entire fleet operation. Everything—GPS location, engine data, driver behavior captured by AI dash cams, maintenance schedules—is all in one place, accessible with a single click. No more switching between systems, no more piecing together reports. Just a clear picture of your fleet’s health, right at your fingertips.

See also: How to Reduce Distracted Driving

So, how can a unified platform with AI dash cams help you keep your fleet safe and your insurer satisfied? Consider the following:

  • Safer drivers: AI dash cams and telematics catch risky driver behavior events in real time and provide trending data to help prevent accidents. This lets you identify problem drivers and implement targeted coaching programs. Safer drivers mean fewer accidents, which translates to satisfied insurance companies.
  • Fewer breakdowns: Real-time engine diagnostics keep you ahead of the curve. Imagine catching a potential issue before it leaves your truck stranded on the side of the road. Maintenance not only reduces downtime but also minimizes the risk of accidents caused by mechanical failures. Less risk means insurers are more likely to see you as a safe bet.
  • Data-driven decisions: Finally, you have the data you need to make smart choices. Analyze trends, identify areas for improvement, and make informed decisions about everything from driver training to route optimization. Demonstrating a commitment to safety through data could potentially lead to better insurance rates.

See also: AI and a Vision for Safer Roads

Managing a fleet is tough. Don’t let data silos make it even harder. By implementing a unified fleet management platform with AI dash cams, you can gain the insights you need to prove your commitment to safety to your insurers, improve driver behavior, and create a safer, more efficient, and cost-effective fleet.


Erin Gilchrist

Erin Gilchrist is vice president of fleet evangelism at IntelliShift.

She brings 15 years of experience from Safelite AutoGlass, where she managed a fleet of more than 8,500 vehicles. A long-term member of the Automotive Fleet Leasing Association, she advocates for fleet leaders through her podcast, "Straight Talk on Fleet." 

The Growing Toll of Secondary Perils

Parametric reinsurance offers a new approach to managing the increasing threat of secondary perils, providing much-needed financial protection for insurers.

Property insurers have always needed to watch out for large losses that shock their balance sheets. Historically, the insurance industry has considered catastrophes the largest threats. Today, however, property insurers face an existential threat from another source: skyrocketing losses caused by secondary perils. In fact, secondary perils—led by severe convective storms—have surpassed catastrophes as the leading cause of insured loss.

Severe convective storms (SCS) are localized events accompanied by lightning, thunder, strong wind gusts, intense rainfall, and, in some instances, hail. An analysis by Aon found that from 1990 to 2022, U.S. SCS losses increased at an annual rate of 8.9%.

A problem for insurers and policyholders

In 2023, severe convective storms caused $64 billion in insured losses, with 85% of those losses originating in the U.S., according to Swiss Re. This volume of loss in the U.S. alone resulted in ratings downgrades for dozens of insurers, and four companies became insolvent. Swiss Re notes that the fastest-growing category of disaster is medium-severity events, or those causing $1 billion to $5 billion in insured losses. More SCS events are falling into this category.

Options for insurers facing large losses from secondary perils are few but consequential for policyholders. Those options are:

  • Raise rates or deductibles, making coverage unaffordable for policyholders
  • Exclude coverage, reducing protection for secondary perils
  • Withdraw from markets where losses from secondary perils are heaviest

When secondary perils cause company insolvencies, policyholders can lose access to insurance coverage. That leads to serious economic consequences for individuals, businesses, and communities.

See also: Blind Spots in Catastrophe Modeling

Traditional reinsurance isn’t solving this problem

For catastrophe losses, insurers have a ready source of financial protection in reinsurance. Insurers already buy catastrophe reinsurance because they’re required to have it—but catastrophe reinsurance doesn’t cover the effects of the accumulation of losses associated with much more frequent secondary perils.

Traditional reinsurance is not solving the problem of secondary perils because it is generally not available for the aggregation of such losses. At one time, reinsurers offered aggregate cover but could not write it affordably within their risk appetite.

Another reason reinsurers have avoided covering secondary perils at scale is the limitation of catastrophe models. Cat models have evolved significantly since the early 1990s, and they work well for low-frequency, high-severity events—such as one-in-250-year and one-in-500-year events. These models have proven unsuitable for high-frequency, lower-severity events. Many secondary perils are one-in-five-year or one-in-10-year events.

Without reinsurance to spread the risk of secondary perils, insurers have had no real financial option, until now.

See also: Litigation Risks in 2024 and Beyond

A path forward to keeping property coverage available

The high loss frequency of sub-catastrophic events—that is, lower- to medium-severity secondary perils—makes traditional risk transfer solutions untenable. At the same time, paying steady volumes of high-frequency, lower-severity losses is unsustainable. For these reasons, a new approach to reinsuring secondary perils is needed.

One such solution is parametric reinsurance. Parametric risk transfer is a useful tool that is becoming more common for primary insurance risks, from personal travel to crop damage. The mechanics of parametric coverage are relatively simple: Based on defined parameters that can be reliably measured (examples include flight delays or cancellations, or a certain level of rainfall or hail in a defined period), an agreed amount of coverage is triggered when those parameters are met. There is no need for loss adjustment and the additional costs that process entails. Parametric insurance benefits the buyer as well as the capital provider by providing certainty and efficiency in transferring risk.

Until recently, however, parametric solutions have not been used in the context of reinsurance. Parametric reinsurance uses sophisticated modeling to assess secondary peril risks, but not in the same way that catastrophe models do. Its model does not predict specific events. Instead, it uses firsthand claims data from an insurer and verified historical weather data to estimate the likelihood of aggregate losses in a given year for that insurer.

The parametric reinsurance solution’s trigger, therefore, is not a single event but an aggregate dollar amount of modeled losses. Parametric insurance models secondary peril losses on the specific weather ingredients that generated historical claims on covered properties. In this way, the parametric solution resembles excess-of-loss reinsurance. This scalable solution is available at different attachment points, subject to the protection needs and budget of the insurer. For example, if a property insurer has an expected aggregate SCS loss of $100 million, it can buy parametric coverage that attaches at a point that makes economic sense for the insurer. 
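As a simplified illustration of the mechanics (the figures and the linear payout are assumptions, not an actual contract design), the sketch below shows how an aggregate trigger with an attachment point might work: modeled losses from qualifying storms accumulate over the coverage period, and a recovery is paid once the aggregate exceeds the attachment point, up to the purchased limit.

```python
# Illustrative aggregate parametric trigger: payout = min(max(aggregate - attachment, 0), limit)

def parametric_recovery(modeled_event_losses, attachment, limit):
    """Return (aggregate_modeled_loss, recovery) for one coverage period."""
    aggregate = sum(modeled_event_losses)
    recovery = min(max(aggregate - attachment, 0), limit)
    return aggregate, recovery

# Example: an insurer with roughly $100M expected aggregate SCS loss buys $40M of
# cover attaching at $120M (all figures hypothetical).
events = [18e6, 22e6, 35e6, 27e6, 31e6]  # modeled losses per qualifying storm
aggregate, recovery = parametric_recovery(events, attachment=120e6, limit=40e6)
print(f"aggregate modeled loss: ${aggregate/1e6:.0f}M, recovery: ${recovery/1e6:.0f}M")
# aggregate modeled loss: $133M, recovery: $13M
```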

This innovative solution fills a need that can keep insurance available for high-frequency, high-severity types of losses, on a portfolio basis.


Bill Clark

Bill Clark is the CEO of Demex, a risk analytics and intelligence company offering reinsurance solutions for severe convective storms. 

He previously held executive roles at several technology companies, including Silicon Valley start-ups and firms backed by private equity. Clark also spent time with industry heavyweights JP Morgan and IBM. 

He was a professional tennis player before embarking on his corporate career.

Balancing AI Innovation With Consumer Protection

Regulators in the EU and U.K. want insurers to balance AI's potential with protecting consumers from potential bias and harm.

The regulatory challenge around artificial intelligence (AI) is as broad as its potential applications. In response to the scale of the task ahead, the U.K. government’s white paper "A Pro-Innovation Approach to AI Regulation," published last year, emphasizes fostering and encouraging innovation. The ambition is clear: to create an environment that enables businesses to develop and invest in AI while protecting businesses and consumers from potential harms and some of the worst excesses of the technology.

Regulation today

Currently, AI is indirectly regulated in the U.K., meaning there are no specific U.K. laws designed to address AI. Instead, a range of legal frameworks are indirectly relevant to the development and use of AI.

AI systems fundamentally rely on large amounts of data to train the models that underpin these systems. Personal data is often used to develop and operate AI systems, and individuals whose personal data is used have all the normal rights under existing laws such as the General Data Protection Regulation (GDPR). These typically include rights of transparency, rights to data access, and, perhaps most importantly, existing rights under GDPR not to be subject to automated decision-making in relation to significant decisions, except under special circumstances.

Impact on insurance

The current approach to AI regulation in the U.K., as outlined in the government's white paper, relies heavily on existing regulatory frameworks. Particularly relevant for the insurance industry is financial services regulation and the role of the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA).

The FCA and PRA will use their existing powers to monitor and enforce against the misuse of AI, applying existing principles such as consumer protection and treating customers fairly. These principles might be affected if an insurer relies on an AI system and predictive models to make pricing decisions.

Because there is a risk that some customers might be discriminated against or priced out of insurance markets through the increased use of AI, the FCA is carefully considering how to translate existing principles when regulating firms in the use and misuse of AI.

See also: Balancing AI and the Future of Insurance

Embracing the future

Most companies accept that it won’t be possible to hold back the tide of AI. As such, many leading businesses are already focusing on how to integrate AI into their operations and apply it to their existing business models, recognizing the need to embrace rather than resist change.

In the insurance industry, not all of this is new. For many years, insurers have used algorithms and machine learning principles for risk assessment and pricing purposes. However, new developments in AI make these technologies increasingly powerful, coupled with the explosion in other forms of AI, such as generative AI, that are more novel for insurance businesses. Consequently, a key challenge for the industry is adapting and upgrading existing practices and processes to account for advancements in existing technology while also embracing certain innovations.

Regulation tomorrow

The EU AI Act is one of the world's first comprehensive horizontal laws designed specifically to focus on the regulation of AI systems, and the first to command widespread global attention. It is an EU law, but importantly, it also has extraterritorial effect. U.K. businesses selling into the EU or using AI systems where the outputs affect individuals in the EU are potentially caught by the law.

The EU AI Act applies to all parts and sectors of the economy. Crucially, it is also a risk-based law that does not try to regulate all AI systems and distinguishes among different tiers of risk in relation to the systems it does regulate. The most important category of AI system under the act is high-risk AI systems, where the vast majority of obligations lie.

Most of these obligations will apply to the provider or developer of the AI system, but there are also obligations that apply to the system’s user or deployer. The insurance industry is specifically flagged in the EU AI Act as an area of concern for high-risk AI systems. The act explicitly identifies AI systems used in health and life insurance for pricing, underwriting, and claims decision-making due to the potential significant impact on individuals’ livelihoods.

Because the majority of the obligations will sit with the system’s provider, businesses building their own AI tools—even if those tools depend on existing models such as a large language model to develop an in-house AI system—will be classified as a provider of that AI system under the act.

As a result, businesses will need to understand the new law, first determining in which areas they are caught as a provider or deployer, before planning and building the required compliance framework to address their relevant obligations.

The regulation of AI had one brief mention in July’s King’s Speech, specifically in relation to the establishment of requirements around the most powerful AI models. This means the EU AI Act is the most pressing law on AI likely to be applicable to insurers in the coming years. Other relevant parties, such as the Association of British Insurers (ABI), have provided guidance to their members on the use of AI.

See also: Two Warnings About AI

Navigating AI

For the insurance industry and other sectors to make the most of AI, businesses will first need to map out where they are already using AI systems. You can only control what you understand. Much as businesses began their GDPR compliance programs by identifying where personal data was used within the organization, the starting point here is to identify where AI systems are in use and, in particular, which systems are higher risk because of the sensitivity of the data they process or the criticality of the decisions for which they are used.

Once the mapping stage is complete, organizations should then begin the process of building a governance and risk management framework for AI. The purpose is to ensure clearly defined leadership for AI is in place within the business, such as a steering or oversight group that has representation from different functions, articulating the business's overall approach to AI adoption.

Organizations will also need to decide how aggressive or risk-averse they want their business to be regarding the use of AI. This includes drafting key policy statements that will help clarify what the business is prepared to do and not do, as well as defining some of the most fundamental controls that will need to be in place.

Following this, more granular risk assessment tools will be needed for specific use cases proposed by the business that can be assessed, including a deeper dive into the associated legal risks, combined with controls on how to build the AI system in a compliant way that can then be audited and monitored in practice.

The approach a business takes will also depend significantly on whether they buy their AI systems from a technology vendor or instead buy the data they need for their own in-house AI system, as well as potentially selling AI to their own customers at the other end of the pipeline. An insurance broker, for example, might sell certain services to an insurer that depends on the use of AI and will therefore need to consider how they manage their risk at both ends of that pipeline in terms of the contracts, the assurances they get from their vendors, and whether they are prepared to give the same assurances to their customers.

The challenge with AI at present is that use cases are continuously emerging, so demand is increasing. Therefore, the creation of governance frameworks and processes needs to be designed alongside the business processes to assess and prioritize the highest-value AI activities and investments. Governance, business, and AI teams need to work side-by-side to embed appropriate processes within the emerging use cases, which can often save significant work later.

The EU AI Act is a complex piece of legislation, and constructing the frameworks needed to comply with it will be a significant challenge. Success will rely not only on the quality of the data and models used and on good governance, but also on adopting an approach to the new law that is proportionate and builds confidence, enabling businesses to make the most of future AI opportunities.


Chris Halliday

Chris Halliday is global proposition leader for personal lines pricing, product, claims and underwriting at WTW's Insurance Consulting & Technology business.

Big Tech Tackles Wildfires

"What happens if you set a region full of technology entrepreneurs and investors on fire? They start companies."

When I caught up on my news reading over the long holiday weekend in the U.S., I felt like I should find shelter somewhere, or at least go back to bed, pull up the covers, and cover my head with a pillow. 

It seems the hurricane season in the Atlantic is about to pick up again, after a quiet few weeks, and could produce a perilous September. Meanwhile, scientists said climate change could produce baseball-sized hail as convective storms in the U.S. continue to get worse, and that it is causing a wave of damage and death from lightning strikes in India. And wildfires continue to rage, to the point that Allstate just got approval to raise homeowners insurance rates 34% in California.

Yikes.

But there were also glimmers of hope, at least on the wildfire front. As one article said, in a reference to Silicon Valley, "What happens if you set a region full of technology entrepreneurs and investors on fire? They start companies. Dozens of start-ups, backed by climate-minded investors with more than $200 million in capital, are developing technology designed to tackle a fundamental challenge of the warming world."

Some of those efforts seem to me to have real promise. 

Let's have a look.

The New York Times article I quoted, which carries the clever headline, "Silicon Valley Wants to Fight Fires With Fire," describes two promising ideas, in particular.

The first, from Kodama Systems, is a way to accelerate the thinning of forests to reduce the amount of fuel that can ignite in a wildfire. 

The article says:

"In 2022, the U.S. Forest Service set a target for 50 million acres to be treated — thinned, pruned or burned — on public and private lands over the next decade. In 2023, 4.3 million were treated, including two million acres of prescribed burning — and that was a record. To keep pace, treatment would need to grow by a third this year."

To help with that, Kodama is automating work done by skidders, which are massive machines with a bulldozer blade on the front and a grapple on the back that can grab and drag trees after they're cut or knocked down. 

Currently, a driver operates a skidder from the cab for a 12-hour shift, but Kodama's AI handles enough of the work that a remote operator can run two skidders at the same time. The skidders also can operate in the dark, using lidar and other sensors to map their surroundings.

Having one person run two skidders at the same time, working around the clock, obviously improves the productivity of both the workers and the equipment. The ability to operate remotely is also key, given that it's hard to find enough people who want to do the hard, hot work of thinning forests. Kodama has operated skidders in the U.S. from London, so the remote operators wouldn't even have to work through the night; they can be many time zones away, working from the comforts of home during normal, daylight hours. 

For now, any limbs and brush that can't be used for commercial purposes at a sawmill are piled up and burned, but Kodama is experimenting with burying the material instead, to keep carbon dioxide from being released into the atmosphere when it is burned.

The other startup that really struck me in the Times article is BurnBot, whose machine is used to create firebreaks that can protect a community from a wildfire. This concept could be especially powerful because it won't just be used for routine maintenance, like Kodama's thinning of forests, but could be used in an emergency if a fire breaks out nearby.

The Times describes the BurnBot as basically an upside-down propane grill. The device, which looks to me a bit like a Zamboni, trundles over the ground, incinerating everything beneath it in a five-foot-wide swath and then putting out the fire with water. 

The article says:

"Alongside a highway, this protective line could prevent ignition caused by passing cars; checkerboarding a large stretch of land, it could allow for controlled burns that normally require dozens or hundreds of people and ideal weather conditions.

"The CalFire chief Jim McDougald, who works on fuel reduction efforts across the state, said firebreaks like these gave his firefighters time to protect the community of Shaver Lake during the rampaging 2020 Creek fire."

I can picture the work being done with the BurnBot because it's being tested near "the Dish," the giant satellite dish on a hilltop next to the Stanford campus. I used to cycle by it all the time when we lived in the area and can still see all the dead grass on the hillside that is there this time of year.

There has also been some good news about drones that can detect wildfires sooner than happens now. From last week, here is an article about swarms of drones that use "AI technology—incorporating thermal and optical imaging—... to automatically detect and investigate fires, and relay all the information to the fire team. Under the supervision of fire and rescue teams and using swarm technology,... the drones can then intelligently self-coordinate as first responders to rapidly deploy fire retardant onto the fire, monitor the situation and return to base."

The Times mentions some other startups that show promise for early detection: Pano, which sets up monitoring stations and uses AI to spot wildfires quickly; Overstory, which uses satellite and aerial imagery to help utilities reduce the risk that vegetation causes to their power lines; and Treeswift, which creates "digital twins" that let utilities model their environments and reduce the risk to their power lines. Rain uses autonomous helicopters to put out blazes.

None of these startups will help much in what figures to be a brutal September, as hurricane activity picks up in the Atlantic, as lightning lashes India, and as baseball-sized hail becomes more common. But maybe they can make a difference by a September or three from now.

Cheers,

Paul

P.S. As long as I'm on the subject of drones, here are two video clips that show how ubiquitous and versatile they have become. 

One is deadly serious. This clip shows a Ukrainian "dragon fire" drone setting ablaze a stand of trees that Russian soldiers are using to hide their emplacement. 

The second is just a goof, from a clever person with too much time on his hands who decides to scare the pants off someone. 

A Data Strategy for Successful AI Adoption

Despite significant investments in AI, many organizations struggle to derive measurable value.

The landscape of technology adoption, with artificial intelligence (AI) at the forefront, is rapidly changing industries and economies across the globe. The meteoric rise of ChatGPT to 100 million users in 2023 is a testament to the rapid integration of AI technologies into daily life. This whitepaper analyzes the required evolution of the data strategy in organizations aiming to harness the full potential of AI.

1. Background

The rapid growth and widespread adoption of AI technologies across industries underscore the urgency for organizations to adapt. Grasping the scale of AI's economic influence and its transformative effects on various business functions can help us appreciate the critical role that data plays in driving AI success.

The Economic Impact of AI

Generative AI, a branch of artificial intelligence, is poised to create substantial economic benefits globally, estimated to range between $2.6 trillion and $4.4 trillion annually. This technology is expected to notably affect higher-wage knowledge workers, accelerating productivity growth globally. Approximately 40% of working hours could be influenced by AI, leading to significant job transformations, particularly in advanced economies where around 60% of jobs could be affected. While North America and China are projected to reap the most benefits, Europe and developing countries may experience more moderate increases.

Generative AI's economic impact isn't confined to improving productivity. It extends to reshaping market dynamics, altering competitive landscapes, and allowing the creation of new business models. In sectors like healthcare, generative AI can enhance diagnostic accuracy, personalize treatment plans, and streamline administrative tasks. Similarly, in finance, it can optimize trading strategies, improve risk management, and enhance customer service through intelligent chatbots. The ripple effects of these changes will require successful businesses worldwide to adapt and evolve much faster than their competitors.

Impact Across Business Functions

According to McKinsey, the potential of generative AI will extend across various business functions. In the short term, it is expected that 75% of its value will concentrate in customer operations, marketing and sales, software engineering, and research and development (R&D). Industries such as banking, high-tech, and life sciences are anticipated to witness the most substantial revenue impacts from generative AI.

See also: 'Data as a Product' Strategy

Customer Operations

In customer operations, AI can automate routine inquiries, enhance customer satisfaction through personalized interactions, and provide predictive insights that help anticipate customer needs. Advanced AI systems can analyze customer behavior and preferences, enabling businesses to offer tailored products and services, thus driving customer loyalty and increasing lifetime value.

Marketing and Sales

For marketing and sales, AI-driven analytics can optimize targeting, streamline lead generation processes, and improve conversion rates. AI algorithms can analyze vast amounts of data to identify patterns and trends much faster than human analysts. This capability allows marketers to craft highly personalized campaigns that resonate with individual customers, thereby maximizing the return on marketing investments.

Software Engineering and R&D

In software engineering and R&D, generative AI is accelerating the development process by automating coding tasks, identifying bugs, and suggesting improvements. A prime example of this is GitHub Copilot, an AI-powered code completion tool developed by GitHub and OpenAI. Copilot assists developers by suggesting code snippets, entire functions, and even complex algorithms based on the context of their work. It can significantly speed up coding processes, reduce repetitive tasks, and help developers explore new coding patterns. For instance, a developer working on a sorting algorithm might receive suggestions for efficient implementations like quicksort or mergesort, complete with explanations of their time complexity. AI-driven simulations can also enhance R&D efforts by predicting outcomes of various experiments and guiding researchers toward the most promising avenues. This acceleration not only reduces time-to-market but also fosters innovation and enhances competitive advantage.
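To give a flavor of the kind of suggestion described above, here is a simple quicksort with its time complexity noted in comments. It is illustrative only; Copilot's actual suggestions depend on the surrounding code and context.

```python
# A typical "suggested implementation": recursive quicksort, written for clarity
# rather than in-place efficiency.

def quicksort(items: list) -> list:
    """Average time complexity O(n log n); worst case O(n^2) when the pivot
    repeatedly splits the list unevenly."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort(left) + quicksort(middle) + quicksort(right)

print(quicksort([7, 2, 9, 4, 4, 1]))  # [1, 2, 4, 4, 7, 9]
```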

2. Challenges Organizations Are Facing With AI

Despite significant investments in data and AI, many organizations grapple with deriving measurable value. Vantage Partners and Harvard Business Review Analytic Services find that 97% of organizations are investing in data initiatives. Ninety-two percent are working with AI/ML in either pilot phases or production. Despite this, 68% fail to realize measurable value from AI. A disconnect exists: While 74% have appointed chief data or analytics officers, 61% lack a data strategy to support machine learning and data science.

The primary reasons for this disconnect include inadequate data quality, lack of skilled personnel, and insufficient integration of AI initiatives with business strategies. Many organizations collect vast amounts of data but struggle with ensuring its accuracy, consistency, and relevance. Additionally, the shortage of AI and data science talent hampers the effective implementation of AI projects. Lastly, without a clear strategy that aligns AI efforts with business objectives, organizations find it challenging to translate AI capabilities into tangible business outcomes.

3. AI Models and Related Learning Processes

Understanding AI models and their learning processes is fundamental to developing an effective data strategy. This knowledge illuminates the specific data requirements for various AI applications and informs how organizations should structure their data pipelines. By first exploring these technical foundations, we establish a clear context for the subsequent discussion on data strategy, ensuring that proposed approaches are well-aligned with the underlying AI technologies they aim to support.

Supervised vs. Unsupervised Learning

In the context of AI, supervised and unsupervised learning represent the two primary methodologies for training models.

  • Supervised Learning

Supervised learning relies on labeled data to train models, where the input-output pairs are explicitly provided. This method is highly effective for tasks where large, accurately labeled datasets are available, such as image classification, speech recognition, and natural language processing. The effectiveness of supervised learning is heavily contingent on the quality and accuracy of the labeled data, which guides the model in learning the correct associations. For enterprises, creating and maintaining such high-quality labeled datasets can be resource-intensive but is crucial for the success of AI initiatives.

The vast majority of enterprise AI use cases fall under supervised learning due to the direct applicability of labeled data to business problems. Tasks such as predictive maintenance, customer sentiment analysis, and sales forecasting all benefit from supervised learning models that leverage historical labeled data to predict future outcomes.

  • Unsupervised Learning

Unsupervised learning, on the other hand, involves training models on data without explicit labels. This approach is useful for uncovering hidden patterns and structures within data, such as clustering and anomaly detection. While unsupervised learning can be powerful, it often requires large volumes of data to achieve meaningful results. Enterprises may find it challenging to gather sufficient amounts of unlabeled data, especially in niche or specialized industries where data is not as abundant.

Many enterprises lack datasets large enough to train unsupervised models, which underscores the importance of data-centric approaches. High-quality, well-labeled datasets not only enhance supervised learning models but also provide a foundation for semi-supervised and transfer learning techniques, which combine smaller amounts of labeled data with larger unlabeled datasets.
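
To make the contrast concrete, here is a minimal sketch using scikit-learn with synthetic data; the dataset and model choices are purely illustrative.

    # Minimal contrast of supervised vs. unsupervised learning with scikit-learn.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    # Supervised: labeled examples (X, y) guide the model toward the right mapping.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("supervised training accuracy:", round(clf.score(X, y), 3))

    # Unsupervised: no labels; the model searches for structure on its own.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("cluster sizes:", [int((clusters == c).sum()) for c in (0, 1)])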

See also: Data Mesh: What It Is and Why It Matters

The AI Training Loop

AI systems need to be performant and reliable, and the AI training loop is the critical process that underpins their development and deployment. The loop consists of key stages: data collection and preparation, model training, evaluation and validation, model tuning and optimization, and deployment and monitoring. Each phase is crucial for building robust and reliable AI systems, with a strong emphasis on data quality and iterative improvement to achieve optimal performance.

  • Data Collection and Preparation

The first step in the AI training loop is the collection and preparation of data. This phase involves gathering raw data from various sources, which may include structured data from databases, unstructured data from text and images, and streaming data from real-time sources. The quality and relevance of the data collected are paramount, as they directly influence the effectiveness of the AI model.

Data preparation includes cleaning the data to remove inconsistencies and errors, normalizing data formats, and labeling data for supervised learning tasks. This process ensures that the data is of high quality and suitable for training AI models. Given that the success of AI largely depends on the quality of the data, this phase is often the most time-consuming and resource-intensive.
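
A minimal data-preparation sketch in pandas, with hypothetical column names and values, showing deduplication, type fixes, imputation, normalization, and scaling:

    import pandas as pd

    raw = pd.DataFrame({
        "policy_id": [101, 101, 102, 103],
        "premium": ["1200", "1200", None, "950"],
        "region": ["NE", "NE", "ne", "SW"],
    })

    clean = raw.drop_duplicates(subset="policy_id").assign(
        premium=lambda d: pd.to_numeric(d["premium"]).fillna(0.0),  # fix types, impute
        region=lambda d: d["region"].str.upper(),                   # normalize formats
    )
    # Min-max scale so downstream models see a consistent numeric range.
    clean["premium_scaled"] = (clean["premium"] - clean["premium"].min()) / (
        clean["premium"].max() - clean["premium"].min()
    )
    print(clean)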

  • Model Training

Once the data is prepared, the next step is model training. This involves selecting an appropriate algorithm and using the prepared data to train the model. In supervised learning, the model learns to map input data to the correct output based on the labeled examples provided. In unsupervised learning, the model identifies patterns and relationships within the data without the need for labeled outputs.

The training process involves feeding the data into the model in batches, adjusting the model parameters to minimize errors, and iterating through the dataset multiple times (epochs) until the model achieves the desired level of accuracy. This phase requires substantial computational resources and can benefit from specialized hardware such as GPUs and TPUs to accelerate the training process.
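
The mechanics of batches, epochs, and error minimization can be sketched in a few lines of NumPy; real projects would typically use a framework such as PyTorch or TensorFlow, and the synthetic regression data below is illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    true_w = np.array([1.5, -2.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=200)

    w = np.zeros(3)                      # model parameters to be learned
    lr, batch_size, epochs = 0.1, 32, 20

    for epoch in range(epochs):
        order = rng.permutation(len(X))  # shuffle the data each epoch
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            pred = X[idx] @ w
            grad = 2 * X[idx].T @ (pred - y[idx]) / len(idx)  # gradient of MSE
            w -= lr * grad                                    # step to reduce error
        if (epoch + 1) % 5 == 0:
            mse = float(np.mean((X @ w - y) ** 2))
            print(f"epoch {epoch + 1:2d}  mse={mse:.4f}")

    print("learned weights:", np.round(w, 2))  # approaches [1.5, -2.0, 0.5]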

  • Evaluation and Validation

After training, the model undergoes evaluation and validation to assess its performance. This step involves testing the model on a separate validation dataset that was not used during training. Key metrics such as accuracy, precision, recall, and F1-score are calculated to measure the model's performance and ensure it generalizes well to new, unseen data.

Validation also includes checking for overfitting, where the model performs well on training data but poorly on validation data, indicating it has learned noise rather than the underlying patterns. Techniques such as cross-validation, where the data is split into multiple folds and the model is trained and validated on each fold, help in providing a more robust assessment of model performance.
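
A minimal evaluation sketch with scikit-learn, computing the metrics above on a held-out split and then with 5-fold cross-validation; the data and model are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pred = model.predict(X_val)

    print("accuracy :", round(accuracy_score(y_val, pred), 3))
    print("precision:", round(precision_score(y_val, pred), 3))
    print("recall   :", round(recall_score(y_val, pred), 3))
    print("f1       :", round(f1_score(y_val, pred), 3))

    # Cross-validation trains and validates on multiple folds for a more
    # robust estimate than a single split.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="f1")
    print("cv f1 scores:", scores.round(3))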

  • Model Tuning and Optimization

Based on the evaluation results, the model may require tuning and optimization. This phase involves adjusting hyperparameters, such as learning rate, batch size, and the number of layers in a neural network, to improve the model's performance. Hyperparameter tuning can be performed manually or using automated techniques like grid or random searches.

Optimization also includes refining the model architecture, experimenting with different algorithms, and employing techniques like regularization to prevent overfitting. The goal is to achieve a balance between model complexity and performance, ensuring the model is both accurate and efficient.
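
A minimal grid-search sketch with scikit-learn; the estimator and parameter grid are illustrative choices rather than recommendations.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    param_grid = {
        "n_estimators": [50, 100],
        "max_depth": [3, 5, None],
    }
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid,
        cv=3,          # 3-fold cross-validation for each parameter combination
        scoring="f1",
    )
    search.fit(X, y)
    print("best params:", search.best_params_)
    print("best cv f1 :", round(search.best_score_, 3))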

  • Deployment and Monitoring

Once the model is trained, validated, and optimized, it is deployed into a production environment where it can be used to make predictions on new data. Deployment involves integrating the model into existing systems and ensuring it operates seamlessly with other software components.

Continuous monitoring of the deployed model is essential to maintain its performance. Monitoring involves tracking key performance metrics, detecting drifts in data distribution, and updating the model as needed to adapt to changing data patterns. This phase ensures the AI system remains reliable and effective in real-world applications.
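
One lightweight way to check for drift is to compare a live feature's distribution against the training distribution, for example with a Kolmogorov-Smirnov test; the sketch below uses synthetic data and an arbitrary significance threshold, whereas production systems typically rely on dedicated monitoring tooling.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference window
    live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)      # shifted in production

    stat, p_value = ks_2samp(training_feature, live_feature)
    if p_value < 0.01:
        print(f"Drift detected (KS={stat:.3f}, p={p_value:.4g}); consider retraining.")
    else:
        print("No significant drift detected.")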

  • Feedback Loop and Iteration

The AI training loop is an iterative process. Feedback from the deployment phase, including user interactions and performance metrics, is fed back into the system to inform subsequent rounds of data collection, model training, and tuning. This continuous improvement cycle allows the AI model to evolve and improve over time, adapting to new data and changing requirements.

4. Why Data Is as Important as Model Tuning

Data quality and quantity directly affect model performance, often surpassing the effects of algorithmic refinements. A robust data strategy achieves a critical balance between data and model optimization, essential for optimal AI outcomes and sustainable competitive advantage. This shift from a model-centric to a data-centric paradigm is crucial for organizations aiming to maximize the value of their AI initiatives.

The Pivotal Role of Data in AI

Andrew Ng from Stanford emphasizes that AI systems are composed of code and data, with data quality as crucial as the model itself. This realization requires a shift toward data-centric AI, focusing on improving data consistency and quality to enhance model performance. Data is the fuel that powers AI, and its quality directly affects the outcomes.

Data-Centric vs. Model-Centric AI

Historically, the majority of AI research and investment has focused on developing and improving models. This model-centric approach emphasizes algorithmic advancements and complex architectures, often overlooking the quality and consistency of the data fed into these models. While sophisticated models can achieve impressive results, they are highly dependent on the quality of the data they process.

In contrast, the data-centric approach prioritizes the quality, consistency, and accuracy of data. This paradigm shift is driven by the understanding that high-quality data can significantly enhance model performance, even with simpler algorithms. Data-centric AI involves iterative improvements to the data, such as cleaning, labeling, and augmenting datasets, to enhance model performance. By focusing on data quality, organizations can achieve better results with simpler models, reducing complexity and increasing interpretability.

The Need for Accurately Labeled Data

The need for accurately labeled data is particularly critical in supervised learning. High-quality labeled data ensures that models learn the correct associations and can generalize well to new, unseen data. However, obtaining and maintaining such datasets can be challenging and resource-intensive, underscoring the importance of robust data management practices.

For enterprises, this means investing in data labeling tools, employing data augmentation techniques to increase the diversity and quantity of labeled data, and implementing rigorous data quality assurance processes. Additionally, leveraging automated data labeling and machine learning operations (MLOps) can streamline the data preparation process, reducing the burden on data scientists and ensuring that high-quality data is consistently available for model training.
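
One lightweight form of automated labeling is rule-based weak supervision, in which simple heuristics produce provisional labels for later human review; the keyword rules and example texts below are hypothetical.

    # Rule-based weak labeling: keyword heuristics assign provisional sentiment
    # labels and route ambiguous cases to human annotators. Far cruder than
    # real labeling tooling; for illustration only.
    POSITIVE = {"fast", "helpful", "easy", "great"}
    NEGATIVE = {"slow", "confusing", "denied", "frustrating"}

    def weak_label(text: str) -> str:
        words = set(text.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "needs_review"  # send ambiguous cases to a human reviewer

    reviews = [
        "Claim was processed fast and the agent was helpful",
        "The portal was slow and the form was confusing",
        "Policy renewed",
    ]
    for review in reviews:
        print(weak_label(review), "|", review)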

See also: How External Data Is Revolutionizing Underwriting

The Enterprise Data Strategy

A data-centric approach is essential for enterprises aiming to leverage AI effectively. By prioritizing data quality and adopting robust data management practices, organizations can enhance the performance of AI models, regardless of whether they employ supervised or unsupervised learning techniques. This shift from model-centric to data-centric AI reflects a broader understanding that in the realm of AI, quality data is often more important than sophisticated algorithms.

If data quality is not given equal importance to model quality, organizations risk falling into a trap of perpetually compensating for noisy data. When input data is noisy, inconsistent, or of poor quality, even the most advanced AI models will struggle to extract meaningful patterns. In such cases, data scientists often find themselves fine-tuning models or increasing model complexity to overcome the limitations of the data. This approach is not only inefficient but can lead to overfitting, where models perform well on training data but fail to generalize to new, unseen data. By focusing on improving data quality, enterprises can reduce noise at the source, allowing for simpler, more interpretable models that generalize better and require fewer computational resources. This data-first strategy ensures that AI efforts are built on a solid foundation, rather than constantly trying to overcome the limitations of poor-quality data.

Ensuring that enterprises have access to high-quality, accurately labeled data is a critical step toward realizing the full potential of AI technologies.

5. Potential Generative AI Approaches

Organizations can adopt different approaches to AI based on their strategic goals and technological capabilities. Each approach has distinct data implications that chief data officers (CDOs) must address to ensure successful AI implementations. The three primary approaches are: Taker, Shaper, and Maker.

Taker

The "Taker" approach involves consuming pre-existing AI services through basic interfaces such as application programming interfaces (APIs). This approach allows organizations to leverage AI capabilities without investing heavily in developing or fine-tuning models.

Data Implications:

  • Data Quality: CDOs must ensure that the data fed into these pre-existing AI services is of high quality. Poor data quality can lead to inaccurate outputs, even if the underlying model is robust.
  • Validation: It is crucial to validate the outputs of these AI services to ensure they meet business requirements. Continuous monitoring and validation processes should be established to maintain output reliability.
  • Integration: Seamless integration of these AI services into existing workflows is essential. This involves aligning data formats and structures to be compatible with the API requirements.
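
A minimal sketch of the Taker pattern in Python: calling a vendor's AI service over a REST API and validating the output before it enters a business workflow. The endpoint, token, and response fields are hypothetical.

    import requests

    API_URL = "https://api.example-ai-vendor.com/v1/classify"  # hypothetical endpoint
    API_TOKEN = "replace-with-your-token"

    def classify_claim(description: str) -> dict:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"text": description},
            timeout=10,
        )
        resp.raise_for_status()
        result = resp.json()
        # Validate outputs before trusting them downstream.
        if "label" not in result or result.get("confidence", 0) < 0.7:
            raise ValueError(f"Low-confidence or malformed response: {result}")
        return result

    # Example (requires a real endpoint):
    # classify_claim("Water damage to kitchen ceiling after a pipe burst")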

Shaper

The "Shaper" approach involves accessing AI models and fine-tuning them with the organization’s own data. This approach offers more customization and can provide better alignment with specific business needs.

Data Implications:

  • Data Management Evolution: CDOs need to assess how the business’s data management practices must evolve to support fine-tuning AI models. This includes improving data quality, consistency, and accessibility.
  • Data Architecture: Changes to data architecture may be required to accommodate the specific needs of fine-tuning AI models. This involves ensuring that data storage, processing, and retrieval systems are optimized for AI workloads.
  • Data Governance: Implementing strong data governance policies to manage data access, privacy, and security is essential when fine-tuning models with proprietary data.

Maker

The "Maker" approach involves building foundational AI models from scratch. This approach requires significant investment in data science capabilities and infrastructure but offers the highest level of customization and control.

Data Implications:

  • Data Labeling and Tagging: Developing a sophisticated data labeling and tagging strategy is crucial. High-quality labeled data is the foundation of effective AI models. CDOs must invest in tools and processes for accurate data annotation.
  • Data Infrastructure: Robust data infrastructure is needed to support large-scale data collection, storage, and processing. This includes scalable databases, high-performance computing resources, and advanced data pipelines.
  • Continuous Improvement: Building foundational models requires continuous data collection and model iteration. Feedback loops should be established to incorporate new data and improve model accuracy over time.

The approach an organization takes toward AI—whether as a Taker, Shaper, or Maker—has significant implications for data management practices. CDOs play a critical role in ensuring that the data strategies align with the chosen AI approach, facilitating successful AI deployment and maximizing business value.

6. Core Components of an AI Data Strategy

An effective AI-driven data strategy encompasses several key components.

Vision and Strategy

Aligning data strategy with organizational goals is paramount. The absence of a coherent data strategy is a significant barrier to expanding AI capabilities. A robust data strategy provides a clear framework and timeline for successful AI deployment. This strategy should be revisited regularly to ensure alignment with evolving business goals and technological advancements.

Data Quality and Management

Data quality remains a persistent challenge for AI. Organizations must prioritize sourcing and preparing high-quality data. Employing methodologies like outlier detection, error correction, and data augmentation ensures the data used in AI models is accurate and reliable. Establishing data governance frameworks, including data stewardship and quality control processes, is essential for maintaining data integrity over time.

AI Integration and Governance

Integrating AI into business functions necessitates a comprehensive approach. This involves addressing technical aspects (architecture, data, skills) and strategic elements (business alignment, governance, leadership, and culture). Ensuring robust governance frameworks for data quality, privacy, and model transparency is essential. These frameworks should include clear policies for data access, usage, and protection, as well as mechanisms for monitoring and mitigating biases in AI models.

See also: Can AI Solve Underlying Data Problems?

7. Evolution of Data Architectures in the Context of AI

Future Trends and Considerations

The rise of generative AI introduces new requirements for data architectures, including seamless data ingestion, diverse data storage solutions, tailored data processing techniques, and robust governance frameworks. The workforce will evolve alongside these architectures: as AI technologies mature, the value of traditional degree credentials may shift toward a skills-based approach, fostering more equitable and efficient job training and placement.

Data Architectures

Modern data architectures must support the diverse and dynamic needs of AI applications. This includes integrating real-time data streams, enabling scalable data storage solutions, and supporting advanced data processing techniques such as parallel processing and distributed computing. Furthermore, robust data governance frameworks are essential to ensure data quality, privacy, and compliance with regulatory standards.

The Evolving Skill Set for the AI Era

As AI becomes more prevalent, the skills required in the workforce will evolve. While technical skills in AI and data science will remain crucial, there will be an increased demand for "human" skills. These include critical thinking, creativity, emotional intelligence, and the ability to work effectively with AI systems. Lifelong learning and adaptability will be essential in this ever-changing landscape.

Workforce Transformation

Organizations must invest in continuous learning and development programs to equip their workforce with the necessary skills. This includes offering training in AI and data science, as well as fostering a culture of innovation and adaptability. By nurturing a diverse skill set, organizations can better leverage AI technologies and drive sustainable growth.

8. Methodologies for Data-Centric AI

To effectively harness the potential of AI, organizations must adopt a data-centric approach, employing methodologies that enhance data quality and utility:

  • Outlier Detection and Removal: Identifying and handling abnormal examples in datasets to maintain data integrity (a minimal sketch follows this list).
  • Error Detection and Correction: Addressing incorrect values and labels to ensure accuracy.
  • Establishing Consensus: Determining the truth from crowdsourced annotations to enhance data reliability.
  • Data Augmentation: Adding examples to datasets to encode prior knowledge and improve model robustness.
  • Feature Engineering and Selection: Manipulating data representations to optimize model performance.
  • Active Learning: Selecting the most informative data to label next, thereby improving model training efficiency.
  • Curriculum Learning: Ordering examples from easiest to hardest to facilitate better model training.
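
To make the first item concrete, here is a minimal outlier-detection sketch using a robust z-score based on the median absolute deviation; the claim amounts and threshold are illustrative.

    # Robust z-scores use the median and MAD, which are less distorted by the
    # outlier itself than the mean and standard deviation.
    import numpy as np

    claim_amounts = np.array([1200, 950, 1100, 1300, 980, 45000, 1050, 1210])

    median = np.median(claim_amounts)
    mad = np.median(np.abs(claim_amounts - median))
    robust_z = 0.6745 * (claim_amounts - median) / mad
    outliers = claim_amounts[np.abs(robust_z) > 3.5]

    print("flagged outliers:", outliers)  # [45000]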

The evolution of data strategy is not just a trend; it's a necessity for organizations aiming to leverage AI for competitive advantage. By prioritizing data quality, aligning AI initiatives with business goals, and establishing strong governance frameworks, organizations can unlock the transformative potential of AI, driving productivity, innovation, and economic growth in the years to come.


Shravankumar Chandrasekaran

Shravankumar Chandrasekaran is a global product manager at Marsh McLennan.

He has over 13 years of experience across product management, software development, and insurance. He focuses on leveraging advanced analytics and AI to drive benchmarking solutions globally. 

He received an M.S. in operations research from Columbia University and a B.Tech in electronics and communications engineering from Amrita Vishwa Vidyapeetham in Bangalore, India.


Alejandro Zarate Santovena

Alejandro Zarate Santovena is a managing director at Marsh-USA.

He has more than 25 years of global experience in technology, consulting, and marketing in Europe, Latin America, and the U.S. He focuses on using machine learning and data science to drive business intelligence and innovative product development globally, leading teams in New York, London, and Dublin.

Santovena received an M.S. in management of technology - machine learning, AI, and predictive modeling from the Massachusetts Institute of Technology, an M.B.A. from Carnegie Mellon University, and a B.S. in chemical engineering from the Universidad Iberoamericana in Mexico City.

Embedded Insurance: Challenges and Opportunities

Embedded insurance can delight customers, but outdated systems and data privacy concerns pose hurdles to overcome.

For every business, customer satisfaction isn't just a goal—it's the foundation for building long-term relationships and ensuring success. Merchants are finding new ways to satisfy their customers, and embedded insurance is one of them. It can delight customers by offering insurance right alongside merchants' products. 

But while businesses are excited over the growth potential of embedded insurance, there's one industry that's feeling the pressure: insurance. For insurers and insurtechs, the opportunities and challenges of embedded insurance are neck and neck. 

To embrace the shift, they must face these hurdles head-on.

Personalized Customer Experience

Merchants can ace customer experience with embedded insurance, but for insurers, it's not so easy. Here's what's holding them back:

1. Outdated UI/CX

While the world is busy building modern apps, many insurers are still stuck with complex legacy systems due to a lack of resources and complicated regulations. Consumers enjoy a smooth UI experience until they are redirected to the insurance portion of the journey. Jargon-filled insurance policies and complex claim processes often frustrate customers. Though insurers have started to make a shift, there's still a lot to be done. Until then, they can use the following strategies to deliver a worthwhile customer experience:

  • Minimal Integration: Use the merchant's platform to show basic policy information and claim status. For anything complex, redirect to the insurer's website.
  • Dedicated Customer Support: Support customers in navigating the complex platform through email and chat.
  • Third-party Integration: Integrate with third-party digital tools to help overcome the outdated UI/CX hurdle.

Consider a customer renting a car online. The process is quick and seamless, but when the insurance part hits, the experience often becomes fragmented. Insurers can make a difference by letting the rental company offer only basic insurance details online and assisting customers with complex claim processes. Insurers like Geico and Lemonade have already started to improve their customer service with advanced support systems and third-party AI chatbots. Similar solutions can help customers navigate complex processes and enhance their experience.

2. Experience Through Merchants

When insurance products are bundled with the merchant's, ensuring a seamless customer experience falls on insurers. They need to establish clear communication with the merchant to avoid misalignment while framing the terms and conditions for the product. Insurers can improve the customer experience by involving merchants in all their processes:

  • Merchant Support: Have the merchant handle customer inquiries and limit direct customer interaction. For instance, Tesla supports its insurance provider, State National Insurance, by handling most of the insurance queries on Tesla's platform.
  • Co-Branded Marketing: Initiate joint marketing efforts to build brand awareness and trust.
  • Data-Sharing Agreements: Maintain customer insights by creating data-sharing agreements with the merchants.

If you are a travel insurer, you can better use embedded insurance by partnering with merchants like online booking platforms. Imagine a customer booking a flight and being presented with a personalized insurance offer that covers everything from trip cancellations to lost luggage, all without leaving the booking site. This not only enhances the customer experience but also increases the likelihood of a purchase. Additionally, promoting products together with the booking platform makes the insurance offering an extension of the travel booking process, which can help drive more sales.

3. Data Privacy and Customer Knowledge

As embedded insurance involves data-sharing between merchant and insurer, it's the insurer's responsibility to safeguard customer data and educate customers about the terms and conditions. This can help avoid discrepancies in their claim journey. Here's how insurers can do it:

  • Regulatory Compliance: Adhere to data privacy regulations such as GDPR and CCPA.
  • Standard Privacy Policy: Provide clear privacy policies to customers that outline data collection and usage practices.
  • Necessary Customer Education: Offer all the must-know information about the insurance products on the merchant's platform.

Transparency and trust are crucial to customer satisfaction. Imagine the impact of a seamlessly integrated experience where a customer is given all the necessary information about privacy policies and the associated risks while booking a flight or receiving employer-provided health insurance. This level of transparency not only builds customer loyalty but also strengthens the brand's reputation in an increasingly competitive market. Insurers can enhance the customer experience by keeping customers informed about how their data is being used and ensuring they remain protected and engaged.

Enhancing CX With New Insurance Models

The advent of connected devices such as wearables, IoT sensors, and smart home systems has opened a new era of possibilities for the insurance industry. It has led to new insurance models that entice users to move away from the old model of paying annual premiums. Data generated by connected devices further helps insurers offer more specific and personalized products.

1. On-Demand Insurance

Is it possible to have insurance only when you need it, no matter how briefly? On-demand insurance offers this flexible and cost-effective solution by allowing you to activate coverage exactly when you require it. Consider a taxi service where the provider offers insurance coverage along with the fare for that specific trip rather than requiring payment for an annual policy.

This model provides the convenience of insurance for immediate needs and helps avoid unnecessary costs for coverage you don't use. Uber offers its customers and drivers the option to purchase insurance coverage on demand, meaning both parties are protected only for the duration of the ride. Uber leverages connected devices like GPS and telematics to accurately track its vehicles, ensuring that both riders and drivers have protection precisely when they need it.

2. Contextual Insurance

Ever thought about insurance that adapts to your real-time needs? Contextual insurance makes this a reality by providing coverage based on the user's activities. Imagine you've just installed a smart home system that can analyze data like occupancy patterns, appliance use, and even environmental conditions. This data offers a clearer picture of potential risks in your home, allowing your insurance coverage to adjust.

This approach ensures that you're not paying for unnecessary coverage but are instead getting it precisely when you need it. Google Nest's extended warranty is a prime example of contextual insurance. At the time of purchase, you're offered a plan that not only covers accidental damage but also uses data from your thermostat to offer protection to your home.

3. Pay-Per-Use Model

Why should consumers pay heavy premiums for products they use only occasionally? The pay-per-use model lets them pay for insurance only when they actually use the product, integrating coverage into existing services or products so it can be drawn on solely when needed.

Consider buying high-tech products. Merchants offer a standard warranty period for the product, and if customers wish, they can opt for an extended warranty. This extended insurance coverage can also be applied to specific parts of the product, perhaps only for the motor of a washing machine. 

The First Step

Customer experience is not just about keeping customers happy—it's about acquiring and retaining them with minimal costs. That's why many businesses are shifting toward embedded insurance. To keep pace, insurers are working relentlessly to overcome the challenges of embedded insurance and deliver personalized customer experiences. The emergence of new insurance models is a clear sign that they are on the right track.

The future of insurance is undoubtedly embedded, and current progress will benefit insurers, merchants, and, most importantly, customers.

How AI Is Changing Insurance

Despite ethical challenges and high costs, AI is poised to drastically improve risk assessment, customer experience, and operational efficiency.

With capabilities to revolutionize risk assessment, customer experience, and operational efficiency, artificial intelligence is set to unlock significant economic value in the insurance industry.

Fears about using AI ethically can hold insurers back, and the costs associated with building scalable AI solutions may seem daunting, but insurers should rest assured there are several ways to monetize these services.

The insights below explore how AI will enhance industry practices and how insurers can balance development costs and navigate regulatory challenges to ultimately shape the future of insurance.

How AI can create significant value in coming decades

AI technology will revolutionize the insurance industry by enhancing risk assessment, improving fraud detection, and automating claims processing, leading to more precise pricing and reduced costs. It will enable personalized customer experiences through chatbots and predictive analytics, fostering better engagement and loyalty. AI will also streamline operations through robotic process automation, freeing resources for strategic tasks and driving efficiency. Moreover, AI will facilitate new business models like usage-based insurance and peer-to-peer platforms, catering to evolving consumer preferences and opening new revenue streams. These advancements will generate significant economic value and drive industry growth. Some examples include:

  • Allstate's AI Chatbot "ABIE": Allstate uses an AI chatbot named ABIE (Allstate Business Insurance Expert) to assist small business owners in selecting appropriate coverage. The chatbot provides instant, personalized insurance quotes based on user inputs and real-time data analysis.
  • Lemonade's Fraud Detection: Lemonade, a digital insurance company, employs AI to detect fraudulent claims. Their AI system, "Jim," reviews and processes claims in seconds, cross-referencing data points and flagging suspicious activity, leading to lower fraud rates and faster claim resolutions.
  • Progressive's Snapshot Program: Progressive Insurance's Snapshot program uses AI and telematics to monitor driving behavior. Policyholders receive personalized discounts based on their driving patterns, promoting safer driving habits and reducing the likelihood of accidents.

How AI service providers can monetize services

GenAI costs are concentrated in foundational model development, which involves significant investments in R&D, computational resources, and talent. These models are then integrated into platforms requiring robust infrastructure and API development. Service providers customize and fine-tune these models for specific industries, incurring additional costs for scalability solutions. Businesses access AI capabilities through subscription or licensing, incorporating AI into their products and improving operational efficiency. This value is ultimately passed on to end-users through enhanced products and services, ensuring cost recovery and profitability for AI service providers. AI service providers can balance infrastructure and development costs by:

  • Subscription Models: Offering tiered plans for different business sizes.
  • Usage-Based Pricing: Charging based on actual usage, similar to cloud services.
  • Value-Based Pricing: Charging a percentage of savings or earnings generated by AI solutions.
  • Partnerships: Integrating AI into broader platforms through revenue-sharing agreements.
  • Industry-Specific Solutions: Creating tailored AI applications for specific industries.
  • Data Monetization: Selling anonymized data and providing market analytics.
  • Consulting Services: Offering implementation and support services for AI integration.
  • Proprietary Tools: Developing advanced, proprietary AI platforms with premium features.

How regulatory and legal challenges will affect AI

Regulatory and legal considerations will increase compliance costs and slow innovation. Companies will face stricter data privacy and security requirements, accountability demands for transparent and fair algorithms, and potential intellectual property disputes. Liability concerns for AI system failures will necessitate comprehensive insurance, while ethical considerations will require careful navigation. Regulatory hurdles can create market entry barriers, especially for startups, and differing regulations across countries can complicate international operations.

We must strike a balance between compliance and innovation to harness the benefits of AI while mitigating potential risks. The insurance industry should continue to collaborate with AI experts and regulators to establish a code of conduct specific to the sector. By implementing an industry-specific code of conduct, insurance companies can have greater assurance that they are not subject to flawed decision-making processes resulting from unethical AI practices, and greater confidence in applying this technology to routine processes.

The integration of AI in insurance reflects a significant shift that will redefine industry standards and consumer expectations. From enhancing fraud detection to personalizing customer interactions, the impact will be far-reaching. As regulatory frameworks develop, companies must navigate complexities to harness AI's full potential. Embracing AI will drive growth and ensure competitive advantage in this increasingly digital and fast-paced environment.


Leandro DalleMule

Leandro DalleMule is general manager, North America, at Planck.

He brings 30 years of experience in business management to the team. Prior to Planck, he spent six years as AIG's chief data officer. He was also senior director of big data analytics at Citibank and head of marketing analytics at BlackRock, and he held leadership roles in Deloitte's advanced analytics practice.

He holds a B.Sc. in mechanical engineering from the University of Sao Paulo, Brazil, an MBA from the Kellogg School of Management, and a graduate certificate in applied mathematics from Columbia University.