
An Interview with Kyle Evancoe

In this month's ITL Focus, Paul Carroll and Kyle Evancoe, VP of sales at InvoiceCloud, explore the evolution of insurance claims.


Kyle Evancoe 

Kyle Evancoe, vice president of sales at InvoiceCloud, is a proven SaaS leader who helps insurance carriers simplify complex payment processes and digital interactions. His focus is on next-generation inbound premium collection and outbound disbursement solutions that improve customer satisfaction and financial results.



Paul Carroll   

From where I sit, claims is one of the hottest topics around. I'm hoping you’ll start us off by explaining why. What’s going on? 

Kyle Evancoe 

Primarily, as it did with so many other industries, the pandemic greatly accelerated claims transformation. Overnight, carriers were forced to rapidly implement some level of digital solution because nobody could go into the office. We did, however, hear a number of stories of firms sending people into the office to deal with claims checks, which further emphasized the need to change systems that in most cases had been in place for decades.

The expectation is that the consumer experience will be Amazon-like, even though insurance operations all have workflow complexities and legacy systems to manage.

The recent hard market and inflation, as well as the enormous impact of catastrophic losses, are further ratcheting up the pressure to reduce churn and maximize savings overall. All this pushes claims further to the forefront of carriers' minds.

Paul Carroll   

Drilling down, what are the main things you think people ought to focus on when they try to figure out ways to improve the servicing of claims?

Kyle Evancoe   

Some 44% of people research what the claims experience is like with a carrier before they even sign up to do business with it. Policyholder experience is at the top of the list for many.

I’ll give you a personal example. Within my family, we just had an auto claim. Carpets in the SUV were destroyed by an open sunroof and a particularly determined thunderstorm. With the significant water damage, we had to call the insurer, as that was the easiest way for us to get in touch with them. We then had to fight for an adjuster to get out there within a week and a half. The claim was extended three months due to parts being on back order, so we had to wait. When we went to get the rental car, the CSR [customer service representative] had booked us a small sedan for a family of four and two 100-pound dogs.

Three months pass, and we wait to receive the check in the mail. Well, wouldn't you know it, there's a lienholder, so now we've got to drive an hour and 40 minutes to a Walmart that housed a branch of the credit union just to get a signature, a process the branch wasn't familiar with and had to figure out. We pack everyone back into the car, drive out there, cash the check and are finally able to pay for the claim.

That's the state of what many are looking at now from a regular personal lines auto claim.   

This example is personal to me, and, yes, it's frustrating, but it's relatively benign. From an experience perspective, we should be thinking about the worst case. These are the stories we hear every day from carriers about the worst moment in someone's life: a family without a place to stay after a catastrophe, the business owner who loses the retail space and all their inventory to a hurricane, tornado, fire, flood or whatever it may be. The expectation in that moment is that the claimant, who has paid their premiums for years, will now get from the insurance carrier the lifeline that helps them protect what's most dear. That may mean getting them food, a place to stay for the evening or even inventory to get back up and running. Whatever the need, this is the moment when insurance companies are expected to roll in like the white knight.

So there needs to be an effortless way for the policyholder to reach the carrier and an accelerated way for that carrier to accept, review and pay a claim. That includes a quick, if not instant, way to get funds to customers. For instance, if the bank isn't open (due to a catastrophe), customers need digital options, right?

Improving the various aspects of the “claims process” is a step in the right direction, but it all boils down to the fruition of the claim: getting the payment out to the customer in a way they can use, as quickly as possible.

Paul Carroll   

In my experience, every dog thinks it's a lap dog, and I'm sure your 100-pound dogs loved the idea of climbing into a little sedan with you and your family.  

The persistence of manual processes in insurance has puzzled me for a long time. A mantra of mine for years has been, "Let's burn all the fax machines." Now I'm going to start calling for burning all the checkbooks, as well. Why is it taking so long to get us from manual payments to digital payments?

Kyle Evancoe   

The crux of the issue for many is the pervasive paper-based processes most insurers still have in place today. Evolving those processes means we’re talking about changing or upgrading core systems and other technologies, workflows, staffing and instituting change management. These types of total transformations take a ton of time, money and resources across many departments to be successful. This is why 60-plus percent of carriers are still only able to make checks available for claims payments.

However, many are realizing the need now to make changes where they can and with the greatest impact, while deciding on their eventual path forward. The right payments partner (I may be biased here) can help a carrier quickly improve both the internal operational efficiencies and external experiences in months, not years. As I mentioned before, we can also act as a layer of protection for those experiences while larger-scale transformation occurs in the background, with folks external to the carrier none the wiser.

Paul Carroll   

Do individuals prefer checks?

Kyle Evancoe   

Paper checks are definitely not the preferred experience overall. Our own research in this year's State of Online Payments Report shows that, of those paying online, 73% would prefer to receive claims payments directly into their bank account, compared with 14% who prefer a paper check.

People want options: everything from direct debit and traditional ACH to virtual cards, and, yes, there are still those wanting a paper check. We are finding that people are adopting the more instant options readily and believe they will quickly become the standard.

Most people are looking for inline communication on a claim. They want to receive a text letting them know their payment is available, and they want to act on that text without having to sign in. We find that requiring logins and passwords and things like that -- we call them login walls -- is one of the major blockers, leading to delayed payment acceptance and a poor experience. From our perspective, we've engineered past that and can pre-authenticate people based on several factors. Clients can go right in, pick their method of payment and get their money.

Paul Carroll   

Why is all this change happening now? 

Kyle Evancoe   

The insurance industry has lagged on technology as compared with other industries due to a vast amount of technical debt and inflexibility of legacy systems. Much of the initial change over the last several years went into policy and underwriting, refining the business that was being written. Now, all the pressures to create efficiencies and cost savings I mentioned earlier, and to provide an Amazon-like experience that balances technology and the human touch, are driving a focus on transformation around claims.

Beyond experience, insurance companies are realizing that $6 a check -- which is the average cost of sending out a check when you build in processing, clearing, reconciliation, all those other aspects -- directly reduces the bottom line.

Paul Carroll   

What are the key technology threads that you’re focused on as you try to make payments more efficient and drive digital adoption? 

Kyle Evancoe   

From an experience point, continuing to evolve self-service options throughout the claims process will allow carriers to meet their customers where they are. Providing them greater transparency throughout the process, from filing a claim through choosing how they want to receive their money, will increase overall adoption of digital options, allowing everyone to experience the downstream benefits.

Real-time options are a focus of ours, and the true SaaS nature of our platform and long-term relationships with our processing partners allow us to rapidly adopt and make available new and faster payment technologies as they come out. These drive further digital adoption by policyholders.

Finally, we can integrate and work with ever-evolving core and ancillary technologies throughout a carrier's tech stack as it drives greater digital transformation. Being true SaaS already makes us the outlier when it comes to many of the complaints we hear in the market about vendors running over implementation estimates or even being unable to implement due to obscene backlogs. This is a core strength of ours that we will continue to invest in as our current and new core system partners continue to evolve their platforms.

Paul Carroll   

Would you project out a bit and describe what an idealized claim experience could be? I’m guessing you don’t think you need all the grief you experienced with your lap dogs and the ruined carpet in your SUV. 

Kyle Evancoe   

At the end of the day, two things provide the perfect experience. The carrier needs to process and close out the claim quickly, and the payment needs to get into the customer’s hand quickly.

My personal ideal claims experience, as much of our research points to, is mobile-based. I or anyone else will have the ability to report a claim via an app and quickly take some pictures to submit. Then the claim will be processed quickly and a decision made via AI or via an adjuster who steps in and takes a quick look. I'll have the ability to see where my claim is in the process and get that claim closed in a day, or something much quicker than what we see today. I also will have the ability to receive a payment inline while speaking with the CSR or claims adjuster, possibly right from a text experience. Finally, I'll have the ability to choose my instantaneous payment method.

The differentiators for carriers will be in how these things are accomplished. For example, continuing the personal auto example with a lienholder, a virtual payment option would need the functionality and network that allow the lienholder to sign off on my payment digitally, without me needing to hunt down the right branch or wait for a wet signature on a check.

Paul Carroll   

Do you have any other thoughts or things we might not have brought up yet?

Kyle Evancoe   

As carriers continue to focus on claims, due to the experience and cost-saving benefits, they are realizing that the experience funnels down to the point of payment and the speed at which they can provide the options their customers want.

Paul Carroll   

Here’s hoping. Thanks, Kyle. This has been a great conversation. 


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

How NOT to Handle New Technology

Rite-Aid's disastrous rollout of facial recognition offers lessons to insurers as they try to figure out how to use generative AI.


My mantra for innovation has for decades been, "Think Big, Start Small, Learn Fast." Among the many possible ways to get that wrong, large companies seem to go with, "Think Big, Start Big." 

If they're lucky, they just wind up wasting money when an innovation, inevitably, doesn't roll out as envisioned. Less lucky companies also alienate a significant chunk of their customers by presenting them with some clunky new product or process. Really unlucky companies also wind up in legal trouble. 

Well, Rite-Aid hit the trifecta when it broadly rolled out facial recognition technology that was designed to prevent shoplifting. It lost a bunch of money, ticked off a host of customers and wound up in trouble with federal authorities, including for racial bias.

The nuances of facial recognition bear some resemblance to what insurance companies are finding as they try to find breakthroughs with AI, so it's worth understanding Rite-Aid as a cautionary tale.

Let's look at Rite-Aid to see what NOT to do.

Facial recognition technology returns a lot of false positives. It also has inherent racial bias, because the technology does a much better job of recognizing the faces of white people than of Black people or Latinos. And that's today's generation of technology -- the latest and greatest version.

Yet Rite-Aid began rolling out the technology way back in 2012, and it did so to hundreds of stores, not in some sort of tightly controlled pilot. 

Like many "Think Big, Start Big" mistakes, Rite-Aid's was driven by a combination of a compelling business need and the promise of a breakthrough technology. 

Rite-Aid, like all pharmacies, has a real problem with shoplifting, a high percentage of which is committed by a small number of people. So you can imagine why Rite-Aid would be susceptible to a sales pitch about magical technology that would let it know when people convicted or suspected of theft entered a store. 

In any case, Rite-Aid jumped right in. It built a database and had the technology alert employees when someone thought to be a match walked into the store. An employee would then follow the person around or call the police, even if they hadn't seen a crime committed.

The problems should have been obvious to Rite-Aid. According to the Washington Post:

"Huge errors were commonplace. Between December 2019 and July 2020, the system generated more than 2,000 'match alerts' for the same person in faraway stores around the same time, even though the scenarios were 'impossible or implausible,' the [Federal Trade Commission] said. In one case, Rite Aid’s system generated more than 900 'match alerts' for a single person over a five-day period across 130 stores, including in Seattle, Detroit and Norfolk, regulators said."

Once, "employees called the police on a Black customer after the technology mistook her for the actual target, a White woman with blond hair," the article said. "The system generated thousands of false matches, and many of them involved the faces of women, Black people or Latinos."

Rite-Aid, which reached a settlement with the FTC last week that includes a promise not to use facial recognition technology for five years, said it was merely conducting "a pilot" test and noted that it ceased using the technology in 2020. The company filed for bankruptcy protection in October because of losses and opioid-related lawsuits and is shrinking and reorganizing.

Insurance companies would do well to keep Rite-Aid in mind as they consider the opportunities presented by generative AI. The temptations might be more manageable in our industry, because we don't have as defined a problem as shoplifting that could be made to simply disappear if generative AI works perfectly. But temptations are still there.

Whatever you're considering, you should certainly think big, because big opportunities are out there to be had. But start small. Think Big, Start Big is a recipe for failure, even if you don't end up with the FTC knocking on your door with an armful of subpoenas.

Cheers,

Paul

P.S. If you're interested in reading more about Think Big, Start Small, Learn Fast, my friend and frequent co-author Chunka Mui lays out our rubric here.

5 AI Trends You Can't Ignore in 2024

From enhanced computing power for AI to its ethical implications in creative endeavors, there are both opportunities and challenges.


With 2024 basically at our doorstep, the landscape of artificial intelligence (AI) has evolved tremendously over the past year, heralding transformations across various industries, including insurance. This rapid evolution is marked by a series of disruptive AI trends that are set to redefine the way businesses operate.

From enhancing computing power for advanced data processing to the ethical implications of AI in creative endeavors, these trends present both opportunities and challenges. In the insurance sector, the impact of these advancements ranges from improved risk assessment and efficient claims processing to navigating new regulatory landscapes.

We at InclusionCloud have explored these pivotal AI trends and have identified their implications and the critical considerations for the insurance industry as it stands on the cusp of a technological revolution.

Trend #1: Elevated Computing for Advanced AI

The complexity of AI models is rising dramatically, demanding more robust computing resources than ever before. This advancement in computational power is essential to manage the extensive data processing these sophisticated AI models require. In sectors such as insurance, this translates into quicker, more precise data analysis, enhancing risk assessment and streamlining claims processing, thereby revolutionizing efficiency and accuracy in decision-making.

Trend #2: AI's Foray Into Creative Realms

AI is carving its niche in creative fields, offering innovative tools like DALL-E 2 and ChatGPT that redefine problem-solving and solution design. This incursion is transforming areas ranging from graphic design to copywriting, ushering in a new era of creativity. However, the ethical dimension of AI in creativity, particularly in the context of deep fake potential, calls for stringent guidelines and transparent use to ensure responsible application in these creative ventures.

See also: AI: Beyond Cost-Cutting, to Top-Line Growth

Trend #3: AI as a Collaborative Workforce Ally

Beyond mere automation, AI is integrating into the workforce as a complementary tool to human skills. Applications like Microsoft Copilot are becoming invaluable in saving time for employees, allowing them to dedicate efforts to more important tasks. In the insurance industry, such tools can revolutionize administrative duties by aiding in tasks such as drafting personalized customer communications, scheduling management and client data organization, thereby enhancing operational efficiency.

Trend #4: Navigating the AI Regulatory Landscape

Governmental bodies are now stepping up to frame regulations, ensuring responsible AI usage. For insurance companies, this evolving regulatory environment necessitates a careful approach to harness the benefits of AI while adhering to legal standards. Navigating these laws is crucial for deploying AI ethically and avoiding potential legal complications, marking a pivotal aspect of future AI integration strategies.

Trend #5: Synthetic Data: A New Frontier in AI Training

The use of synthetic data in AI model training is gaining momentum, particularly in sensitive sectors. Predictions indicate that by 2024, a significant portion of AI development data will be synthetic. For the insurance industry, this trend could revolutionize risk assessment and fraud detection models, enhancing accuracy while upholding customer privacy. This shift toward synthetic data represents a significant stride in data management, balancing innovation with privacy concerns.

See also: 5 Ways Generative AI Will Transform Claims

Conclusion

AI is just starting to show its true potential. This technology is not just an incremental change; it's reshaping entire industries, redefining roles and creating new paradigms in business and society.

But with all this exciting change, we also need to be careful. The path of AI disruption is laced with complexities. The ethical implications, including concerns about privacy, job displacement and the potential for biased algorithms, require careful consideration and management. 

As these technologies become more integrated into our professional and personal lives, it's crucial to proceed with a balanced approach to maximize benefits while mitigating risks.


Nicolas Baca-Storni


Nicolas (Nick) Baca-Storni is the chief revenue officer at InclusionCloud, focusing on digital transformation and IT outsourcing.

His 15-year tech industry tenure includes establishing key partnerships with industry leaders such as Salesforce and ServiceNow and engaging in strategic development and innovation.

How Wearables Can Improve Worker Safety

Wearable devices can warn workers of dangerous situations, helping them stay safe while lowering workers' comp claims.


Wearable technology has proven useful for tracking employee health and monitoring workplace conditions. The devices can lead to healthier, happier and more productive workforces, which could significantly reduce workers’ compensation claims.

What is the potential of wearable technology? How much can it affect workers’ comp and risk management? Here’s everything you need to know about the current situation and coming challenges of wearable tech integration in the workplace.

How Wearable Technology Improves Worker Safety

Wearable technology primarily functions through microprocessors connected to the internet that collect and sync data with other electronics. It comes in many shapes and sizes, from small rings to helmets and other protective equipment. Here are some common examples of wearable devices in various industries:

  • Smart jewelry: Smartwatches, wristbands, rings and necklaces track many key health metrics, including steps, calories burned, resting heart rate, blood pressure, sleep quality and even stress levels.
  • Smart clothing: Smart clothing covers a larger area of the body, giving employers more insights into the health of their workers. There are smart shoes that monitor walking and running form, socks that can detect developing foot lesions and t-shirts that monitor breathing patterns and muscle tension.
  • Exoskeletons: Workers in construction, warehousing and other dangerous industries can wear upper-body exoskeletons that improve mobility, reduce discomfort and lower the risk of injuries.
  • Head-mounted displays: HMDs use virtual or augmented reality to immerse the wearer in a 3D online environment. This function is extremely useful for training employees in technical fields such as nursing and metal fabrication that have little margin for error.
  • Implantables: Implantable monitoring devices are now available in microchips that allow constant supervision of employees with underlying health risks, such as seizures and asthma attacks. Automated supervision leads to faster reaction times in an emergency.

All these devices play the same important role — tracking employee health and preventing accidents through biometrics. Sometimes, this role is as simple as fixing an employee’s posture, while other times it requires more detailed health information. The greatest example is identifying and addressing the causes of chronic health conditions.

According to the Centers for Disease Control and Prevention (CDC), chronic conditions such as obesity, heart disease and diabetes make up 90% of all healthcare costs in the U.S. These issues largely stem from unhealthy lifestyles and work habits. Wearable technology can help employees eliminate their bad habits and create a strong health and safety culture.

A 2019 systematic review of these devices found people who used them became more physically active, and wearables were more effective than traditional lifestyle interventions. The reason is simple -- the devices keep people informed, hold them accountable and make their health a top priority.

Wearable technology also has a strong psychological impact. These devices act like digital doctors and personal trainers, helping users make informed health decisions and providing constant encouragement. As all risk management professionals know, raising awareness is the first step in addressing any workplace hazard.

Wearable technology fills another key role — data organization. Growing businesses must keep employee health info on a central online platform to ensure fast and accurate insights when new health risks arise. Wearable devices can connect to any credentialed worker’s computer or mobile phone in a customized format.

See also: A 20-Year Outlook for Employee Benefits

Impact on Workers’ Compensation Claims

The information wearable technology provides can affect workers’ compensation in several big ways. These devices can help companies identify hazards through predictive analytics and algorithms and prevent accidents from happening in the first place. Fewer accidents equal fewer claims.

For example, wearable proximity sensors know when employees are working with or around heavy machinery, dangerous chemicals, unstable platforms and other dangerous areas. They can also identify the presence of workers nearby. Making employees aware of these hazards is crucial for avoiding accidents and injuries.

A 2021 study of wearables in the hospitality industry showed a 50% to 60% decrease in injury frequency in hotel environments with high rates of strains and sprains. The devices limited the frequency of risky behaviors and helped employees avoid dangerous situations in the first place, acting as a proactive, rather than reactive, form of workers' comp.

If an accident happens to occur, the real-time data collection of wearable devices leaves little room for interpretation during the investigation. They can identify if the employee was practicing unsafe behavior, if a specific workplace hazard was to blame or if the injury was just a freak accident. All three outcomes are accounted for.

In other words, wearable devices provide the necessary documentation to settle workers' comp claims before any legal action is taken. That documentation streamlines claims processing by providing objective, data-driven assessments for both parties. By showing who or what was truly at fault, wearables contribute to more accurate injury assessments and financial compensation, if necessary.

Wearable technology can help companies facilitate more effective rehabilitation programs for sick and injured employees. Employers know the precise reason for the accident, so they know which workplace policies or training modules to update. Wearables can also monitor the employee’s rehabilitation, give health advice and maintain morale.

Potential Returns of Wearable Technology

The health and safety benefits of these devices are impressive, but are they worth the investment? The answer depends on a few factors, starting with the devices' costs. Smaller offerings like smartwatches and rings usually cost a few hundred dollars each, but a more advanced device can be a significant investment.

For example, the latest Apple Watch Ultra 2 — designed for outdoor work environments — has a price tag of $799. A full-body exoskeleton for high-risk working conditions costs between $70,000 and $85,000, while smaller exo-vests cost around $7,000. The latest VR and AR headsets from Apple and Microsoft are expected to retail for $3,500 in 2023.

Of course, the upfront cost of each device is just the beginning. You must also pay for subscriptions to health and wellness applications, employee training and long-term maintenance. Wearable devices might help people avoid accidents on the job, but they can still get damaged in the process.

Next, you need to weigh these costs against the potential returns. Wearables might be expensive to install and maintain, but they can lead to great long-term savings in risk management. With fewer illnesses and injuries, less money goes toward clean-up efforts, new training materials and rehabilitation programs.

Research from the Journal of Medical Research also indicates that wearable monitoring can lead to yearly savings between $500 and $1,000 per employee. These savings come from fewer hospital visits, improved medication adherence and better chronic disease management. One device maker claims a 50% decrease in claims costs and a 72% decrease in missed work days.

Investor interest is usually a good indicator of a product’s cost-effectiveness, and the market for industrial wearables is projected to cross $8.63 billion by 2027, with an annual growth rate of 15%. This growth is a good sign, though not a guarantee that wearables will pay off in the long run.

See also: Highlights on New Workers' Comp Rules

Challenges and Concerns

Although the reported annual savings and decreases in workers’ comp claims look promising, there is a problem with these statistics. Wearable monitoring might be able to save companies up to $1,000 per employee, but how long will the benefits take to accumulate? It’s true that wearables promote healthy habits and lifestyles, but these changes don’t happen overnight.

For example, a 25-year-old warehouse worker who eats poorly and puts his body at risk won’t feel the effects of his unhealthy lifestyle for many years. He seems like the perfect candidate for a wearable fitness monitor, but how long would it take to see the positive effects? There is no magic number, but it takes an average of six months to solidify a new health routine.

The wellness claims share the same core problems. The evidence — while compelling — is largely anecdotal and doesn’t cover a long enough timeline. It may take a business years to see tangible returns from investing in wearable monitoring devices. Additionally, the benefits are bound to vary because the employees will use the devices differently.

Aside from the uncertain claims about a strong ROI, another great challenge facing the growth of wearable technology is its reliability. Accuracy can vary widely by developer, giving inconsistent results. Most models provide accurate step counts and heart rate measurements, but more difficult metrics such as breathing patterns and sleep quality aren't so dependable.

Inaccurate data is just one potential technical issue. The devices can also trigger false alarms. For example, an employee who has asthma could accidentally activate their smart shirt’s respiratory monitor after coughing. False alarms lower the workforce’s sense of urgency and might cause a delayed reaction in a real emergency.

Businesses must also be aware of any updates from the Occupational Safety and Health Administration. Certain wearables that qualify as medical devices could become required in the near future. Cardiovascular and respiratory devices are the most likely to become mandatory, especially in the healthcare industry.

As with any new monitoring system, employees might have some privacy concerns. People naturally don’t like the idea of constant monitoring and data collection on the job. However, brands can alleviate these concerns by being transparent. Improving cybersecurity and establishing limits on the use of personal data are key steps in wearable technology integration.

See also: Top Employee Incentive Trends for 2024

Future of Wearable Technology Still Uncertain

There is strong evidence wearable technology can reduce workers’ compensation and healthcare costs. Market activity is ramping up, and wearables are becoming more commonplace in high-risk industries. However, the future of this technology still has some uncertainties. Consider the financial, legal and ethical implications of a long-term investment before making the leap.


Jack Shaw


Jack Shaw serves as the editor of Modded.

His insights on innovation have been published on Safeopedia, Packaging Digest, Plastics Today and USCCG, among others.


How to Succeed at Data Modernization

A well-crafted strategy must be rooted in business goals, driven by updated processes and systems and supported by sound data management.


In today's rapidly evolving insurance industry, data is the lifeblood that fuels business growth.

Insurers must harness the power of data and analytics to inform decision-making, drive innovation and maximize competitiveness. Embarking on the journey toward data modernization is therefore essential to staying ahead -- or just keeping up -- and a well-defined data and analytics strategy is a critical first step.

See also: 6 Steps for Cultivating a Data Culture

Building the Foundation: Strategic Alignment

The strength of any data and analytics strategy depends on alignment with the overarching business strategy. Without such alignment, it can be challenging to secure funding and assemble the right resources to drive data modernization. Insurance companies must clearly demonstrate how prioritizing and investing in data and analytics capabilities translates into tangible benefits for the organization. A well-aligned strategy serves several key purposes:

  • Drives profitability: The insurance industry is inherently data-centric, and optimizing data's potential to achieve business goals can provide a competitive edge. By ensuring a clear path to desired outcomes, insurers can identify areas that require investment, engendering confidence in the decision-making process. Furthermore, emerging trends can be identified early, enabling the company to stay ahead of the curve.
  • Maximizes efficiencies: Often, insurance companies focus too much on accessing data and building data infrastructure and not enough on developing insights. Shifting to a results-oriented focus on actionable insights can dramatically improve efficiency by identifying resource-intensive processes, removing bottlenecks and embracing a source-of-truth philosophy to ensure data accuracy.
  • Manages costs: The adoption of cloud infrastructure can help reduce run times, improve technology management and consistency and lead to substantial cost savings. Real-time cost monitoring provides transparency and allows for dynamic adjustments to resource allocation, enabling cost-efficient utilization.
  • Increases innovation: By leveraging cutting-edge data and analytics techniques, insurers can develop new processes, enhance their capabilities and use novel data assets. Innovation should not be pursued for innovation's sake, but always directed toward the organization's business strategy and long-term goals.

Moving Forward: Next-Generation Data and Analytics

After establishing organizational alignment around strategy and securing corresponding investments into platforms, processes and people, insurers are equipped to modernize their data and analytics programs. This modernization includes three primary elements:

  • Shorter analytics platform lifecycles: The modern data environment demands a shift in expectations regarding platform lifecycles. Instead of expecting systems to last for decades, organizations should plan and invest for three to five years to allow for agility and quick adaptation to emerging technologies. This requires developing clear definitions around the purpose and interaction points of these platforms, as well as driving business requirements for enhancements.
  • Third-party data integration: The integration of third-party data is becoming increasingly critical. It is essential to create an environment for managing external datasets and connecting them with internal ecosystems. This includes addressing legal considerations and ensuring that data acquisition strategies benefit multiple functions within the organization.
  • End-to-end business process integration: Above all, a data and analytics program needs to work, so ensuring interoperability between systems is vital. This involves driving platform uniformity, addressing delays in data flow, enhancing security and clarifying data ownership models across the entire data lifecycle.

See also: Why Becoming Data-Driven Is Crucial

Progressing Prudently: Data Management

Data management underpins the effectiveness of any data and analytics program. While embracing a new era of data-enabled products and processes, insurers must ensure they proceed prudently, giving due attention to data governance, ownership and ethics:

  • Data governance: Effective governance is the cornerstone of data management and ensures compliance with internal, regulatory and contractual requirements. Using systems that enable data usage and lifecycle management is crucial for maintaining data quality, integrity and compliance.
  • Data ownership: This should be approached strategically, starting with user access management and documentation of roles, responsibilities and data flows. In applying the enterprise governance and ownership model locally, mid-level managers play a vital role in translating enterprise expectations to make them more consumable for local users and in line with regulatory and contractual requirements.
  • Model ethics: Ethical considerations should be foundational to any model. As data sources continue to expand and analytics become more advanced, it is paramount for insurers to develop and adhere to enterprise model ethics guidelines that are aligned with their values and goals.

Conclusion

For today's insurer, data modernization is no longer an option but a necessity. It starts with a well-crafted data and analytics strategy -- rooted in business objectives, driven by updated processes and systems and supported by sound data management. By constructing a well-defined strategy around these pillars, organizations can navigate the complex terrain of modernization and position themselves at the forefront of the industry's data revolution.

Blind Spots in Catastrophe Modeling

When adjusting catastrophe models for current and future climates, consider whether unquantified risks could be lurking in the tail.


Climate change is altering the frequency and severity of extreme weather events such as wildfires, floods and windstorms. Over the last few years, it has become increasingly common for insurers to adjust catastrophe model output to reflect these changes, driven in part by new regulatory requirements. 

While most of the focus has been placed on methods for adjusting frequency-severity relationships, little attention has been given to scenario completeness, particularly in the tail of the distribution, where some of the most severe impacts are expected to materialize. As a result, insurers could be underestimating the physical risks to which they are exposed.

See also: How AI Can Help Insurers on Climate

The dominance of short return periods

When it comes to physical climate change impacts on the insurance industry, one of the most cited research papers is Knutson et al. (2020), which presented a synthesis of the expected changes in global tropical cyclone activity for a 2°C warming scenario. Many insurers have used this paper as a basis for adjusting the frequency and severity of tropical cyclones in catastrophe models to quantify the effects climate change could have on insurance claims. 

But if you have ever applied the Knutson et al. frequency-severity adjustments to a catastrophe model, you might have noticed something that at first seems counterintuitive: After the correction, short return period losses tend to increase more percentage-wise than those in the tail of the distribution. Figure 1 shows the impact of a 20% increase in the number of Category 4 and 5 landfalling hurricanes in a U.S. tropical cyclone model. The largest effect is seen near the bottom of the exceedance probability (EP) curve, with a 15% increase at the one-in-two-year return period loss. In comparison, tail losses around the one-in-200-year return period only increase by 5.5%.

The reason for this inconsistency is simple: Short return periods compose most of the loss distribution, so adding in events fattens this portion of the EP curve the most. In a 100,000-year simulation, for example, 90% of the years are at or below the one-in-10 return period. This means that if a new event is included at random, it has a 90% chance of being added to a year at or below the one-in-10.
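
To make this mechanism concrete, here is a minimal Monte Carlo sketch of how a frequency increase propagates through an empirical EP curve. The Poisson frequency and lognormal severities are illustrative assumptions, not the U.S. tropical cyclone model behind Figure 1, so the exact percentages will differ; the point is simply that most simulated years sit at short return periods, so added events fatten that part of the curve the most.

```python
import numpy as np

rng = np.random.default_rng(0)
N_YEARS = 100_000

def simulate_annual_losses(event_frequency, n_years):
    """Toy aggregate-loss model: Poisson event counts, lognormal severities."""
    counts = rng.poisson(event_frequency, size=n_years)
    return np.array([rng.lognormal(0.0, 1.0, size=c).sum() for c in counts])

def return_period_loss(annual_losses, return_period):
    """Empirical loss exceeded on average once every `return_period` years."""
    return np.quantile(annual_losses, 1.0 - 1.0 / return_period)

baseline = simulate_annual_losses(2.0, N_YEARS)   # assumed baseline frequency
adjusted = simulate_annual_losses(2.4, N_YEARS)   # 20% more events, severity unchanged

for rp in (2, 10, 50, 200):
    change = return_period_loss(adjusted, rp) / return_period_loss(baseline, rp) - 1.0
    print(f"1-in-{rp:>3} year loss change: {change:+.1%}")

# By definition, 90% of simulated years sit at or below the 1-in-10 return period,
# so an event dropped into a random year lands in that range about 90% of the time.
print(np.mean(baseline <= return_period_loss(baseline, 10)))
```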

This effect is counterintuitive because we know that the upper tail of the distribution is likely to contain some of the most severe physical consequences of climate change, particularly under higher emission pathways. Does it, therefore, stand to reason that the percentage increase in tail return periods should be lower compared with shorter return periods? What could we be missing from our scenarios that explains this?

Figure 1: The percentage change in losses for a 20% increase in the number of Category 4 and 5 landfalling hurricanes in a U.S. tropical cyclone model

Unquantified tail risks 

As Nassim Taleb has written in relation to financial markets, traditional models do not handle fat-tailed events well. This limitation means that important things are likely to be missing from the view of risk. The same is true for traditional catastrophe models when it comes to climate change: While frequency-severity distributions can be conditioned for various climate states, they underestimate the true tail risk, especially when we start to think about things like tipping points, feedback loops and systemic risks. 

This is not to diminish the usefulness of catastrophe models. They combine detailed hazard modeling, engineering knowledge and information on exposures in ways that other tools, such as climate models, cannot. However, just as insurers evaluate and quantify non-modeled risks today, for example under Solvency II, they need to deploy the same thinking and methodologies when it comes to climate change adjustments and scenarios. 

For example, traditional risk assessment methods often focus on a single hazard at a time, but research indicates that the likelihood of hazards co-occurring – such as extreme winds and precipitation – will increase in a warmer world. An increase in cross-peril correlation will lead to increased tail risks, but most insurers are not currently considering this possibility in their modeling. 

There is also mounting evidence that some tipping points, such as the collapse of ice sheets or the melting of permafrost, may be triggered at a global mean temperature of 1.5°C. Such events would have far-reaching ramifications, with severe consequences not just for the insurance industry but society as a whole. The world is expected to reach 1.5°C at some point in the 2030s, meaning some of these fat tail outcomes could be closer than many realize.  

In addition to these direct physical risks, there are also several indirect effects that are frequently overlooked, including supply chain disruption, food insecurity, geopolitical conflict and infrastructure failure. All of these have the potential to manifest as systemic effects, which will stress global economies. 

See also: Glimmers of Good News on Climate (Finally)

Climate Amplification Factors

The breadth and complexity of climate change tail risks mean that careful consideration is required when incorporating them into our modeling. In some situations, it will be possible to explicitly simulate the effects within existing frameworks – for example, cross-peril correlations can be included in simulation-based capital models. However, it will be more challenging for other risks, particularly those with socio-economic and systemic components.

Fortunately, there is an analogue in current catastrophe modeling frameworks when it comes to representing socio-economic factors that are difficult to quantify: post-event loss amplification (PLA). PLA is applied to the most severe events, such as Hurricane Katrina in 2005, to account for a range of complex tail sources of loss, including economic demand surge, long-term evacuation and systemic economic problems. 

The same approach could be used to model climate change tail risk using “climate loss amplification” (CLA). The more severe an event, the larger the CLA factor, reflecting the increasing likelihood of socio-economic effects materializing, such as geopolitical conflict and infrastructure failure.
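
As a rough sketch of what a CLA adjustment might look like in practice, the function below applies a severity-dependent factor to event losses, in the spirit of how PLA is applied today. The functional form, threshold and cap are hypothetical assumptions for illustration, not a published methodology.

```python
import numpy as np

def climate_loss_amplification(event_loss, threshold=1e9, max_factor=1.3, ramp=5e9):
    """Apply a severity-dependent 'climate loss amplification' (CLA) factor.

    Losses below `threshold` pass through unchanged; above it, the factor ramps
    linearly toward `max_factor`, reflecting the growing chance of systemic,
    socio-economic effects as events become more extreme. All parameters here
    are illustrative assumptions, not calibrated values.
    """
    excess = np.maximum(event_loss - threshold, 0.0)
    factor = 1.0 + (max_factor - 1.0) * np.minimum(excess / ramp, 1.0)
    return event_loss * factor

# A moderate event is untouched; a Katrina-scale event is amplified toward the cap.
print(climate_loss_amplification(np.array([2e8, 5e9, 4e10])))
```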

When you next think about building or updating your climate change scenarios, remember to consider not only how to best adjust frequencies and severities but also the completeness of your risk assessment.

Building Resilience for Future Generations

Updated building codes have saved the U.S. over $1.5 billion in avoided losses annually since 2000. Insurers must promote such practices.


What does it take to protect our communities and build a safer world?   

Climate change, natural disasters and other factors elevating property risk worldwide are driving action across society, whether from governmental agencies, technology leaders or property insurers. Groups vary in their approach, but they share a goal: to build resilience on a global scale. To reach this goal together, we need to have honest, open conversations about our new normal – particularly how insurers can work to secure the safety of future generations.  

The Formula: Survivability and Recovery 

There are two fundamental components to resilience. The first is survivability. If severe weather hits a property, what will the damage be and how can it be minimized through specific physical design improvements? The second is recovery. When a property is damaged, how difficult is it to rebuild in a timely manner?  

To answer these questions, insurers are starting to pay attention to government initiatives, IBHS quality standards and innovations in AI to tackle resilience. Close collaboration with both public and private sector groups will be key to build resilience in our cities and neighborhoods.

Survivability and recovery are complex issues that require both government intervention and capital incentives. One obvious approach is to construct buildings with damage-resistant materials, such as hurricane-proof glass and fire-resistant insulation.  

Groups such as the National Association of Home Builders and the Federal Emergency Management Agency (FEMA) have compiled a litany of recommendations based on the latest developments in sustainable and resilient construction. However, these materials tend to be marginally more expensive and largely are not mandated. Additional efforts are needed to encourage the application and adoption of resilient construction techniques, particularly as pan-global re-insurers are seeing the benefit of construction quality indexes. 

One way to accomplish this is to modernize building codes and fast-track accompanying permits where construction is aligned to mitigation designs. A recent FEMA study shows that updated codes have saved the U.S. over $1.5 billion in avoided losses annually since 2000. Naturally, part of this effort will depend on direct government action.  

The Biden administration, for example, recently earmarked over $200 million to update codes and provide incentives for sustainable construction. That being said, the government's influence over building codes is limited and piecemeal at best. It is often easier to establish incentive structures than to enforce strict state-wide mandates. In other words, it is carrots and not sticks that will move the needle on property resilience in the U.S. This is an area where insurers have a unique opportunity to drive influence by insisting on greater resilience. 

See also: Empowering the Underwriter of the Future

Encouraging Resilience 

The model for encouraging resilience has long existed in the insurance industry. Auto owners with a track record for safe driving can obtain discounted rates; primary carriers can improve their reinsurance premiums by mitigating the level of risk exposure in their book. Encouraging safer property construction with mitigation strategies from IBHS and others is no different. All it requires is that insurers resolve to become active promoters of resilience. Take the collaboration between the state of Queensland, the Commonwealth Scientific and Industrial Research Organization (CSIRO) and the insurer Suncorp as they adapt for wildfire in Australia. 

Opportunities to collaborate present themselves throughout the policy lifecycle. When writing new business in hazard-prone regions, insurers can leverage new technologies (such as computer vision and machine learning models) to precisely determine structural vulnerability based on factors like tree overhang, roof condition and age, and wildfire defensible space. They can then offer policyholders a choice: Eliminate the risk drivers for a better premium or do nothing and remain exposed. If insurers put in place consistent incentive structures through AI-based learnings, the market will naturally encourage more resilient construction, leading to less costly losses.

Beyond new business, insurers should take advantage of the recovery and claims process to build resilience. This shift in mindset is already starting to happen. Historically, insurers have responded to loss events by returning a property to its pre-damaged condition. But in an increasingly risky world, this may no longer be the best option.  

Instead, insurers should motivate policyholders to rebuild stronger properties using the latest, damage-resistant materials and design techniques that increase robustness and meet the new building codes for natural peril. It may seem counter-intuitive to say insurers should put more money into replacing damages than was actually lost. However, by expanding the classical concept of indemnity to prevent future damage, insurers will end up saving more on avoided claims in the long run.  

See also: The Biggest Opportunity for Innovation

Actively Promoting Resilience 

Whether it's through legislation or technological advancements, the intensity of effort across the globe shows there are pockets of hope. But they, and we, can't do the work alone; it requires a herculean effort across many fronts. We need greater collaboration at the federal, state and municipal levels with the property insurers themselves. For their part, property insurers will struggle to survive in the years to come if they don't start promoting resilience today.

Equally, as new communities are being built to cater to new lifestyle demands and promote climate adaptivity, insurers need a seat at the table to influence smart city planning. This collaboration is the change management of the future, and it needs to start now to have an impact. There are more reasons to do so than sheer altruism. Everyone, not just insurers, should be taking a pragmatic view. If properties aren't made more resilient, the cost of major loss events will increase exponentially. And if an insurer’s only response is to cancel policies en masse in high-risk peril regions, policyholders will lose their trust in carriers. 

SUVs Are Confounding Auto Insurers

Historical data can be an unkind partner in times of sudden change. For risk-based pricing for auto insurance, these are such times. 


KEY TAKEAWAY:

--Auto insurers are chasing rate increases -- but based on outdated models. SUVs are holding their value much more like trucks than like cars, so traditional models don't reflect the value at risk with the current auto fleet, and consumers are buying lots of them, so their share keeps growing.

----------

A funny thing happened on my way to Japan (courtesy of an untimely layoff). I spent the Monday before the Tuesday morning flight from Los Angeles to Tokyo doing a “Marty” and tailgating at the fall meeting of the Casualty Actuarial Society (CAS) at the Westin Bonaventure Hotel & Suites. 

A “Marty” in this respect is showing up in the public space around an industry event and connecting with friends and new contacts – a handy way to network while unemployed and off the corporate expense account. (I do get clearance to pop in informally – a nod to the decades of presentations I have made at these types of gatherings.)

Two years prior (almost to the day), I addressed a crowd of executives at the American Property Casualty Insurance Association (APCIA) meeting on customer satisfaction, empathy and, of course, risk-based pricing. The economists at the Bureau of Labor Statistics (BLS) had recently adopted a new method to generate insights on the cost of vehicles -- new and used. The method was a sea change from the economic tradition of trending attributes over time to a very modern, big data approach that samples actual sales transactions. While the BLS techs cited a variety of data sources they could use, they affirmed that data from J.D. Power was their current source.

In a whirlwind 24 months, I took a crash course in all things automotive, and here in my 25th month I am sharing how to apply that newly gained knowledge to the 25 years of insurance analytics I've witnessed since moving from healthcare to P&C back in 1998. Far from this being a soliloquy, I have had dozens upon dozens of conversations with actuaries, product managers, data scientists, executives, vendors, competitive researchers, consultants, industry analysts and regulators.

Here is where things are…and likely what’s next.

See also: Automakers Build New Insurance Future

The Value of a Vehicle Has Lost Meaning

In the lobby of the Westin, I held court at breakfast tables and at the lobby tables – those floating by the central coffee bar and the ones on the other side by the pseudo-café near the rideshare rally point. Chief actuaries, CAS dignitaries, session speakers and a raft of long-tenured attendees variously stopped by, including a bunch of colleagues from my ISO days (2006-2014), many of whom have found leadership roles both at and beyond ISO.

Restless quarters of broad-based rate taking had started to improve loss ratios. Progressive had just posted a great third quarter and was expected to begin to grow again (on purpose) soon. Investment returns were contributing to improving results, as well. But there was a definite “something missing.” Some explanation was needed for why everything done so far was still largely not enough, especially for physical damage.

The “I told you so” temptation was palpable, but being out of work did temper my flair. Instead, I walked folks through the presentation I had made the Friday before, “Collateral Valuation Risk,” at the Model Risk Managers’ International Association Summit on Best Practices.

The data explaining the value at risk for vehicles was at odds with current pricing practices developed using decades of experience. At the heart of the issue – the SUV. 

This style of vehicle, the SUV, acts like a truck in holding its value longer than a car does, and it has dominated new vehicle sales in recent years. The change in product mix was blowing up historical trends, and then the pandemic magnified the effect in every possible way -- used values, replacement values, trim and option upgrades and "above MSRP" sales across the country.

Existing "set it and forget it" valuation methods used a traditional two-step pricing classification for physical damage base rates: (step 1) MSRP base price new and (step 2) the staid and systemically used Year-Model Factor tables. In its 50-plus years of use, this process had never seen a body-style product mix shift like the rise of the SUV.

What Goes Up Stays Up

While there is no doubt added turmoil when mixing in electric and hybrid power trains and all sorts of ADAS technologies, the simple value of the subject at risk clarifies why base rate bloat is not improving things as hoped. 

The older vehicles in operation are mostly automobiles and are already at their lowest factor in the Year-Model Factor tables -- both national tables and state-specific tables level off near 30% of base price new. Increasing premium on those vehicles by a large percentage is offset by their small marginal contribution. On the other end of the spectrum, newer SUVs were retaining value and even selling above asking price. So, eight out of 10 newer vehicles broke ranks with cars, and the value of the subject at risk was much higher than historical Year-Model Factors would suggest.

We don't need slide rules and integral calculus to figure out that when the value of what is being insured goes way up, the price to risk should, too. Comically, consumer appetite for more trucks (SUVs) triggered this outcome – the pandemic, ADAS, supply chain disruptions and EV shifts are just normal, and abnormal, cyclic happenings.

Risk Management Opportunities in Auto Insurance

Risk-based pricing practices have evolved to deepen the segmentation of both the risk of a risk and the value of a risk. Pricing accuracy may improve along perils, coverages and loss costs and be tied to the changing value of the subject at risk – except in standard personal auto insurance, where the traditions for valuing the subject at risk have left the industry too inflexible to deal with changing times and changing consumer choices.

Most important for current and future pricing is that shifting consumer appetite for brands, body styles and optional features means the mix of vehicles in operation, and their loss-cost dynamics, cannot fit into inflexible traditional processes.

See also: Crash Detection Will Transform Auto Claims – No, Really

Practical Advice Over Coffee and Follow-Up Calls

Many vehicles have wide dollar gaps between base configuration and "as built" configuration – meaning that on day one the insured value is too low. In addition, wide disparities in future values exist among brands, body styles and trim levels. What may make more sense is aligning with "as built" starting values for new vehicles and using actual cash values for comparable "as built" used vehicles. You can still float a percent-of-"as built" rule to mesh with decades of prior premiums, but you need to get to current insurance-to-value levels for the policies in force today.
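Here is a minimal sketch of that alternative, again in Python; the option prices, retention rate and function names are hypothetical stand-ins for market-observed actual cash values, not a filed methodology.

```python
# Hypothetical sketch of an "as built" alignment; option prices, the retention
# rate and these function names are invented for illustration only.

def as_built_value_new(base_msrp: float, option_prices: list[float]) -> float:
    """Start a new vehicle at its 'as built' value: base MSRP plus installed options."""
    return base_msrp + sum(option_prices)

def as_built_value_used(as_built_new: float, observed_retention: float) -> float:
    """Anchor a used vehicle to an actual-cash-value estimate for a comparably
    equipped ('as built') vehicle, expressed here as a market-observed retention
    rate rather than a fixed factor curve."""
    return as_built_new * observed_retention

# A vehicle with a $35,000 base MSRP and $7,000 of options starts at $42,000,
# not $35,000; three years later, an observed 85% retention keeps the subject
# at risk at $35,700 instead of a tabled value near 70% of the base MSRP.
new_value = as_built_value_new(35_000, [4_000, 2_000, 1_000])
print(new_value, as_built_value_used(new_value, 0.85))
```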

Rate-filing reviews across states and carriers conclude that in personal auto insurance the standard approach is to use easy-to-access estimates of base-model new vehicle values (MSRP base price new) and then assume that every vehicle always gets cheaper than that top value over time (a common Year-Model Factor curve regardless of brand or configuration -- body style, drivetrain, engine size, luxury upgrades or technology features).

Sometimes the factor curve varies by state rather than following a national curve, to account for regional pricing economies, but each curve is monotonically descending and starts from the lowest MSRP for the vehicle's make, model and year. More granular uses of MSRP pricing appear with structured vehicle identification number (VIN) sequences, but the result is the same – none of the optional or "as built" features above base price new are included in the value of the subject at risk.

Both the value at risk and the risk of the risk are in flux with no clear end in sight. That makes old models and existing predictions sometimes unfit for use and leaves new data and new models in the lurch while companies scramble to survive.

A mindful and auditable governance structure for analytics is emerging for next-generation models, yet more work needs to be done to transparently understand existing models and methods institutionalized over the last several decades.

The Biggest Opportunity for Innovation

In this Future of Risk Forecast, Nick Lamparelli says firms have mostly digitized -- but still can't find the information they need when they need it. 


Nick Lamparelli has been working in the insurance industry for nearly 20 years as an agent, broker and underwriter for firms including AIR Worldwide, Aon, Marsh and QBE. Simulation and modeling of natural catastrophes occupy most of his day-to-day thinking. Billions of dollars of properties exposed to catastrophe that were once uninsurable are now insured because of his novel approaches.


Insurance Thought Leadership:

What technology now in the market do you believe will have the biggest transformative impact on insurance and risk management in the next five years?

Nick Lamparelli:

I believe IoT and sensor technology is getting traction, and insurance has finally noticed. Sensors can do a lot to notify people when events are occurring or are about to occur, and these technologies can significantly reduce both the frequency and severity of claims. They also make the insurer highly relevant to the policyholder day to day, so insurers can get more innovative about coverage design and user experience. Insurers that invest in embedding IoT stand to be seen through a different lens every time a notification on someone's smartphone says "no water was detected," letting policyholders truly feel at ease and sense that their insurance premium is actually doing something for them.

Insurance Thought Leadership:

What do you see as the biggest obstacles to insurance innovation, and how would you recommend overcoming them?

Nick Lamparelli:

The biggest obstacle is inertia. And inertia is a byproduct of culture. Many insurance organizations do not have the corporate governance that forces them to enact change (see the mutuals and farm bureaus). For many of these firms, only the cold, hard reality of an existential financial threat will force someone, anyone, to make the changes that need to be made.

Insurance Thought Leadership:

What is an area that you believe remains untapped/unfulfilled/overlooked for the promise of innovation in insurance?

Nick Lamparelli:

Knowledge management. We spend way too much time looking for things: searching for files, attachments, emails, etc. Insurers have done a decent job digitizing, but it hasn't helped them because now they have a mountain of data and no ability to mine it. Knowledge management, including using AI technologies to make sense of that data, will be crucial and unveil a ton of insights into a company's own data sets.

Insurance Thought Leadership:

How do you believe AI will transform insurance and risk management?

Nick Lamparelli:

Aside from where it is used now (to detect fraud, mostly), I think AI in knowledge management will be the first big value-producing outcome of generative AI. (See my prior answer.)

Insurance Thought Leadership:

Climate risk is leading to more frequent and severe loss events and contributing to high premiums and insurers withdrawing from some markets. How do you see technology being applied to mitigate risks and improve resilience?

Nick Lamparelli:

First, the jury is still out on more frequent and severe events. I am not sure we know that as of yet. Second, it doesn't matter if the climate is or isn't changing. Mega weather events have always played a critical role in insurance and still do. Technology innovation in the engineering for extreme wind, flood and fire events is crucial to our resilience. We know how to build resilient properties; we need a supply chain and investment that can lower the costs so these technologies can reach more properties. Wind-resistant glass, roof protections, external mist sprinklers and flood gates are all expensive solutions, but they work, and I am confident that technology and innovation in the engineering spaces will drive down their costs. I think insurance can play a critical role in encouraging that push by providing financial incentives so property owners want to invest in them.

Insurance Thought Leadership:

You’ve contributed a lot over the years to Insurance Nerds, which focuses on “Insurance, Careers and Technology.” Do you believe that insurance is becoming more attractive as an industry because of how it is leveraging technology to solve big problems?

Nick Lamparelli:

I do not think it's become more attractive. Young people don't know anything about insurance until they need to (usually when they buy a car or become an adult). By then, they see insurance through the various stereotypes associated with it, instead of the vibrant, fun and engaging career choice it could be. 


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Top 10 Tech Breakthroughs in 2023

The development of a free, industry standard for designing chips suggests that computing power will keep growing exponentially. 


When I transferred to the Wall Street Journal's San Francisco bureau in 1996, the bureau chief had a story all lined up for me. He had sold the front page editor on the notion that Moore's law was dying. After decades during which the computing power of a chip doubled every year and a half or so at no increase in cost, the technology for etching devices on chips seemingly couldn't be improved further. 

Every reporter lives to be on the front page, and that was especially true in those days at the WSJ, where the stories in column one and column six (known as "leders") were the stars of each day's paper. But as I reported the story, I couldn't convince myself that Moore's law was dead. Yes, the current strain of etching technology was hitting the limits of physics, but there were other, albeit highly speculative, approaches that might lead to a breakthrough, and an entire industry needed to make one of them work. So I declined to write the story.

And I'm so glad I did. The question was how much smaller devices on computer chips could get than a width of 0.35 micron -- already impossibly small, it seemed at the time. But breakthroughs in etching technology did happen, and the generation of chips now being developed will use 2-nanometer technology, a factor of 175 smaller.

Imagine if I'd become known as the guy who declared on the front page of the Wall Street Journal in 1996 that progress in chip technology had stopped. Because chips are two-dimensional, that 175-fold improvement in each dimension means today's chips can contain 175 times 175 as many devices on the same-sized semiconductor. That's an improvement by a factor of, oh, 30,625. 

There's been talk for the past few years that etching technology has, in fact, gone as far as physics will let it and that Moore's law is finally and truly dead. But the MIT Technology Review's annual look at the year's breakthroughs suggests chip technology will continue to improve at a furious pace, to the benefit of the insurance industry and our many customers.

I always enjoy the annual review because it stretches me beyond the sort of thing I typically think about. For instance, this year's list of the top 10 technologies covers: the James Webb telescope, which is peering more deeply into the universe than I ever thought would be possible; the ability to analyze ancient DNA to see what it tells us about our origins; and mass market drones, which are changing warfare at least as much as machine guns did in World War I and tanks did in World War II. 

This year, MIT highlights some developments that have the potential to revolutionize healthcare. The article talks about "organs on demand" -- initially by growing organs in pigs for transplant to humans, and over time ending the need for human donors. The piece also describes the use of CRISPR, the gene-editing tool, to reduce cholesterol. This comes on top of the recent approval of a use of CRISPR to cure sickle cell anemia, suggesting that all sorts of diseases could be cured -- not just treated. (Jennifer Doudna, who won a Nobel Prize for the development of CRISPR, describes the potential here.)

But the development that most directly affects insurance is the one that MIT Technology Review describes as "the chip that changes everything."

The key is that progress can continue at great speed even if the basic chip-making technology stalls. We've already seen the potential with AI. ChatGPT and other large language models are possible because specialized chips, initially designed for graphics for computer games, have turned out to be some 100 times as good at AI as general-purpose processors. In other words, with zero improvement in the underlying technology, we've still had a 100X improvement for a specialized, very important purpose.

MIT says that kind of improvement is now available to just about everybody because of a free, industry-standard tool that lets anyone easily design a chip for any purpose. Even if progress does finally slow on the basic technology for chips -- 30,625 times beyond where it was when I was encouraged to declare it dead -- we've only just begun optimizing the technology for the uses that matter to us.

Cheers,

Paul