What Insurers Will Learn About Trust... the Hard Way

Banks lost customers' trust one automated interaction at a time. Insurers are making the same mistakes. 

In 1979, Gallup asked Americans how much confidence they had in banks. Sixty percent said a great deal or quite a lot. Banks ranked second out of nine institutions — behind only the church.

Today that number is 26%.

The collapse didn't happen because of one crisis or one bad actor. It happened over 40-plus years, one automated interaction at a time. ATMs that replaced tellers. Interactive voice response systems that replaced those ATMs. Digital channels that replaced the IVR. And now AI-driven decisions replacing the digital channel that replaced the thing that replaced the person who used to know your name.

Each wave came with a business case. And each wave, when it touched the moments that actually matter to customers — a confusing charge, a decision that needed explanation, the thing that went wrong at the worst possible time — quietly withdrew a small deposit from an account that doesn't show up on any balance sheet.

That account is trust. And trust, it turns out, is an organizational capability problem — not a sentiment problem.

The Moment That Reveals Everything

Here's what I observed working inside a global bank during those automation waves: the technology worked. The process was faster. The costs came down. And customers were fine — until they weren't.

When something went wrong, people didn't want a faster process. They wanted a person who understood the situation, had the authority to act on it, and demonstrated that the institution they'd trusted actually cared what happened to them. What they got, too often, was a system designed for the average case, handling something that wasn't average at all.

What struck me wasn't the technology failure. It was the organizational failure underneath it. The leaders driving automation were making efficiency decisions. Nobody was accountable for the capability question: Does this organization know how to rebuild trust when the automated system fails a real person? The answer, in most cases, was no — because that capability had never been built. It had been assumed.

That pattern — confusing an efficiency decision for a capability decision and discovering the difference too late — is what eroded four decades of public confidence in banking. And it's the pattern insurers are now repeating.

This Is Now Insurers' Problem

Insurers are making the same bet banks made, in the same places banks made it.

Claims. Denials. Coverage decisions. Underwriting. These are not commodity interactions. They are, almost by definition, the moments when a policyholder is most vulnerable — a damaged home, a health crisis, a business interruption, a death. They are the moments that test whether the relationship the insurer sold is real.

The industry is automating them anyway. With AI systems that make faster decisions, with chatbots that handle first contact, with models that assess claims before a human ever sees them. The business case is real. The efficiency gains are real. The risk is also real — and it is being systematically underestimated.

Here's what gets missed in most of these conversations: The risk isn't primarily in the technology. It's in the organizational capability gaps the technology exposes. Does this organization have the judgment infrastructure to know when a claim needs a human? Does it have the change leadership — not change management, but genuine leadership capability — to ensure that the people still in the room when it matters are empowered to act? Can it tell the difference between a process that's working and a relationship that's quietly eroding?

Most organizations can't answer yes to all three. Not yet.

What Happens to the Humans Left in the Room

Here is the part the business case doesn't model: what automation does to the agents and claims professionals who remain.

When an organization systematically automates the high-stakes moments, it doesn't just remove humans from those interactions. It degrades the humans who stay. Authority gets stripped. Judgment gets overridden. The agent or adjuster who once had the latitude to assess a situation and act on it becomes an escalation path for complaints the system couldn't handle — without the context, the tools, or the organizational backing to actually resolve them.

This matters because the agent is still the face of the insurer when the policyholder calls. The claims handler is still the voice on the other end when the denial needs explaining.

The data on this dynamic in financial services is stark. An Eagle Hill Consulting survey of more than 500 U.S. financial services employees found that 62% say their organizations have prioritized improving the customer experience over the employee experience — yet those same employees report that their own work experience directly affects their ability to serve clients. Dissatisfied employees are more than three times as likely to report that their negative feelings about work reduce their willingness to help others.

Deloitte's research adds another dimension: When AI tools are introduced without careful design and change leadership, employees perceive their organizations as roughly half as empathetic and human. That dynamic doesn't stay inside the organization. It travels. Policyholders feel it.

For insurers that rely on independent agents — professionals whose loyalty is earned, not owned — the stakes are even higher. Think of independent agents as the community bankers of insurance: For decades, they've translated corporate rules into human terms, sitting across the table from policyholders at the moments that matter most. J.D. Power's independent agent satisfaction research consistently finds that scores are dramatically higher — by hundreds of points — when carriers make agents easier to work with: faster quotes, transparent claims status, access to a human on complex cases. When AI becomes a black box agents can't explain to a policyholder, that advantage reverses. An agent who can't get a straight answer on a claim denial, or can't reach a human on an exception, doesn't complain to the carrier. They quietly shift their next piece of business elsewhere. The trust problem isn't just with policyholders. It runs through the entire distribution chain.

The Balance Sheet Doesn't Show the Problem — Until It Does

What makes this dynamic particularly dangerous is that trust erosion is invisible on a quarterly basis.

The banking sector learned this the hard way in early 2023. When Silicon Valley Bank failed, uninsured deposits left the broader banking system at the fastest rate recorded since the FDIC began tracking data in 1984 — an 8.2% quarterly decline, industry-wide, in a single quarter. The FDIC noted that SVB's deposits were "remarkably quick to run" precisely because they were concentrated among depositors whose trust, once shaken, had no friction to slow it.

Insurers don't face bank runs. But they face their own version: policy non-renewals, lapse rates, coverage migration, claims disputes that become regulatory attention, and the slow erosion of the trusted advisor position that has historically made insurance a relationship business.

The erosion rarely announces itself. It accumulates in policyholder satisfaction scores that drift, in agent feedback that doesn't make it up the chain, in claims handling data that gets read as operational variance rather than relationship signal. By the time it's visible on the balance sheet, the capability gap that caused it has been open for years.

This Is a Capability Problem. Capability Can Be Built.

The research on AI deployment in financial services confirms what the banking experience suggests. McKinsey finds that AI high performers are more than 1.5 times as likely to have changed their standard operating procedures and talent practices — not just deployed tools. MIT CISR shows that firms stuck in the pilot stage financially underperform their industries, while those that have embedded AI into their operating models significantly outperform.

What those numbers describe, underneath the data, is an organizational capability gap. The high performers aren't distinguished by better technology. They're distinguished by having built the mindsets, the skillsets, and the operating conditions — the governance, the decision rights, the human judgment infrastructure — that allow them to absorb what the technology makes possible without losing what made them trustworthy.

That's the real lesson from banking. The institutions that automated their way into a trust deficit weren't led by people who didn't care about customers. They were led by people who treated trust as a communications challenge rather than a capability one. They managed it. They didn't build it.

Insurers now face a choice that banks didn't get to make deliberately. Insurers can design AI deployments that preserve human judgment at the moments that matter most. They can build the change leadership and workforce capability that determines whether AI enhances the relationship or quietly erodes it. They can treat trust not as a sentiment to be managed after the fact but as an organizational capability to be built before the moment of truth arrives.

Or they can assume their situation is different from banking.

Banks assumed that, too.


Amy Radin

Amy Radin is a strategic advisor, keynote speaker, and Columbia University lecturer focused on why transformation succeeds or stalls in large, complex organizations. 

Drawing on senior leadership roles at Citi, American Express, and AXA, including one of the world’s first corporate chief innovation officer roles, she helps leaders build the capabilities required to absorb, scale, and sustain change.
