How AI Can Transform Workers' Comp

AI can slash processing times in half, Wisedocs CEO Connor Atchison says, but humans must stay involved to build trust.


Paul Carroll

At ITL, we've been encouraging the insurance industry to move to a Predict & Prevent model and away from the traditional repair-and-replace approach. Workers' compensation has been a poster child, with organizations making remarkable strides in reducing workplace injuries. But there's significant complexity below the surface. What are the key challenges around volumes, documentation, staff shortages, and legacy systems?

Connor Atchison

I think you summed it up right there. It's the culmination of all of these things over decades that are making things slower and more cumbersome. We have gaps in knowledge as we strive for better care outcomes—to get that worker back to work and make sure we're spending the right amount of money on the right treatment to make that happen.

There are definitely issues around legacy systems. Workers' comp, even more than other insurance lines, is still a little bit behind. But they're catching up and adapting, and they're seeing the need, which is great.

The volumes are certainly high. When I was doing adjustment work in health administration while serving in the Canadian Armed Forces, I'd be sitting there highlighting and tabbing documents. You can't scale that, and that's why we need more technical innovation.

Then there's the knowledge loss. How can we retain the knowledge from someone who's been in the industry for 30 years and is retiring? Younger generations aren’t filling the gap, so technology needs to help retain that knowledge.

We're also seeing a lot of changes at the state and federal legislative level. Regulators are getting more stringent on costs, more stringent on audits, and more focused on understanding where the money's going.

You put all those together, and it's almost like a perfect storm. Technology is definitely needed right now, more than ever.

Paul Carroll

You recently commissioned a survey of claims professionals with PropertyCasualty360, "AI in claims and the 4x trust effect of human oversight," focusing on AI's rising role in claims, and you wrote an article for us about how important human oversight is for generating trust. While I've certainly heard lots about the importance of a human-in-the-loop, your survey still surprised me.

Connor Atchison

Yes, we found that human oversight increased trust in AI by up to 4X. People are concerned about quality, accuracy, outcomes, and liability if human oversight isn't there. Once you have an expert-in-the-loop, you build trust for the end user. They can see why the machine learning is processing the document the way it is, and they know it has been trained on industry domain knowledge and more than 100 million claims documents. That allows them to say, "This makes sense. I can understand it, and I can make my own inferences."

The other thing that really stood out to us was that 75% of the claims professionals we surveyed said they believe AI can improve efficiency through better speed and resource optimization. So on one hand, they’re clearly seeing the upside. But at the same time, there’s this capability–trust gap: they know AI can help them, they’re just not fully comfortable trusting it on its own yet.

What they do trust is themselves—and human oversight. When an expert is in the loop, they feel confident that the AI is being guided, validated, and grounded in real industry knowledge. That's why the combination matters so much. As our Head of Machine Learning always says, we use AI for scale and humans for accuracy. That pairing is what closes the gap and ultimately builds trust.

Paul Carroll

In insurance, in general, and workers' comp, in particular, there are numerous silos—employers, bench adjusters, nurse case managers, medical examiners, treating physicians, vocational rehab specialists, and legal counsel. How does AI help resolve the silo problem in workers' comp?

Connor Atchison

I don't believe it's a silver bullet. There's no point solution that can do it all. But AI can plug into your day-to-day workflows, take a process of 15 or 20 steps, and shrink it to half or maybe a third of that.

No one wants to sit there for eight hours going through a 2,000-page document. It just doesn't make sense to do this clerical work anymore. It makes sense to elevate the human so they can spend their time analyzing and making expert decisions.

Paul Carroll

You said at a recent conference with the Division of Workers’ Compensation in California that AI can cut claims processing time in half, making claims documentation platforms essential. But speed raises the stakes—how do you ensure compliance and trust keep pace with automation?

Connor Atchison

I think there are two answers to that question. First is the human in the loop, which we’ve already talked about. Beyond that, it's about building really good technology.

Going back to what I said about point solutions: Just putting data into an LLM [large language model] or SLM [small language model] or a foundational model isn't going to give you a result that helps the injured worker or the adjuster. It's going to give you something very high-level. When you go deeper and build all the different configurations around that data and the workflows, that's where you actually start getting leverage.

When you go to the next step, you have to look at the compliance standpoint. You have to know, and be able to demonstrate: Why is the model providing that information the way it has, based on the data it's been trained on?

I remember talking with the Honorable Judge Rassp from California about this. A judge doesn’t want a black box. They need to know objectively why something happened. If you can't objectively define where you've gotten the information, you run into problems. 

That's why point solutions are not the definitive answer. It's about building the entire workflow—creating transparency, understanding what legislation means, and why you have to follow different audit guidelines, time periods, or reviews.
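To make the "not a black box" point concrete, here is a minimal sketch in which each extracted fact carries provenance: the document, page, and model version it came from, so a reviewer or a judge can trace every statement back to its source. The structure, field names, and values below are illustrative assumptions, not Wisedocs' actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Finding:
    claim_id: str
    fact: str              # the extracted statement, e.g., "MRI ordered for lumbar strain"
    source_document: str   # which file in the claim record the fact came from
    source_page: int       # page within that file, so a reviewer can check it
    model_version: str     # which model produced it, for audit trails

# Hypothetical findings attached to a single claim.
findings = [
    Finding("WC-001", "MRI ordered for lumbar strain", "ime_report.pdf", 14, "extractor-v2.3"),
    Finding("WC-001", "No lifting over 10 lbs on return to work", "clinic_note.pdf", 3, "extractor-v2.3"),
]

# A reviewer, auditor, or judge can trace each statement back to its source page.
print(json.dumps([asdict(f) for f in findings], indent=2))
```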

Paul Carroll

Large language models started as generalists, ingesting all available information. Now it's possible to train AI on specialized information to produce vertical AI, providing more precise results in complex fields like workers' comp. What does this verticalization involve?

Connor Atchison

We don't have to use an LLM to do everything. We can find models that train on extractive data and give a higher confidence score and better outcomes. If all you have is a hammer, everything looks like a nail. But sometimes there's an easier, more cost-efficient way.

When you train in a very vertical way, you have to start looking at what the governments and states are doing. We're starting to see regulated industries needing on-prem deployments more and more: their own models, trained on their own data, to understand outcomes more effectively. There are a lot of reasons for that—compliance, security, risk—but there's a huge opportunity here if you have the data and know how to orchestrate it the right way and build it with the right outcomes in mind.

These organizations can actually accelerate, but I don't think they can do it with generalist models. If you don't fine-tune your technology and don't understand what it's trying to do, you're just going to have garbage coming out. Garbage in, garbage out.
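As a toy illustration of the "hammer and nail" point above: for a narrow, well-defined field, a small extractive pass with an explicit confidence score can be cheaper and easier to audit than sending the whole document to a general-purpose LLM. The pattern and scoring rule below are hypothetical, not drawn from any particular product.

```python
import re
from typing import Optional, Tuple

# One narrow, auditable extraction rule instead of a general-purpose model call.
CLAIM_NO = re.compile(r"claim\s*(?:no\.?|number|#)\s*[:\-]?\s*([A-Z0-9\-]{6,})", re.IGNORECASE)

def extract_claim_number(text: str) -> Tuple[Optional[str], float]:
    """Return (value, confidence). Conflicting matches lower the confidence."""
    matches = {m.group(1) for m in CLAIM_NO.finditer(text)}
    if not matches:
        return None, 0.0
    if len(matches) == 1:
        return matches.pop(), 0.95   # one consistent value across the document
    return sorted(matches)[0], 0.5   # conflicting values: worth a human look

page_text = "RE: Claim No: WC-2024-018342. Follow-up visit scheduled for June."
print(extract_claim_number(page_text))
```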

Paul Carroll 

What does the feedback loop look like for humans to refine AI over time when it doesn't get something quite right?

Connor Atchison

Without getting technical, you can think of the top, middle and bottom of a workflow. You ingest information at the top and spit out a result at the bottom. The question is: Where do you want the human to come in?

That really depends on what you're trying to build and on the complexity of that data. At the ingestion point, for example, is the data curated? Is it clean? If you have chicken scratch from a doctor and can’t read it, the machine can’t read it, either. 

Maybe it's in the middle where you want things fine-tuned. If your models are confident and you understand the scoring—the actual mathematics—then you can have a human review at the bottom.

I think you need a combination of all of them.
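A minimal sketch of what human review "at the bottom" of such a workflow might look like: results above a confidence threshold flow through automatically, while low-confidence results are queued for an expert. The threshold, field names, and routing function are assumptions for illustration, not Wisedocs' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Extraction:
    field: str          # e.g., "injury_date"
    value: str
    confidence: float   # model-reported score between 0 and 1

REVIEW_THRESHOLD = 0.85  # illustrative cutoff; below this a person checks the result

def route(extractions: List[Extraction]) -> Dict[str, List[Extraction]]:
    """Split model output into auto-accepted fields and a human-review queue."""
    routed: Dict[str, List[Extraction]] = {"auto_accepted": [], "human_review_queue": []}
    for e in extractions:
        key = "auto_accepted" if e.confidence >= REVIEW_THRESHOLD else "human_review_queue"
        routed[key].append(e)
    return routed

# One legible field, and one "chicken scratch" field the model is unsure about.
sample = [
    Extraction("injury_date", "2024-03-12", 0.97),
    Extraction("treating_physician", "illegible", 0.41),
]
for bucket, items in route(sample).items():
    print(bucket, [f"{i.field}={i.value}" for i in items])
```

In practice the threshold would be tuned per field and jurisdiction, and the reviewed corrections fed back to retrain the models, closing the feedback loop Atchison describes.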

Paul Carroll

Given the unique dynamics of workers' compensation—from complex injury types to vocational factors and return-to-work timelines—how do you see AI reshaping the medical record review process? And what are the key operational or regulatory hurdles insurers and TPAs must overcome to realize those efficiency gains?

Connor Atchison

The way I see it, you need a platform, not a point solution. You can’t just focus on workflows and configurations. You have to understand your data and use those unique datasets to get cross-case analysis and different inferences on how current injuries are being treated. Are they being efficiently treated? Is there waste? Are there better treatment outcomes? When you can surface all of that—and there are so many states and organizations sitting on this information but not really getting it surfaced—that could help immensely.

To really stay in front, you need to always see where things are moving, jurisdiction by jurisdiction at the state level and also at the federal level. Texas has new laws coming into effect. Louisiana just passed a few bills. And so on. You don't want to build something and then realize, "Whoa, this doesn't work," and have to retool your solution stack.

I think this is where users and buyers are getting fatigued. It's so noisy out there, and only a handful of companies really know what they’re doing. Everyone's got an AI solution with an LLM. 

Wisedocs put together a buyer's guide to evaluating AI-powered claims documentation platforms to help people ask the right questions. Some issues are really important to understand, such as security and compliance, and whether you should build or buy. You need to get this right the first time, not the second or third time.

Paul Carroll

Returning to the recent survey of claims professionals you commissioned with PropertyCasualty360: it found that 75% of respondents believe AI can improve efficiency in claims, yet 58% aren't using it. As we wind up here, what's your vision for where the industry could be in two to three years, in terms of both technology and adoption?

Connor Atchison

I've seen studies showing that anywhere from 60% to 90% of AI initiatives are still in pilots and proofs of concept. The adoption isn't there. AI is great, but it's not connecting with the actual business need.

If I buy a Bugatti, I don't want it to drive like a golf cart. I think many people have been disappointed because they haven't fully understood their needs or the parameters of how to use AI. There are different needs for every business and use case.

But this is a really exciting time. Success is all going to come down to data. Understanding your data and then building around that—that's what's going to win, and that's why I think there will be organizations that are the haves and others that are the have-nots. 

Paul Carroll

Thanks, Connor. 

About Connor Atchison


Connor Atchison is the founder and CEO of Wisedocs, a platform for reviewing medical records.

Atchison is an experienced founder with a history in health services, information technology and management consulting. He is a veteran, with 12 years of military service under the Department of National Defence.


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.
