There is a version of the AI-in-workers’-comp conversation that never gets off the ground. It sounds like this: “Our people aren’t going to use it.” Or: “They’re worried about their jobs.” Or the version that tends to end strategy sessions early: “We’ll deal with the people side once the technology is in place.”
That last one is where most AI implementations go wrong. Whether a technology project succeeds comes down to the people question far more than the platform question. And in workers’ comp, where outcomes depend on judgment, relationships, and institutional knowledge that can’t be automated, getting the workforce transition right is an operational priority, not a secondary concern to revisit after go-live.
The Evolution of Roles: What Changes
The jobs don’t disappear. The shape of the work changes. And when that transition is managed well, those jobs become more rewarding, not less, and demanding in the ways that matter.
Claims adjusters spend less time on administrative processing and more time on the complex cases where human judgment counts. AI handles the intake extraction, the fee schedule validation, the documentation routing. What’s left for the adjuster is harder and more meaningful: the injured worker who needs real communication, the claim that doesn’t fit the model, the return-to-work situation that requires negotiation and relationship. The administrative burden that drives burnout shrinks. The work that drew most people to this profession expands.
Underwriters shift from data gathering to data interpretation. When AI can synthesize risk signals from thousands of data points in seconds, compiling the picture matters less than knowing how to read it. Pattern recognition at scale becomes more important than manual analysis. Domain expertise becomes the differentiator.
Customer service roles move toward exception handling and relationship management. Routine status inquiries and documentation requests get handled by AI systems around the clock. The human role becomes the escalation point, the relationship anchor, the voice that shows up when the situation is too complex or too sensitive for automation to manage. It is a genuinely more skilled position than the one it replaces.
Across all three functions, the direction is the same: AI absorbs the transactional work, humans own the consequential work. The transition requires honest acknowledgment of that shift, not vague reassurances that “everyone’s jobs are safe.”
The Skills That Will Matter Most
When AI handles the routine, the premium on certain human capabilities rises. The skills that have always separated the best performers in workers’ comp become the baseline expectations for everyone.
Critical judgment under uncertainty. AI can surface risk signals and flag outliers, but deciding what to do about them remains entirely human work. The ability to weigh incomplete information, apply contextual knowledge, and make a defensible decision is not something a model replicates.
Communication and relationship management. Injured workers are often scared, in pain, and navigating a process they don’t understand. Employers are worried about costs, liability, and getting people back to work. The ability to hold a productive conversation with both, simultaneously, is something worth developing, measuring, and rewarding. No AI is going to do it.
Data literacy. The ability to read AI-generated outputs with appropriate skepticism, to understand what a model’s recommendation is based on, and to know when to override it is a core competency for every role in an AI-enabled claims operation, not just the technical staff.
Workflow design thinking. The people closest to the work are the ones who know which processes no longer make sense. Organizations that develop this capacity internally, asking what AI should own, what humans should own, and where the handoffs belong, make better implementation decisions and depend less on vendors to figure it out for them.
Building Training Programs That Deliver
Most AI training programs make the same mistake: they focus on how to use the tool rather than how the work has changed. A two-hour software tutorial is not a workforce transition. It reads as a checkbox that creates the appearance of preparation without the substance.
Effective training starts before the technology goes live, with honest conversations about why the change is happening, what it means for specific roles, and what will be expected of people on the other side. It continues after go-live, with structured coaching that treats adoption as a learning process rather than a compliance requirement.
The most effective programs I’ve seen in this industry share a few characteristics:
- They involve the people doing the work in the design. Staff who help shape the training understand what their colleagues need. They surface concerns that wouldn’t show up in a top-down design process and become advocates rather than skeptics.
- They are role specific. An underwriter and a nurse case manager are not learning the same things about AI. Combining them in a single session produces training that’s relevant to neither.
- They treat skepticism as information. When an experienced adjuster says the AI recommendation doesn’t make sense for a particular claim, that’s worth exploring. Either the model has a gap, or the adjuster is applying an assumption that doesn’t hold. Both are useful to know.
- They build in structured feedback loops. The goal of early adoption is not perfect usage. It is learning what works and what doesn’t in the live environment. Programs that capture that feedback and act on it improve faster than those that treat go-live as the finish line.
Recruiting for an AI-Enabled Organization
The talent you need to succeed with AI combines workers’ comp domain expertise, comfort with ambiguity, and a genuine willingness to work alongside tools that will continue to evolve. That’s a different profile than technical aptitude alone.
Think less about “person who is good with computers” and more about someone who can operate effectively when the process they learned six months ago has been updated and they need to adapt without losing stride.
Recruiting strategies that reflect this look for evidence of adaptability, not just technical familiarity. They assess how candidates have responded to change in previous roles. They create realistic previews of what the job involves, including the AI tools and how they’re used, rather than describing a traditional role and hoping new hires adjust.
Increasingly, organizations that are intentional about this are building pipelines from adjacent fields where data literacy is a baseline expectation. They’re partnering with universities and certification programs that train for the AI-enabled workplace and being honest in job descriptions about the skills that matter now versus the ones the role will require in three years.
Overcoming Resistance: Communication Strategies That Work
Resistance to AI is reasonable. It comes from incomplete information, past experiences with technology implementations that didn’t deliver, and legitimate concerns about job security that leaders sometimes deflect rather than address. Treating it as an obstacle to manage rather than a signal to understand is how organizations stall.
The communication strategies that work are built on specificity. “AI will handle first-notice-of-loss extraction” is a message people can orient to. “AI is going to transform how we work” is not. Specificity reduces anxiety because it gives people something real to react to, rather than a vague threat to project their fears onto.
Being honest about the transition costs matters too. There will be a period where using new tools is slower than the old way. That’s normal and worth naming. Organizations that pretend the adoption curve doesn’t exist lose credibility with their teams when it shows up anyway.
And showing results beats describing them. When teams across the operation see that AI genuinely reduces the administrative overhead that has been grinding them down for years, resistance drops faster than any change management program could achieve on its own. The most effective advocates tend to be the people who got three hours back in their week and used them to focus on the work that actually matters.
What AI Cannot Replace
This is worth saying plainly, because the industry’s most important outcomes depend on it.
An injured worker who feels heard moves toward recovery differently than one who feels processed. A return-to-work conversation that accounts for the real complexity of someone’s home situation, their relationship with their employer, and their fear about whether they’ll be able to do the job after recovery requires human presence and human judgment that AI can inform but never replace.
The same is true for the trust that makes an employer relationship work over time, the advocacy that gets a complicated claim to a fair resolution, and the institutional knowledge that an experienced professional carries about how certain types of injuries tend to develop, which treatment patterns tend to lead to better outcomes, and which early signals predict litigation risk before the model has enough data to flag it.
These capabilities are not being automated. They are being concentrated in the people who have developed them. Organizations that invest in those people will outperform those that treat workforce transition as a headcount optimization exercise.
Career Pathing in an AI-Enabled Organization
The career paths that made sense in a traditional claims operation look different when AI is embedded in the workflow. Some of the progression logic that has defined advancement in this industry (becoming more efficient at processing, managing larger caseloads, handling more routine work faster) becomes less meaningful when AI handles the processing.
Advancement in an AI-enabled operation tracks more closely to judgment, specialization, and the ability to manage increasingly complex situations. The adjuster who develops genuine expertise in a specific injury type or industry segment becomes more valuable as AI handles the generalist work. The team lead who can coach people through the human side of the work, including relationship management and communication under pressure, fills a role that no automation creates.
Organizations that think explicitly about what advancement looks like in this environment attract and retain better talent. People want to see a path forward. When the only visible trajectory leads toward roles AI is likely to absorb, the best people look elsewhere. When it leads toward roles that become more valuable as AI becomes more capable, the incentives align and retention follows.
Building the Culture That Makes This Work
The organizations that navigate AI adoption well share something beyond good technology and good training programs: a culture that treats continuous learning as the work itself.
That culture doesn’t come from an annual training requirement. It comes from leadership that demonstrates genuine curiosity about what’s changing, asks operational staff what’s working and what isn’t, and treats a process improvement as something to build on rather than an implicit critique of how things were done before.
It comes from team structures that give people time to engage with new tools intentionally rather than just trying to use them faster, and from performance frameworks that reward the skills that matter in an AI-enabled operation, not just the metrics that made sense in the previous one.
And it requires honesty about the fact that this is genuinely new territory. The workers’ comp industry is navigating a workforce transition without a complete roadmap. The organizations best positioned to get through it are the ones willing to build the learning culture that lets them adapt as the map gets filled in, rather than projecting certainty they don’t have.
True partners with carriers, TPAs, captives, MGAs, and self-insured groups navigating every phase of this transition. If your organization is figuring out how to build the workforce that gets the most from its AI investment, reach out to Ryan Smith, Senior Solutions Advisor at True, to talk through what that looks like in practice.