The insurance industry is sitting on one of the most promising AI opportunities in all of finance — and it is almost entirely unable to act on it. Not because the technology isn’t ready, but because decades of messy, fragmented, siloed data have created a foundation that AI simply cannot build on. This is the quiet crisis at the heart of insurance’s AI ambitions, and it’s more instructive than any headline about chatbots or automation pilots.
A 2026 report from Autorek, drawn from surveys of 250 operations managers across the UK and US insurance sectors, puts hard numbers to a problem many inside the industry already sense. The gap between what firms expect AI to do and what they have actually managed to deploy is striking, and telling.
The Expectation Gap Is Enormous
Eighty-two percent of insurance firms surveyed believe AI will come to dominate their industry. That’s not a fringe view — it’s the overwhelming consensus across leadership. And yet only 14% of those same firms have fully integrated AI into their operations. Six percent report no AI use at all.
That’s not a technology problem. That’s a readiness problem. And the distinction matters enormously, because it changes what the solution actually looks like. You can’t buy your way out of this with a better AI vendor. The bottleneck sits upstream, in the data itself.
What “Fragmented Data” Actually Means in Practice
The firms surveyed managed an average of 17 separate data sources. Think about what that means operationally: 17 different systems, potentially with different formats, different update schedules, different ownership structures, and different levels of reliability. Now try to train an AI model on that — or ask it to make consistent, auditable decisions.
A useful analogy: imagine trying to navigate a city using 17 different maps, each drawn at a different scale, in a different language, and updated at different times. Even a brilliant navigator would struggle. AI faces the same structural problem when data estates are this fragmented.
The fragmentation gets significantly worse after mergers and acquisitions — a routine part of the insurance landscape. Every acquisition brings another set of legacy systems, another data vocabulary, another reconciliation headache. The data layer doesn’t merge just because two companies do.
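To make the problem concrete, here is a minimal sketch of the kind of mapping work that fragmentation forces on every integration. The two source schemas, field names, and date formats below are hypothetical, invented purely for illustration; real insurance estates involve far messier variants of the same exercise, repeated across each of those 17 sources.

```python
from datetime import datetime

# Hypothetical raw records from two of a firm's many source systems,
# each with its own field names, amount conventions, and date formats.
policy_admin = {"PolicyRef": "P-1001", "Premium": "1,250.00", "Start": "01/03/2024"}
claims_system = {"policy_id": "P-1001", "amount_gbp": 1250.0, "effective": "2024-03-01"}

def normalise_policy_admin(rec):
    """Map the (invented) policy-admin schema onto a canonical record."""
    return {
        "policy_id": rec["PolicyRef"],
        "amount": float(rec["Premium"].replace(",", "")),
        "date": datetime.strptime(rec["Start"], "%d/%m/%Y").date(),
    }

def normalise_claims(rec):
    """Map the (invented) claims-system schema onto the same canonical record."""
    return {
        "policy_id": rec["policy_id"],
        "amount": float(rec["amount_gbp"]),
        "date": datetime.strptime(rec["effective"], "%Y-%m-%d").date(),
    }

# Only after this translation layer exists can the two systems be
# compared, joined, or fed consistently into a model.
canonical = [normalise_policy_admin(policy_admin), normalise_claims(claims_system)]
```

Every acquisition adds more of these translation functions, and each one is a place where meaning can silently drift. That maintenance burden, multiplied across sources, is what "fragmented data" costs in practice.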
Why Reconciliation Is the Right Place to Start
The Autorek report makes a practical recommendation that deserves attention: start with reconciliation processes. On the surface, this sounds like inside baseball. But it’s actually a strategically smart entry point for AI.
Reconciliation — matching transactions, identifying discrepancies, closing the books — is rules-based, bounded, and measurable. You can define success clearly. You can track error rates before and after. That makes it an ideal proving ground, the kind of contained environment where AI can demonstrate value without requiring the entire data infrastructure to be sorted first.
It’s the equivalent of testing a new engine in a controlled environment before putting it in a full vehicle. The learnings are real, the risks are contained, and the wins are visible to leadership.
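The core of reconciliation really is this bounded: match references across two feeds, flag amount mismatches, and surface entries that appear on only one side. A minimal sketch, using invented transaction references and simplified single-amount feeds, shows why success is so easy to measure here:

```python
from decimal import Decimal

# Hypothetical simplified feeds: internal ledger entries vs. bank
# statement lines, keyed by a shared transaction reference.
ledger = {"TXN-1": Decimal("500.00"), "TXN-2": Decimal("75.50"), "TXN-3": Decimal("120.00")}
bank = {"TXN-1": Decimal("500.00"), "TXN-2": Decimal("75.05"), "TXN-4": Decimal("20.00")}

def reconcile(ledger, bank):
    """Bucket every entry: matched, amount mismatch, or present on one side only."""
    matched, mismatched = [], []
    for ref in ledger.keys() & bank.keys():
        (matched if ledger[ref] == bank[ref] else mismatched).append(ref)
    return {
        "matched": sorted(matched),
        "mismatched": sorted(mismatched),        # e.g. keying errors, fees
        "ledger_only": sorted(ledger.keys() - bank.keys()),
        "bank_only": sorted(bank.keys() - ledger.keys()),
    }

result = reconcile(ledger, bank)
# Every bucket is countable, so error rates before and after any
# automation effort can be compared directly.
```

Because every output lands in a countable bucket, a firm can baseline its discrepancy rate today and measure exactly what an AI-assisted process changes, which is precisely what makes reconciliation a defensible first deployment.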
The Structural Problem That Won’t Go Away on Its Own
Here’s what makes this situation particularly difficult: awareness of the problem is widespread. The report notes that these findings echo previous publications, and that managers broadly understand the issues. The bottleneck isn’t knowledge — it’s the cost and complexity of fixing legacy infrastructure while simultaneously keeping operations running.
Transaction volumes in the sector are projected to rise roughly 29% over the next two years. Operating costs are expected to rise in lockstep. That’s the dangerous trajectory: volume goes up, manual processes can’t keep pace, errors multiply, and costs climb. AI is supposed to break that cycle — but it can’t if it’s sitting on a fractured data layer.
Traditional robotic process automation (RPA) has already hit this wall. RPA works well when data is clean and processes are consistent; when the data is fragmented, it becomes expensive to maintain and limited in what it can actually resolve. AI can, in principle, handle messier inputs, but even AI has a threshold below which data quality becomes prohibitive.
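One practical way firms operationalise such a threshold is a simple data-quality gate in front of any automated process. The sketch below is illustrative only: the batch, the required fields, and the 0.9 cutoff are all invented assumptions, not figures from the report.

```python
def completeness(records, required_fields):
    """Fraction of required fields that are populated across a batch of records."""
    total = len(records) * len(required_fields)
    filled = sum(
        1 for rec in records for field in required_fields
        if rec.get(field) not in (None, "")
    )
    return filled / total if total else 0.0

# Hypothetical batch pulled from a legacy system; one record is missing an amount.
batch = [
    {"policy_id": "P-1", "amount": 100.0, "date": "2024-01-05"},
    {"policy_id": "P-2", "amount": None, "date": "2024-01-06"},
]

score = completeness(batch, ["policy_id", "amount", "date"])
# The 0.9 threshold is an illustrative policy choice, not an industry standard.
route = "automate" if score >= 0.9 else "manual_review"
```

The point is not the specific cutoff but the discipline: measuring input quality before automation runs is what keeps fragmented data from quietly degrading automated decisions.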
Key Figures: Insurance AI Readiness at a Glance
| Metric | Current Status |
|---|---|
| Firms expecting AI to dominate the industry | 82% |
| Firms with fully integrated AI operations | 14% |
| Firms reporting zero AI use | 6% |
| Average number of data sources managed per firm | 17 |
| Projected transaction volume increase (next 2 years) | ~29% |
| Primary AI barriers identified | Legacy systems, fragmented data, limited internal expertise |
| Recommended AI entry point | Reconciliation processes |
Cloud Platforms as a Practical Bridge
The report points toward cloud-based AI platforms — rather than in-house builds — as a practical path for firms struggling with data fragmentation. The reasoning is sound: cloud platforms can more flexibly connect disparate data sources, apply standardisation layers, and scale without requiring firms to rebuild their entire infrastructure first.
This isn’t a silver bullet. But it’s a pragmatic middle path between “wait until we’ve fixed everything” (which never happens) and “deploy AI on top of broken foundations” (which fails expensively). The firms that move on this will widen the performance gap over those that stay still — and in insurance, operational efficiency compounds over time.
What This Signals for the Next 12–24 Months
What’s happening in insurance is a preview of what most traditional industries will face as AI matures. The technology is outpacing institutional readiness. The firms that close the gap won’t necessarily be the ones with the most sophisticated AI — they’ll be the ones that invested earliest in data governance, standardisation, and infrastructure modernisation.
Expect to see a meaningful divergence in the insurance sector over the next two years. Firms that treat data infrastructure as a strategic priority — not an IT project — will begin to show measurably lower operating costs, faster settlement cycles, and fewer reconciliation errors. Those that continue deferring will find the gap increasingly expensive to close, especially as transaction volumes climb.
The AI race in insurance, it turns out, will be won in the data layer long before it’s won with any particular model or platform.
If you found this analysis useful, I’d encourage you to explore our related coverage on AI in financial services and enterprise automation — the patterns emerging in insurance are appearing across banking, logistics, and healthcare with striking consistency. The data problem is universal. The solutions are just beginning to take shape.