Why the future of insurance still needs human judgement
Insurance has always been a world of judgement, expertise and human connection. Now, AI is pushing into that world in ways that are hard to ignore, and the technology is undeniably impressive. Some in the industry even speculate that AI may soon replace insurance agents entirely. But how far has adoption really come, and what does the reality look like from a broker’s vantage point?
For some time now, AI has been making inroads across the industry, but mostly in ways that complement, rather than replace, human expertise. “AI has been taken up by many carriers in some form, whether in assessing underwriting submissions, supporting risk surveys or data analysis,” says Dominic Quick, Deputy CEO at QRG.
Much of what passes for AI in the industry is still legacy automation: tools that handle routine tasks, accelerate data processing and help underwriters work more efficiently. The newer generation of AI, capable of interpreting nuance, learning from unstructured data or assisting with complex decision-making, is emerging fast, but mostly in pilot programmes and early trials.
“Research in the US shows that around three-quarters of insurers have already implemented generative AI in at least one business function,” says Quick. “It’s a strong sign that the market sees potential but is cautious about replacing human judgement too quickly.”
The caution is well-founded: humans are still better at navigating complex scenarios. As Quick points out, experienced underwriters know how to probe for warning signs, ask the right questions and uncover nuances that machines simply cannot detect.
When risk gets complex
For QRG, this is especially true in high-stakes specialist sectors such as marine, energy and cyber, where decisions depend on intuition, experience and a deep understanding of complex, sometimes unpredictable risks. “It is hard to imagine a time when all of this specialism could be replaced by a machine,” notes QRG COO Lance Grant.
“Accuracy is critical, and AI can’t always guarantee it,” warns Quick. “These tools can be useful in areas like claims, but when it comes to risk assessment and acceptance, you need to exercise caution, especially for complex or specialist risks.”
New technology, new exposures
Beyond underwriting, AI introduces a whole new set of risks. For Grant, cybersecurity is a particular concern: “Frontline AI systems themselves can be attack vectors, with breaches potentially affecting underwriting quality and decision-making.”
There are also legal and regulatory implications. AI missteps can put personal data at risk and breach privacy laws, while mistakes or omissions by AI – such as mispriced risks or faulty advice – can trigger errors and omissions claims, forcing insurers to extend coverage for these new exposures and driving up premiums.
“Incorrect advice and AI ‘hallucinations’ can arrive dressed up as reliable-sounding information: for marine risks, this could mean incorrect shipping routes; for energy, misread infrastructure risks; and for cyber, something as simple as faulty vulnerability assessments,” Grant warns.
The human edge
So what’s the way forward? For Grant, collaboration and careful oversight are key. Initiatives like the Lloyd’s Lab – a collaborative innovation hub where insurers, technologists and academics explore new ideas for the market – offer a space to test AI solutions, pinpoint where they add value and flag potential errors before implementation.
Keeping the ‘human in the loop’ is essential at every stage. “We need risk officers to play a leading role in defining how far we augment, rather than replace, human judgement, and in ensuring that approach aligns with the market’s overall risk appetite,” Grant says.
In the end, nothing replaces human insight. Bold claims about AI agents may grab headlines, but when it comes to the high-stakes decisions that define specialist insurance, the real edge still starts – and stays – with people.