Fraud influences everything at insurance companies, from operational costs to customer trust. And it's especially concerning today because AI makes committing fraud easier than ever.

What can insurers do to keep pace with increasingly sophisticated fraudsters who exploit digital tools and AI technologies? The short answer: insurers need to evolve their defenses by adding these same technologies – including agentic AI – to their own toolboxes.

AI agents can automate complex analysis and rapidly adapt to new threats. In turn, they can transform how insurers detect and manage fraud.

According to the Coalition Against Insurance Fraud:

  • Insurance fraud steals at least $308.6 billion every year from American consumers.
  • Fraud occurs in about 10% of property-casualty insurance losses.


What is insurance fraud?

Insurance fraud comes in many forms: claims fraud, application fraud, health care fraud and even internal fraud. Claims fraud is the most prevalent, particularly for property and casualty (P&C) insurance. For P&C insurers, high policy volumes and relatively small claim amounts create fertile ground for fraudulent activity.

Application fraud – where applicants hide relevant information to secure coverage – is most common in health and life insurance. Health care fraud is particularly problematic, with the National Health Care Anti-Fraud Association (NHCAA) estimating financial losses in the tens of billions of dollars annually. Some government and law enforcement agencies place the losses as high as 10% of the nation's annual health outlay, which could exceed $300 billion.

But ultimately, the most significant insurance fraud is tied to the claims process.

Managing insurance claims fraud

Insurers know fraud cannot be entirely prevented; they accept a certain level of fraudulent activity as inevitable and price it into their products. Their goal is to strike the right balance – be vigilant enough to catch the most significant fraud but not so strict that honest customers feel mistrusted or alienated.

Excessive fraud, of course, is bad for the business and for customers. It leads to higher payouts, increased reserves and ultimately higher premiums. All this can make insurers less competitive. It could even push away good customers – and, in turn, degrade the overall risk pool.

The rising tide of sophisticated fraud

As fraud techniques become more sophisticated, spurred by the rise of generative AI (GenAI), fraudsters have new ways to operate. It's now far easier, for example, to manipulate photos, fabricate claim documentation or construct synthetic identities.

Advances in AI technology complicate fraud detection and require insurers to constantly stay one step ahead of the fraudsters.

DB Insurance was catching individual fraudsters using manual methods. But with 10 million customers generating millions of claims, the patterns existed on a scale too vast for human investigators to discern. See how AI and network analytics transformed fraud detection at DB Insurance.


Agentic AI: Automating claims triage and investigations

Traditional fraud detection relies heavily on databases and manual network analysis. Today, many insurers incorporate AI-driven solutions that automate image and text analysis, advanced network analysis and claims classification. Agentic AI brings a new level of automation and intelligence to these same processes.

In day-to-day processes, AI agents can automatically analyze text and images in submitted claims, uncover networked schemes involving multiple parties, request additional information if needed, and score claims based on risk factors. Straightforward claims can be approved quickly, while suspicious ones are escalated for expert review.

Automation not only speeds up the claims process for honest customers; it also frees human investigators to focus their expertise on the most complex or high-risk cases.

Perhaps the most powerful aspect of AI is its ability to learn from new fraud patterns. When fraudsters develop unexpected tactics – such as using synthetic identities or exploiting emerging technologies – AI systems can adapt by recognizing new anomalies and updating detection models.

Four examples of how AI agents can automate the insurance claims process

1. Automated requests

If information in a claim is missing or unclear, AI can automatically identify the gaps and trigger requests for additional documentation or clarification.
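A step like this can be sketched in a few lines. The following is a minimal, illustrative example only – the field names, schema and message wording are assumptions, not any insurer's actual system:

```python
# Hypothetical sketch: an agent step that checks a submitted claim for
# missing fields and drafts follow-up requests to the customer.
# Field names and request wording are illustrative assumptions.

REQUIRED_FIELDS = {
    "policy_number": "your policy number",
    "incident_date": "the date of the incident",
    "damage_photos": "photos of the damage",
    "repair_estimate": "a repair estimate from your garage",
}

def draft_information_requests(claim: dict) -> list[str]:
    """Return a customer-facing request for each missing or empty field."""
    requests = []
    for field, description in REQUIRED_FIELDS.items():
        if not claim.get(field):  # missing or empty
            requests.append(f"Please provide {description}.")
    return requests

claim = {"policy_number": "PN-12345", "incident_date": "2025-03-02"}
for msg in draft_information_requests(claim):
    print(msg)
```

In practice, an AI agent would layer natural-language understanding on top of such checks – for instance, deciding that a blurry photo counts as "unclear" – but the gap-then-request loop is the same.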

2. Text and image analysis

AI agents can automatically analyze text and images submitted with a claim, flagging inconsistencies or anomalies that may indicate fraud. For example, imagine a claim with an image of broken glasses. What if this image had been published in the news or social media previously, or used in other fraudulent claims submissions?

An agentic AI system could automatically analyze the image, compare it against multiple databases of known images (collected over a period of many years), and flag duplicates or manipulated photos.
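One common technique for spotting reused images is perceptual hashing. The sketch below uses a simple "average hash" on tiny grayscale matrices so it stays self-contained; a production system would decode real image files and search large indexed databases, and the data here is invented for illustration:

```python
# Illustrative duplicate-image detection via an "average hash", one simple
# perceptual-hashing technique. Images are tiny grayscale pixel matrices
# here to keep the example self-contained; real systems hash full photos.

def average_hash(pixels: list[list[int]]) -> int:
    """Hash a grayscale image: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_like_known_image(candidate, known_hashes, max_distance=2):
    """Flag a claim photo that is near-identical to a previously seen image."""
    h = average_hash(candidate)
    return any(hamming(h, k) <= max_distance for k in known_hashes)

# A previously seen photo of broken glasses, and a lightly altered resubmission.
original = [[200, 190, 40], [30, 210, 35], [220, 25, 205]]
resubmitted = [[198, 192, 42], [28, 208, 33], [221, 27, 204]]

known = {average_hash(original)}
print(looks_like_known_image(resubmitted, known))  # True: likely a duplicate
```

Because perceptual hashes change little under small edits, the altered copy still lands within the distance threshold and gets flagged for review.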

3. Network analysis

By examining connections between claims, policyholders and third parties, AI can rapidly identify suspicious patterns – such as repeated involvement of the same entities across multiple claims. In auto insurance, for example, fraud often involves networks of parties working together to submit false claims – garages, dealerships and policyholders.

Using advanced network analysis, agentic AI can map relationships among these parties to uncover patterns, such as the same garage appearing in multiple suspicious claims. Without AI, these organized fraud rings might go undetected.
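At its core, this kind of network analysis links claims to shared entities and flags unusually connected ones. The toy example below – with invented claims data and an arbitrary threshold – shows the idea:

```python
# Minimal sketch of network analysis over claims: count how many distinct
# policyholders are linked to each third party (e.g. a garage) and flag
# entities with unusually high connectivity. Data and threshold are
# illustrative assumptions, not real records.

from collections import defaultdict

claims = [
    {"claim_id": "C1", "policyholder": "P1", "garage": "FastFix Garage"},
    {"claim_id": "C2", "policyholder": "P2", "garage": "FastFix Garage"},
    {"claim_id": "C3", "policyholder": "P3", "garage": "FastFix Garage"},
    {"claim_id": "C4", "policyholder": "P4", "garage": "Main St Motors"},
]

def suspicious_garages(claims, min_distinct_policyholders=3):
    """Garages tied to many distinct policyholders may indicate a ring."""
    links = defaultdict(set)
    for c in claims:
        links[c["garage"]].add(c["policyholder"])
    return [g for g, holders in links.items()
            if len(holders) >= min_distinct_policyholders]

print(suspicious_garages(claims))  # ['FastFix Garage']
```

Real deployments build full graphs across claims, people, addresses, bank accounts and repair shops, but the principle is the same: shared nodes across supposedly unrelated claims are what expose organized rings.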

4. Claims scoring and classification

AI agents can score claims based on established risk factors, then automatically approve straightforward cases. They can also reject obviously fraudulent claims while escalating suspicious claims to (human) experts for further review.
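A simple version of this scoring-and-routing logic can be written as weighted rules. The factors, weights and thresholds below are invented for illustration; production systems combine statistical models with rules tuned by claims experts:

```python
# Hedged sketch of rule-based claims scoring and routing. Factor names,
# weights and thresholds are illustrative assumptions only.

RISK_WEIGHTS = {
    "photo_matches_known_image": 0.6,
    "claimant_in_fraud_network": 0.5,
    "claim_soon_after_policy_start": 0.2,
    "amount_far_above_typical": 0.2,
}

def score_claim(flags: dict) -> float:
    """Sum the weights of the risk factors present, capped at 1.0."""
    return min(1.0, sum(w for f, w in RISK_WEIGHTS.items() if flags.get(f)))

def route(flags: dict) -> str:
    """Map a risk score to a triage decision."""
    s = score_claim(flags)
    if s < 0.3:
        return "auto-approve"
    if s < 0.8:
        return "escalate to human expert"
    return "reject pending investigation"

print(route({}))                                   # auto-approve
print(route({"claim_soon_after_policy_start": True,
             "amount_far_above_typical": True}))   # escalate to human expert
```

The thresholds encode the balance discussed earlier: low-risk claims clear automatically, mid-range scores go to human experts, and only strong evidence triggers rejection.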

Balancing AI technology with human expertise

As technology automates more tasks, human expertise remains essential. Said another way: AI should augment – not replace – human judgment.

For example, insurers need experienced claims managers to train AI models, set benchmarks and oversee claims decisions – especially in complex or ambiguous cases. Insurance experts know how to balance different factors when evaluating potential fraud. This includes weighing competing requirements, such as reducing operational costs, managing fraud payouts and maintaining customer trust.

Establishing the right balance will improve outcomes while ensuring appropriate accountability throughout the process.

Many straightforward insurance claims will be settled in minutes by agentic AI. In order to safeguard customers' trust, though, insurers will need strong AI governance. That means ensuring that their AI platform has the security controls and governance to minimize risks, from accidental bias in claims decisions to exposure to cyberattacks. The companies that install robust AI governance will earn and protect that trust. Building systems that act fast – and act right – will define the leaders of the next decade.

– Alena Tsishchanka, Global Customer Advisory Director


A strong data foundation and transparency are key to effective AI

For agentic AI to succeed, insurers should rely on high-quality, comprehensive data while adhering to strict data privacy and security requirements. Incomplete or biased data could lead to ineffective or even harmful outcomes. For example, regulations may require gender-neutral pricing – and AI models that are not aligned with this would create legal issues.

AI systems should also be carefully designed and monitored to avoid inadvertently reinforcing existing biases. Insurers need to regularly audit their models and data sources to ensure decisions are based on relevant risk factors rather than protected characteristics.

It’s crucial to provide transparency into how decisions are made, for both customers and regulators. Agentic AI systems should provide clear explanations for why a claim is flagged as suspicious or denied. Human experts who are involved should oversee outcomes and provide recourse for customers who wish to challenge a decision.

Agentic AI requires careful planning and maintenance

Deploying agentic AI is not a one-time event – it’s an ongoing process that demands strong governance. Insurers need to test systems thoroughly before full deployment, monitor performance continuously, and be prepared to adapt as new risks or AI ethics challenges emerge. Premature or poorly governed deployment can lead to reputational damage, regulatory penalties and loss of customer trust.



About Author

Thorsten Hein

Principal Product Marketing Manager

Thorsten Hein is a Principal Product Marketing Manager in the Risk Research and Quantitative Solutions Division at SAS Institute. He specialises in global risk management operations insights in both banking and insurance, focusing on risk and finance integration, IFRS, Solvency regulations and regulatory reporting. He helps risk management stakeholders to go beyond pure regulatory compliance and drive value-based management to maximise business performance, using his wide experience to deliver both business relevance and technical coherence.
