What insurance companies need to know about the Fundamental Rights Impact Assessment (FRIA) 

The EU Artificial Intelligence Act (EU AI Act) is ushering in a new era of accountability and transparency for organizations deploying AI systems – particularly in high-impact sectors like insurance. At the heart of this shift is the Fundamental Rights Impact Assessment (FRIA), a key requirement under Article 27 of the Act.

For insurers using AI to streamline underwriting, set premiums or assess risk, understanding and preparing for the FRIA isn't just a compliance exercise – it's essential to maintaining trust, ensuring fairness and protecting the rights of your customers. 

Why insurance providers must pay attention 

The EU AI Act specifically identifies insurance as a high-risk sector. More precisely, Annex III, point 5(c) of the regulation covers AI systems used for risk assessment and pricing in life and health insurance. If your company uses AI models to calculate premiums, assess eligibility or segment customer risk profiles, you should conduct a FRIA to evaluate potential biases and ensure responsible deployment.

What the FRIA means for your organization 

The FRIA requires a structured analysis of how your AI systems impact individuals' fundamental rights. For insurance companies, this involves examining whether automated decisions could result in discrimination, unjust exclusions or lack of transparency for certain customer groups.  

For example, if your system uses health data, geographic information or behavioral metrics to adjust pricing, you'll need to assess how those features might disproportionately affect individuals based on age, disability, socio-economic status or other protected characteristics.  

Importantly, this assessment isn't a task for one team alone. It requires coordinated input from: 

  • Compliance and legal teams to interpret the regulatory requirements and document alignment,
  • Risk and actuarial departments to evaluate the potential for harm and define risk thresholds,
  • Data scientists and IT teams to explain the model logic and technical safeguards,
  • Customer experience and operations to provide insights into real-world use and customer impact,
  • Senior leadership to ensure strategic oversight and adequate resourcing.

Fig. 1: A monitoring dashboard displaying AI models, systems and use cases currently in production, categorized by risk level and key compliance criteria such as human oversight, system monitoring, conformity assessment and AI content transparency.

Key elements of the FRIA in an insurance context 

Article 27 outlines six essential components every FRIA must include – each with particular relevance to insurance companies: 

  1. System usage: Clearly explain how AI is used, such as to score individuals based on health risk factors or behavioral data to determine premiums.
  2. Usage timeline: Indicate when and how frequently the system operates. Does it assess risk at the point of application, continuously during the policy term, or only at renewal?
  3. Affected individuals: Identify customer segments that may be impacted, especially those who are potentially vulnerable, such as people with chronic health conditions or older adults.
  4. Potential harms: Explore how your AI system might lead to biased outcomes, such as unjust premium increases or coverage denials.
  5. Human oversight: Detail how decisions are reviewed or overridden, particularly in borderline or sensitive cases. This could involve setting confidence thresholds or requiring human review of decisions that negatively impact applicants.
  6. Remediation measures: Explain what happens if something goes wrong. Do you have clear procedures for customers to contest a decision? How do you handle corrections?
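
To make these six elements operational, some teams capture each assessment as a structured, versionable record. The dataclass below is a hypothetical sketch – the field names and example values are ours, not prescribed by Article 27:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FriaRecord:
    """Illustrative record mirroring the six Article 27 elements."""
    system_usage: str                # 1. how the AI system is used
    usage_timeline: str              # 2. when and how frequently it operates
    affected_individuals: List[str]  # 3. impacted customer segments
    potential_harms: List[str]       # 4. foreseeable biased or harmful outcomes
    human_oversight: str             # 5. review and override measures
    remediation_measures: str        # 6. contest and correction procedures

fria = FriaRecord(
    system_usage="Health-risk scoring to set life insurance premiums",
    usage_timeline="At application and at each annual renewal",
    affected_individuals=["applicants with chronic conditions", "older adults"],
    potential_harms=["unjust premium increases", "coverage denials"],
    human_oversight="Underwriter review below a 0.7 model-confidence threshold",
    remediation_measures="Customer appeal process with 30-day correction SLA",
)
print(all([fria.system_usage, fria.human_oversight]))  # True
```

Keeping the assessment in a structured form like this makes it easier to diff, review and resubmit when the system changes.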

Compliance is ongoing, not one-off 

Completing a FRIA is not a box-checking exercise. Once the assessment is finalized, insurance providers must notify the relevant market surveillance authority of its results – and update the assessment whenever the AI system, its data inputs or risk models change.
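
One pragmatic way to know when an update is due is to fingerprint the elements whose change should trigger a review – model version, feature set and risk parameters – and compare fingerprints on each release. This mechanism is our illustration, not something the Act prescribes:

```python
import hashlib
import json

def fria_fingerprint(model_version: str, feature_list, risk_params: dict) -> str:
    """Stable hash of the elements whose change should trigger a FRIA update.
    Feature order is normalized so only substantive changes alter the hash."""
    payload = json.dumps(
        {"model": model_version,
         "features": sorted(feature_list),
         "risk": risk_params},
        sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

old = fria_fingerprint("v1.2", ["age", "bmi"], {"max_loading": 1.5})
new = fria_fingerprint("v1.3", ["age", "bmi"], {"max_loading": 1.5})
print(old != new)  # True – the model changed, so a FRIA review is due
```

The same comparison can run in a CI pipeline so that a changed fingerprint blocks deployment until the assessment is refreshed.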

Furthermore, if your organization already conducts Data Protection Impact Assessments (DPIAs) under the GDPR – particularly relevant when processing sensitive health data – your FRIA can build on this foundation. Article 27(4) encourages using existing DPIAs as a baseline to avoid duplicating work. 

Overcoming industry-specific challenges 

The insurance industry faces several unique challenges when implementing the FRIA. Chief among them is the tension between risk-based pricing and fairness – particularly where actuarial accuracy may inadvertently disadvantage certain groups.

Internal silos between underwriting, compliance and data science teams can further complicate the picture. With frequent updates to models and data inputs, maintaining an up-to-date assessment is a resource-intensive task. 

To navigate these complexities, insurers should focus on building strong internal governance frameworks, investing in explainability tools and fostering collaboration between departments. Partnering with AI governance experts and adopting purpose-built tooling can significantly ease the burden. 

Fig. 2: Interactive model governance dashboard showing the full AI model lifecycle – from editing and pre-assessment to retirement – alongside key insights like materiality assessments, risk tiers and performance ratings.

Supporting ethical and compliant AI in insurance 

As AI continues to transform insurance, the FRIA offers an opportunity to meet regulatory expectations and build more transparent, fair and accountable systems. It's a chance to show customers that their rights are protected, even when decisions are made at the speed of algorithms.  

If your organization is preparing for the EU AI Act and needs support aligning with FRIA requirements, we're here to help.  

Filippo Prazzoli also contributed to this blog post

About Author

Claudio Senatore

Insurance Sr. Global Solution Leader

Claudio is a distinguished actuary who serves as senior global insurance solution leader within the risk, fraud and compliance solutions team at SAS Institute. As a dedicated member of the Italian Actuarial Association, he actively collaborates with both the international and European actuarial associations, especially in the data analytics and AI areas. He is vice chair of the data and AI working group of the Actuarial Association of Europe. With a diverse background in consultancy, direct insurance, and reinsurance, Claudio's expertise spans multiple domains, including insurance data analytics, property and casualty ratemaking, as well as explainable and ethical artificial intelligence.
