Fundamental Rights Impact Assessment (FRIA): Complete Guide for AI Deployers
If you deploy a high-risk AI system in the EU, you are legally required to assess its impact on fundamental rights before putting it into use. This obligation comes from Article 27 of the EU AI Act (Regulation (EU) 2024/1689), and it applies to a broad range of organizations - from banks using AI credit scoring to municipalities deploying predictive policing tools.
Yet most organizations have never heard of FRIA. They know DPIA from GDPR. They know risk assessments from ISO 27001. But the Fundamental Rights Impact Assessment is a distinct legal requirement with its own scope, methodology, and reporting obligations.
This guide covers everything you need to know: what FRIA is, who must conduct one, how it differs from a DPIA, the step-by-step process, and what happens if you skip it.
What Is a Fundamental Rights Impact Assessment?
A FRIA is a structured evaluation of how a high-risk AI system might affect the fundamental rights of individuals and groups. It is not a technical audit. It is not a performance benchmark. It focuses on the human impact - discrimination, privacy, dignity, access to services, freedom of expression, and other rights protected under the EU Charter of Fundamental Rights.
The legal basis is Article 27 of the EU AI Act, which requires deployers of high-risk AI systems to conduct an assessment of the system's impact on fundamental rights before first use.
What fundamental rights are in scope?
The EU Charter of Fundamental Rights sets out its substantive rights in 50 articles organized into six titles. The ones most relevant to AI systems include:
- Non-discrimination (Art. 21) - Does the system treat people differently based on protected characteristics?
- Privacy and data protection (Art. 7, 8) - How does the system handle personal data?
- Human dignity (Art. 1) - Does the system respect human dignity in automated decisions?
- Right to an effective remedy (Art. 47) - Can affected individuals challenge AI decisions?
- Freedom of expression (Art. 11) - Does content moderation AI restrict speech?
- Rights of the child (Art. 24) - How does the system impact minors?
- Fair and just working conditions (Art. 31) - Does the system affect working conditions?
- Access to services of general economic interest (Art. 36) - Does the system create barriers to essential services?
A proper FRIA evaluates each right that could be affected, not just the obvious ones.
Who Must Conduct a FRIA?
Article 27 places the FRIA obligation on deployers - organizations that use AI systems under their authority. This is distinct from providers (who build the systems).
Mandatory for these deployers:
- Bodies governed by public law, and private entities providing public services (education, healthcare, housing, social services), that deploy high-risk AI systems listed in Annex III - with one exception: systems used in the management and operation of critical infrastructure are carved out of the FRIA obligation
- Any deployer, public or private, of the Annex III systems used to evaluate creditworthiness or establish credit scores, or to assess risk and set pricing in life and health insurance (Annex III, points 5(b) and 5(c))
In practice, this captures deployments across most Annex III areas, including:
- Biometric identification and categorization
- Education and vocational training (access, assessment)
- Employment (recruitment, evaluation, monitoring)
- Access to essential services (credit scoring, insurance, social benefits)
- Law enforcement (risk assessment, polygraph, evidence analysis)
- Migration and border control
- Administration of justice
Key distinction: deployer vs. provider
The provider (the company that built the AI model) has separate obligations around technical documentation, conformity assessment, and CE marking. The deployer - even if they bought the system off the shelf - must independently assess its impact on fundamental rights in their specific context of use.
A bank using a third-party AI credit scoring system cannot point to the provider's conformity assessment and claim compliance. The bank must conduct its own FRIA based on how it deploys the system, what population it affects, and what safeguards it applies.
FRIA vs. DPIA: Same Thing or Different?
If you have done a Data Protection Impact Assessment under GDPR Article 35, you might wonder whether FRIA is just the same exercise with a new name. It is not. Here is a concrete comparison:
| Aspect | DPIA (GDPR Art. 35) | FRIA (AI Act Art. 27) |
|--------|---------------------|-----------------------|
| Legal basis | Regulation (EU) 2016/679 | Regulation (EU) 2024/1689 |
| Focus | Personal data processing risks | Fundamental rights impact (broader) |
| Scope | Data protection and privacy | Discrimination, dignity, access, expression, and more |
| Trigger | High-risk data processing | High-risk AI deployment |
| Who conducts | Data controller | AI deployer |
| Supervisory body | Data Protection Authority | National AI supervisory authority |
| Consultation | DPA consultation for high residual risk | Results notified to market surveillance authority |
| When | Before processing starts | Before first deployment |
| Updates | When processing changes materially | When inputs or context change materially |
Where they overlap
Both assessments share some ground:
- Both require you to identify risks before deployment
- Both demand documented mitigation measures
- Both need updating when circumstances change
- Both consider the rights of affected individuals
Where they diverge
FRIA goes beyond data protection. A DPIA asks: "Does this processing respect privacy and data protection principles?" A FRIA asks: "Does this AI system affect any fundamental right - including rights that have nothing to do with personal data?"
For example, an AI system that recommends which neighborhoods receive more police patrols might not process personal data at all. No DPIA needed. But it could affect freedom of movement, non-discrimination, and dignity for residents of those neighborhoods. A FRIA is mandatory.
Can you combine them?
Yes. Article 27(4) explicitly allows deployers to combine FRIA with an existing DPIA. If your high-risk AI system also involves high-risk data processing, you can run a single assessment that covers both - provided you address the full scope of fundamental rights, not just data protection.
This is the approach we recommend to most clients. Running parallel assessments creates duplication. A unified assessment that covers GDPR Article 35 and AI Act Article 27 is more efficient and produces a more coherent risk picture. We cover this in detail in our guide on GDPR and AI Act dual compliance.
Step-by-Step: How to Conduct a FRIA
Step 1: Scope the assessment
Before anything else, define what you are assessing:
- Which AI system? Identify the specific system, version, and provider.
- What is the use case? Describe exactly how you plan to deploy it. The same system can be low-risk in one context and high-risk in another.
- Who is affected? Identify the individuals and groups who will be subject to the system's outputs or decisions.
- What is the deployment context? Geography, sector, scale, and whether decisions are fully automated or human-reviewed.
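If you track assessments in a register or in tooling, it helps to capture these scoping answers as a structured record. The sketch below is illustrative only; the field names and example values are assumptions, not a format prescribed by Article 27.

```python
# Illustrative scoping record for Step 1. Field names and example values are
# assumptions for illustration, not a format prescribed by Article 27.
from dataclasses import dataclass

@dataclass
class FriaScope:
    system_name: str             # the specific AI system being assessed
    system_version: str          # the version you actually deploy
    provider: str                # who built the system
    use_case: str                # exactly how you plan to deploy it
    affected_groups: list[str]   # individuals and groups subject to its outputs
    geography: str               # where and at what scale it operates
    automation_level: str        # "fully_automated" or "human_in_the_loop"

scope = FriaScope(
    system_name="Credit scoring engine (hypothetical)",
    system_version="2.3.1",
    provider="ExampleVendor",
    use_case="Pre-screening consumer loan applications",
    affected_groups=["loan applicants", "co-applicants"],
    geography="Retail banking, Germany",
    automation_level="human_in_the_loop",
)
```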
Step 2: Map applicable fundamental rights
Go through the EU Charter of Fundamental Rights systematically. For each right, ask: "Could this AI system, in our specific deployment context, affect this right - positively or negatively?"
Document every right that is potentially affected, even if the risk seems low. The assessment should show you considered the full spectrum, not just the obvious candidates.
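One way to make this systematic is to encode the screening questions as a checklist and force an answer for every right. A minimal sketch, assuming the shortlist of rights from earlier in this guide (a real assessment should walk the full Charter):

```python
# Minimal screening checklist built from the rights listed earlier in this guide.
# A real assessment should cover the full Charter, not just this shortlist.
CHARTER_SCREENING = {
    "Art. 1 - Human dignity": "Does the system respect dignity in automated decisions?",
    "Art. 7/8 - Privacy and data protection": "How does the system handle personal data?",
    "Art. 11 - Freedom of expression": "Could the system restrict lawful speech?",
    "Art. 21 - Non-discrimination": "Could outcomes differ by protected characteristics?",
    "Art. 24 - Rights of the child": "Are minors among the affected population?",
    "Art. 31 - Fair and just working conditions": "Does the system change how people work or are monitored?",
    "Art. 36 - Access to services": "Could the system create barriers to essential services?",
    "Art. 47 - Effective remedy": "Can affected individuals challenge the system's decisions?",
}

# Record an answer and a short rationale for every right, even where the risk
# seems low - the point is to demonstrate full coverage.
screening = {right: {"potentially_affected": None, "rationale": ""} for right in CHARTER_SCREENING}
```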
Step 3: Assess the impact
For each affected right, evaluate:
- Likelihood: How probable is it that the system will negatively impact this right?
- Severity: If the impact occurs, how serious is it for affected individuals?
- Scale: How many people could be affected?
- Reversibility: Can the impact be undone? Denying someone a loan is reversible. A wrongful arrest based on facial recognition is not.
- Vulnerability: Are any affected groups particularly vulnerable (children, elderly, disabled, minorities)?
Use a structured scoring system - not arbitrary labels. A 5-point scale for likelihood and severity, multiplied to produce a risk score, gives you a basis for prioritization.
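A minimal sketch of such a scoring scheme, assuming 5-point scales and a simple likelihood-times-severity product; Article 27 does not prescribe any particular method, so treat the numbers as illustrative:

```python
# Illustrative 5-point likelihood x severity scoring. The scales and the
# product formula are assumptions; the AI Act prescribes no scoring method.
from dataclasses import dataclass

@dataclass
class RightImpact:
    right: str              # e.g. "Art. 21 - Non-discrimination"
    likelihood: int         # 1 (rare) .. 5 (almost certain)
    severity: int           # 1 (minor) .. 5 (severe or irreversible)
    scale: str              # how many people could be affected
    vulnerable_groups: bool # children, elderly, disabled, minorities affected?

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.severity  # 1..25

impacts = [
    RightImpact("Art. 21 - Non-discrimination", likelihood=3, severity=4,
                scale="~40,000 applicants per year", vulnerable_groups=True),
    RightImpact("Art. 47 - Effective remedy", likelihood=2, severity=3,
                scale="same population", vulnerable_groups=False),
]

# Work on the highest-scoring rights first.
for impact in sorted(impacts, key=lambda i: i.risk_score, reverse=True):
    print(f"{impact.right}: risk score {impact.risk_score}")
```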
Step 4: Identify existing safeguards
Document what protections are already in place:
- Human oversight mechanisms (who reviews AI decisions, and how?)
- Bias testing and monitoring (what metrics, how often?)
- Complaint and redress procedures (can affected people challenge decisions?)
- Data quality controls (training data audited for bias?)
- Transparency measures (are people told they are subject to AI?)
Step 5: Define additional mitigation measures
Where existing safeguards are insufficient, define new ones. Be specific:
- Bad: "We will monitor for bias."
- Good: "Monthly demographic parity analysis across gender, age, and ethnicity using the Fairlearn toolkit, with automated alerts when disparate impact ratio drops below 0.8, reviewed by the AI governance committee within 5 business days."
Each mitigation measure should have an owner, a timeline, and a success metric.
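The "good" example above can be partly automated. Here is a minimal sketch of the monthly check, assuming decisions are logged in a pandas DataFrame with the relevant demographic attributes available for testing, and using the open-source Fairlearn toolkit; the 0.8 threshold mirrors the common four-fifths disparate impact convention.

```python
# Sketch of a monthly disparate impact check using Fairlearn. Column names,
# the DataFrame shape, and the alerting logic are assumptions for illustration.
import pandas as pd
from fairlearn.metrics import demographic_parity_ratio

def monthly_bias_check(decisions: pd.DataFrame, attribute: str, threshold: float = 0.8) -> bool:
    """Return True if the disparate impact ratio for `attribute` is acceptable."""
    ratio = demographic_parity_ratio(
        y_true=decisions["approved"],   # the metric only uses predictions,
        y_pred=decisions["approved"],   # but the signature requires y_true too
        sensitive_features=decisions[attribute],
    )
    if ratio < threshold:
        # In production, route this to the AI governance committee instead of printing.
        print(f"ALERT: disparate impact ratio {ratio:.2f} below {threshold} for {attribute}")
        return False
    return True

# Example usage, assuming a `decisions_last_month` DataFrame exists:
# for attr in ["gender", "age_band", "ethnicity"]:
#     monthly_bias_check(decisions_last_month, attr)
```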
Step 6: Calculate residual risk
After mitigation, reassess each risk. If residual risk remains high for any fundamental right, you have three options:
- Add more safeguards until residual risk is acceptable
- Restrict the deployment (narrower scope, more human oversight)
- Do not deploy the system
Article 27 does not set an explicit threshold for acceptable residual risk. But if your assessment shows significant unmitigated impact on fundamental rights, deploying the system exposes you to enforcement action and liability.
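A minimal sketch of how that decision logic might look if you reuse the scoring scheme from Step 3; the thresholds are assumptions, since the AI Act sets no numeric cut-off for acceptable residual risk:

```python
# Illustrative mapping from a post-mitigation risk score to the three options
# above. Thresholds are assumptions; Article 27 defines no numeric cut-off.
def residual_risk_decision(likelihood: int, severity: int) -> str:
    score = likelihood * severity  # re-scored AFTER mitigation measures
    if score >= 15:
        return "do not deploy, or add further safeguards and reassess"
    if score >= 8:
        return "restrict deployment: narrower scope, stronger human oversight"
    return "acceptable residual risk: document the justification and deploy"

print(residual_risk_decision(likelihood=2, severity=3))  # -> acceptable residual risk
```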
Step 7: Document and notify
The FRIA must be documented and its results submitted to the relevant market surveillance authority. Article 27(3) requires deployers to notify the authority of the results of the assessment.
Your documentation should include:
- System identification and provider details
- Deployment context and affected population
- Rights impact analysis (each right, likelihood, severity, scale)
- Existing and additional safeguards
- Residual risk assessment
- Decision to deploy (with justification)
- Monitoring plan for ongoing assessment
Step 8: Establish ongoing monitoring
A FRIA is not a one-time exercise. You must update it when:
- The AI system is updated or retrained
- The deployment context changes (new geography, new user group)
- Monitoring reveals unexpected impacts
- Complaints or incidents indicate rights violations
- Regulatory guidance or case law changes interpretation
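It can help to encode the monitoring plan and its reassessment triggers as explicit configuration rather than leaving them buried in a document. A sketch with assumed keys, metrics, and thresholds:

```python
# Illustrative monitoring plan. Keys, metrics, and thresholds are assumptions,
# not terms defined by the AI Act.
MONITORING_PLAN = {
    "scheduled_review_every_months": 6,
    "reassess_on": [
        "system update or retraining",
        "new geography or user group",
        "monitoring reveals unexpected impacts",
        "complaint or incident indicating a rights violation",
        "new regulatory guidance or case law",
    ],
    "metrics": {
        "disparate_impact_ratio": {"min": 0.8, "frequency": "monthly"},
        "human_override_rate": {"frequency": "monthly"},
        "complaints_upheld": {"frequency": "quarterly"},
    },
    "escalation": "AI governance committee within 5 business days",
}
```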
FRIA Documentation Template
Here is a practical framework for structuring your FRIA document:
FRIA Document Structure:
1. Executive Summary:
- System name and version
- Deployer organization
- Assessment date and assessor
- Key findings and overall risk rating
2. System Description:
- Provider and technical specifications
- AI technique (ML model type, training approach)
- Input data sources and types
- Output types and decision scope
- Integration with existing processes
3. Deployment Context:
- Use case description
- Geographic scope
- Affected population (size, demographics)
- Automation level (fully automated vs. human-in-the-loop)
4. Fundamental Rights Analysis:
- For each potentially affected right:
- Right description and Charter article
- How the system could impact this right
- Likelihood score (1-5)
- Severity score (1-5)
- Scale of impact
- Vulnerability of affected groups
- Risk score (likelihood x severity)
5. Safeguards and Mitigation:
- Existing safeguards per right
- Additional measures required
- Owner and timeline for each measure
- Residual risk after mitigation
6. DPIA Integration (if applicable):
- Reference to existing DPIA
- Additional data protection measures
- DPO involvement and sign-off
7. Decision and Justification:
- Deploy / deploy with conditions / do not deploy
- Justification for residual risk acceptance
- Conditions and restrictions
8. Monitoring Plan:
- Metrics to track
- Frequency of review
- Trigger conditions for reassessment
- Reporting chain
9. Stakeholder Consultation:
- Groups consulted
- Feedback received
- How feedback was incorporated
10. Annexes:
- Technical documentation from provider
- Bias test results
- Complaint procedures
- Authority notification records
Penalties for Non-Compliance
The AI Act enforcement regime is tiered. For FRIA-related violations:
- Failure to conduct a FRIA when required: fines of up to 15 million EUR or 3% of global annual turnover, whichever is higher
- Inadequate FRIA (missing rights, insufficient analysis): subject to corrective measures and potential fines
- Failure to notify the market surveillance authority: administrative penalties
These are maximums. National authorities will consider the nature, gravity, and duration of the infringement, the size of the organization, and whether it has taken corrective action.
But fines are not the only risk. An AI system deployed without a FRIA that causes fundamental rights harm exposes the deployer to civil liability claims from affected individuals. The financial exposure from a class of affected people can exceed regulatory fines.
Common Mistakes in FRIA Assessments
Having reviewed dozens of impact assessments across industries, we see the same patterns create problems:
- Treating FRIA as a checkbox. Filling in a template without genuine analysis. Authorities will look at the depth and specificity of your assessment.
- Only assessing the obvious rights. A credit scoring FRIA that covers non-discrimination but ignores access to services, dignity, and the right to an effective remedy is incomplete.
- Copy-pasting the provider's documentation. The provider's conformity assessment covers the system in general. Your FRIA is about how you deploy it, in your context, affecting your users.
- No stakeholder consultation. Article 27 does not explicitly require public consultation, but assessments that include input from affected communities are significantly stronger and more credible.
- No update plan. A FRIA written at deployment and never revisited becomes stale within months as the system evolves and its context changes.
Frequently Asked Questions
Does FRIA apply to AI systems already in production?
Yes. The AI Act includes transitional provisions, but deployers of existing high-risk AI systems must conduct a FRIA before the relevant compliance deadlines. For most high-risk systems in Annex III, this deadline is August 2, 2026. Systems already deployed must be assessed retroactively.
Can we hire a third party to conduct our FRIA?
Yes. Nothing in Article 27 requires the deployer to conduct the FRIA internally. You can engage external consultants or law firms. However, the responsibility for the assessment remains with the deployer. If the third party produces an inadequate FRIA, the deployer faces the regulatory consequences - not the consultant.
How does FRIA interact with the provider's conformity assessment?
They are complementary but distinct. The provider conducts a conformity assessment (Article 43) to demonstrate the system meets technical requirements. The deployer conducts a FRIA (Article 27) to assess how the system affects fundamental rights in their specific deployment. The provider's technical documentation should feed into your FRIA, but it does not replace it.
What if our AI system processes no personal data - do we still need a FRIA?
Yes, if the system is high-risk under Annex III. FRIA covers all fundamental rights, not just privacy and data protection. An AI system that makes decisions about resource allocation, infrastructure management, or public services can affect fundamental rights without processing any personal data.
Next Steps
If you deploy high-risk AI systems in the EU, FRIA compliance is not optional and the deadline is approaching. The organizations that start now will have a structured process in place before enforcement begins. Those that wait will scramble.
Start with our free EU AI Act Risk Assessment to determine which of your AI systems are classified as high-risk and whether you need a FRIA.
If you need help conducting a FRIA or building a combined FRIA/DPIA framework, check our compliance and AI security services. We work with organizations across the EU to build assessment processes that satisfy regulators and protect the people affected by AI systems.
For a broader overview of your obligations, read our EU AI Act Compliance Guide.