Your AI System Triggers Both GDPR and the AI Act. Here Is How to Handle It.
Two Regulations, One AI System, Zero Guidance
If your company uses AI to score credit applications, screen job candidates, or triage patient symptoms, congratulations: you now answer to two European regulations at once. The GDPR has governed your personal data processing since 2018. The EU AI Act adds a second layer of obligations for high-risk AI systems, with Annex III enforcement starting August 2, 2026.
Here is the problem nobody is talking about clearly: these two frameworks overlap. An AI system processing personal data for a high-risk use case triggers a Data Protection Impact Assessment (DPIA) under GDPR Article 35 AND a Fundamental Rights Impact Assessment (FRIA) under AI Act Article 27. Different legal bases. Different scope. Different enforcement bodies. Same system.
Most companies are treating GDPR compliance and AI Act compliance as two separate projects run by two separate teams. That is the wrong approach, and it is going to cost them.
The EDPB Finally Clarified AI Training Data Rules
For years, the biggest question in AI compliance was whether you could train models on personal data without explicit consent. The EDPB answered in Opinion 28/2024 (December 2024): yes, legitimate interest CAN serve as a legal basis for AI model training. But only if you pass a three-step balancing test.
The test requires you to:
- Identify a legitimate interest that justifies processing. "We want a better model" is not specific enough. "Fraud detection for banking transactions" is.
- Prove necessity. Could you achieve the same result with less data or synthetic data? If yes, you fail this step.
- Balance against data subject rights. The EDPB explicitly requires "enhanced transparency" beyond standard GDPR Articles 13 and 14, pseudonymization, and opt-out mechanisms from the start.
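The three steps above behave like a gate you can run before any training job touches personal data. Here is a minimal Python sketch; the class name, fields, and the list of "generic" purposes are our illustrative assumptions, not terminology from the EDPB opinion:

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Record of the EDPB three-step balancing test (illustrative fields)."""
    purpose: str                 # step 1: the specific interest, e.g. "fraud detection for banking transactions"
    less_data_possible: bool     # step 2: could less data or synthetic data achieve the same result?
    enhanced_transparency: bool  # step 3: notices beyond GDPR Articles 13/14
    pseudonymized: bool          # step 3: pseudonymization in place
    opt_out_available: bool      # step 3: opt-out mechanism from the start

    def passes(self) -> bool:
        # Step 1: the interest must be specific, not generic.
        if not self.purpose or self.purpose.lower() in {"better model", "improve ai"}:
            return False
        # Step 2: necessity fails if the same result is achievable with less data.
        if self.less_data_possible:
            return False
        # Step 3: balancing requires all three safeguards together.
        return self.enhanced_transparency and self.pseudonymized and self.opt_out_available
```

The point of the sketch is that step 3 is conjunctive: missing any one safeguard fails the whole test, which mirrors how the Garante treated OpenAI's stacked violations.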
The Italian Garante's EUR 15 million fine against OpenAI (December 2024) shows what happens when you skip this test. OpenAI trained ChatGPT on personal data without identifying a legal basis first. The Garante found violations of transparency, legal basis, and age verification requirements all at once.
The EDPB opinion also introduced a critical concept: AI models CAN be considered anonymous if "identification and data extraction are very unlikely." That matters because anonymous models fall outside GDPR scope entirely. But proving anonymity requires technical evidence, not just a statement in your privacy policy.
Where the AI Act Adds a Second Layer
The AI Act does not replace GDPR. It stacks on top of it. If your AI system processes personal data AND falls under Annex III high-risk categories (employment, credit scoring, biometric identification, healthcare triage, law enforcement), you face both sets of obligations simultaneously.
Here is where it gets concrete:
Training data quality (AI Act Article 10): High-risk systems must use training datasets that are "relevant, sufficiently representative, and, to the best extent possible, free of errors and complete." GDPR's data minimization principle says collect as little as possible. AI Act Article 10 says your data must be comprehensive enough to avoid bias. These pull in opposite directions. You need a data governance framework that satisfies both, which means documenting why each data category is necessary for model quality while minimizing what you actually retain.
Impact assessments (GDPR Article 35 + AI Act Article 27): A DPIA evaluates privacy risks to data subjects. A FRIA evaluates risks to all fundamental rights in the EU Charter, including non-discrimination, fair trial, and human dignity. The good news: Article 27(4) of the AI Act explicitly allows you to reuse your DPIA as input to the FRIA. In practice, a thorough DPIA covers roughly 30-40% of what a FRIA requires, especially around data processing risks and data subject impacts.
Documentation and transparency: Both regulations require extensive documentation, but the formats differ. GDPR requires records of processing activities (Article 30) and a DPIA report. The AI Act requires technical documentation under Annex IV covering everything from system architecture to training data provenance. Running these as separate documentation projects means duplicating work. Running them as a unified process saves weeks.
The EDPB Warns That the Digital Omnibus Weakens GDPR Protections
The EDPB-EDPS Joint Opinion 1/2026 (January 2026) raised a specific alarm about the Digital Omnibus proposal. The Omnibus introduces a new Article 4a that would extend the legal basis for processing special category data (GDPR Article 9 data, the sensitive stuff: health, biometrics, political opinions, sexual orientation) for bias detection and correction. And it would apply not just to high-risk AI systems, but to ALL AI systems and models.
That means a chatbot provider could theoretically process health data for bias testing under the AI Act's legal basis, even though GDPR Article 9 normally restricts that processing to a narrow set of conditions.
The EDPB is pushing back hard. Their concern: this creates a backdoor around GDPR protections that took years to establish. The trilogue outcome is uncertain, but companies should plan for the strictest interpretation. If you process special category data with AI, assume you need explicit consent or a specific exemption under both frameworks.
How to Build a Unified Compliance Process
Stop running two parallel projects. Here is what a unified AI Act / GDPR assessment looks like in practice:
Step 1: AI System Inventory. Map every AI system that processes personal data. For each system, document: what personal data categories it processes, what legal basis you rely on (GDPR), whether it falls under Annex III (AI Act), and who the deployer and provider are (they may have different obligations).
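An inventory entry can be as simple as one structured record per system. This sketch shows the fields Step 1 calls for; the class name, the category set, and the `dual_assessment_needed` rule are our illustrative assumptions:

```python
from dataclasses import dataclass

# Annex III high-risk categories mentioned in this article (non-exhaustive, illustrative labels)
ANNEX_III_CATEGORIES = {"employment", "credit_scoring", "biometric_id",
                        "healthcare_triage", "law_enforcement"}

@dataclass
class AISystemRecord:
    name: str
    data_categories: list[str]  # personal data categories processed
    legal_basis: str            # GDPR Article 6 basis, e.g. "legitimate_interest"
    use_case: str               # mapped to an Annex III category where applicable
    provider: str               # provider and deployer may carry different obligations
    deployer: str

    @property
    def high_risk(self) -> bool:
        return self.use_case in ANNEX_III_CATEGORIES

    @property
    def dual_assessment_needed(self) -> bool:
        # DPIA + FRIA when personal data meets an Annex III use case
        return self.high_risk and bool(self.data_categories)
```

Filtering the inventory on `dual_assessment_needed` gives you the shortlist of systems that must go through Step 2.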
Step 2: Unified Impact Assessment. Start with a DPIA. Then extend it to cover FRIA requirements. The FRIA adds fundamental rights beyond privacy: non-discrimination, access to justice, freedom of expression. Add sections for AI-specific risks: model drift, data quality degradation, adversarial attacks. One document, two compliance checkboxes.
Step 3: Training Data Audit. For each dataset used in model training or fine-tuning, document: the legal basis for collection, whether the EDPB three-step balancing test passes, retention periods, and bias assessment results. Cross-check that your data minimization approach (GDPR) does not compromise data representativeness (AI Act Article 10).
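The Step 3 checklist lends itself to an automated gap report per dataset. A minimal sketch, assuming a plain dict per dataset; the field names and gap messages are illustrative, not regulatory terms:

```python
def audit_dataset(record: dict) -> list[str]:
    """Return the compliance gaps found in one training-dataset record (illustrative checks)."""
    gaps = []
    if not record.get("legal_basis"):
        gaps.append("no GDPR legal basis documented")
    if record.get("legal_basis") == "legitimate_interest" and not record.get("balancing_test_passed"):
        gaps.append("EDPB three-step balancing test not passed")
    if not record.get("retention_period"):
        gaps.append("no retention period set")
    if not record.get("bias_assessment_done"):
        gaps.append("no bias assessment (AI Act Article 10 representativeness)")
    return gaps
```

Running this over every dataset in the inventory turns the audit from a one-off document into a repeatable check you can rerun after each retraining.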
Step 4: Technical Documentation. Build Annex IV documentation that references your DPIA findings. Include your data governance practices, bias testing methodology, and human oversight mechanisms. This documentation serves both the AI Act conformity assessment AND your GDPR accountability obligations.
Step 5: Ongoing Monitoring. GDPR requires periodic review of DPIAs for significant changes. The AI Act requires post-market monitoring for high-risk systems. Set up a single monitoring pipeline that flags both data protection incidents and model performance degradation.
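The single pipeline from Step 5 can start as one event router. This sketch is an assumption-heavy illustration: the event shapes, the flag labels, and the 0.2 drift threshold are ours, not values from either regulation:

```python
def monitor(events: list[dict]) -> list[tuple[str, str]]:
    """Route mixed compliance events to one flag queue (illustrative sketch)."""
    flags = []
    for e in events:
        if e["type"] == "data_breach":
            # GDPR track: personal data incidents need breach-notification handling
            flags.append(("gdpr_incident", e["system"]))
        elif e["type"] == "model_drift" and e["severity"] >= 0.2:
            # AI Act track: significant drift feeds post-market monitoring
            flags.append(("ai_act_postmarket", e["system"]))
    return flags
```

The design point is that one queue sees both event classes, so a drift incident on a system that also had a breach never falls between two teams.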
The Sectors Hit Hardest
Three sectors sit squarely in the overlap zone:
Healthcare: AI systems for patient triage or diagnostic support process special category health data (GDPR Article 9) and are high-risk either under Annex III (emergency healthcare patient triage) or under Annex I as safety components of medical devices. Dual DPIA/FRIA mandatory.
Financial services: Credit scoring and fraud detection AI process financial and transaction data that is, in most cases, personal data. Annex III explicitly lists AI systems for creditworthiness assessment. The CNIL's 2025 recommendations on legitimate interest for AI training provide the clearest practical guidance for this sector.
HR and recruitment: AI-assisted hiring tools process candidate data and fall under Annex III (employment, worker management). The EDPB-EDPS Joint Opinion specifically flags bias detection in employment AI as a collision point between the two frameworks.
What This Means For Your Compliance Team
The companies that will struggle most are the ones treating GDPR and AI Act as separate compliance silos. Two teams, two assessments, two documentation sets, twice the cost, and gaps in between where neither team catches the risk.
The companies that will move fastest are the ones building a unified compliance function that handles both frameworks from a single inventory, produces integrated assessments, and monitors both data protection and AI performance in one pipeline.
If your AI systems process personal data in healthcare, finance, or HR, start with the unified impact assessment. It is the single highest-value compliance action you can take right now, because it satisfies requirements from both regulations simultaneously. One assessment, not two.
We have been running these integrated assessments for European companies deploying AI in regulated sectors. If you are staring at two separate compliance projects and wondering how to merge them, that is exactly the problem we solve.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com