
EU AI Act vs GDPR: What Your DPO Needs to Know

Petru Constantin
13 min read
#eu-ai-act #gdpr #compliance #dpo #data-protection


Your DPO Already Knows Half of What is Coming

If your organization has a functioning Data Protection Officer, you are not starting from zero on the EU AI Act. The GDPR built a compliance muscle that most companies never had before 2018: documented processing activities, impact assessments, supervisory authority relationships, data subject rights workflows. The AI Act builds on top of that foundation.

But "builds on top" does not mean "is the same thing." The AI Act introduces concepts your DPO has never dealt with: risk classification for AI systems, conformity assessments, technical documentation under Annex IV, and a human oversight requirement that goes well beyond anything GDPR Article 22 asks for.

This article breaks down the two regulations side by side: where they overlap, where they diverge, and where your DPO needs to expand their skill set or bring in specialists.

Scope: What Each Regulation Covers

GDPR applies to personal data processing. If your system touches personal data in any way - collecting it, storing it, analyzing it, making decisions based on it - GDPR applies. It does not matter whether AI is involved or not.

The EU AI Act applies to AI systems regardless of whether they process personal data. A predictive maintenance algorithm analyzing vibration data from factory equipment falls under the AI Act even though no personal data is involved. An AI system that scores employee performance falls under both.

Here is the practical impact: your DPO's current register of processing activities (GDPR Article 30) covers only the data angle. The AI Act requires a separate AI system inventory that maps every AI system by risk category - unacceptable, high-risk, limited, or minimal risk. Many high-risk AI systems in Annex III also process personal data, which means dual obligations.

What your DPO needs to do: Build an AI system inventory that cross-references with the existing ROPA. Every AI system that processes personal data should appear in both registers. Every AI system that falls under Annex III needs additional documentation regardless of data involvement.
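That cross-reference can start as something as simple as a script over both registers. The sketch below is illustrative only: the system names, risk labels, and register contents are hypothetical examples, and actual Annex III classification always needs legal review.

```python
# Illustrative cross-check of an AI system inventory against a GDPR ROPA.
# System names, risk labels, and register contents are hypothetical examples.

def classify(ai_systems, ropa_entries):
    """Flag dual-register systems and gaps between the two registers."""
    report = {}
    for s in ai_systems:
        report[s["name"]] = {
            # Personal data => the system must appear in both registers
            "dual_register": s["personal_data"],
            "missing_from_ropa": s["personal_data"] and s["name"] not in ropa_entries,
            # Annex III high-risk => extra AI Act documentation regardless of data
            "needs_ai_act_docs": s["risk"] == "high",
        }
    return report

inventory = [
    {"name": "hr-screening", "risk": "high", "personal_data": True},
    {"name": "predictive-maintenance", "risk": "minimal", "personal_data": False},
    {"name": "credit-scoring", "risk": "high", "personal_data": True},
]
ropa = {"hr-screening"}  # credit-scoring was never added to the GDPR register

report = classify(inventory, ropa)
```

Run against a real inventory, a check like this surfaces exactly the two failure modes described above: a personal-data AI system missing from the ROPA, and an Annex III system with no AI Act documentation trail.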

Risk Approach: DPIA vs Fundamental Rights Impact Assessment

Your DPO runs DPIAs. They know GDPR Article 35: when processing is likely to result in a high risk to the rights and freedoms of natural persons, you assess the impact before you start processing.

The AI Act introduces the Fundamental Rights Impact Assessment (FRIA) under Article 27. It applies to deployers of high-risk AI systems. The FRIA is broader than a DPIA. While a DPIA focuses on privacy and data protection risks, the FRIA evaluates risks to all fundamental rights in the EU Charter of Fundamental Rights: non-discrimination (Article 21), human dignity (Article 1), freedom of expression (Article 11), right to an effective remedy (Article 47), and more.

A practical comparison:

DPIA (GDPR Article 35):

  • Triggered by high-risk personal data processing
  • Focuses on privacy risks to data subjects
  • Evaluates necessity and proportionality of processing
  • Requires consultation with the DPO (Article 35(2))
  • Must be done before processing begins
  • Reviewed when processing changes

FRIA (AI Act Article 27):

  • Triggered by deploying a high-risk AI system (Annex III)
  • Covers all EU Charter fundamental rights, not just privacy
  • Evaluates impact on affected groups, including indirect effects
  • Requires input from multiple stakeholders, not just the DPO
  • Must be completed before first deployment
  • Must be updated when significant changes occur

The good news: Article 27(4) explicitly allows you to combine the FRIA with an existing DPIA. If you already run a thorough DPIA on an AI system that processes personal data, roughly 30-40% of the FRIA work is already done - specifically the data protection risk sections and the assessment of impact on individuals.

The bad news: the other 60-70% is new territory. Evaluating risks to non-discrimination, access to justice, or democratic participation is not something most DPOs have been trained to do. This is where legal counsel with AI expertise or an external compliance partner adds value.

For a deeper look at running unified assessments, see our guide on GDPR and AI Act dual compliance.

Documentation Requirements: Similar Philosophy, Different Outputs

Both regulations are documentation-heavy. Your DPO is used to maintaining Records of Processing Activities (Article 30), DPIA reports, data breach registers, consent records, and data subject request logs. The AI Act demands its own documentation stack.

GDPR documentation your DPO already maintains:

  • Records of Processing Activities (Article 30)
  • Data Protection Impact Assessments (Article 35)
  • Data breach notification records (Articles 33-34)
  • Data subject request logs (Articles 15-22)
  • Processor agreements and transfers documentation (Articles 28, 46)
  • Legitimate interest assessments (Article 6(1)(f))

AI Act documentation that is new:

  • Technical documentation per Annex IV (system architecture, training data provenance, performance metrics, design choices, known limitations)
  • Conformity assessment results (for high-risk systems)
  • Quality management system documentation (Article 17)
  • AI system registration in the EU database (Article 71)
  • Post-market monitoring plan (Article 72)
  • Logs of automatic recording (Article 19) - retained for minimum 6 months
  • Instructions for use documentation for downstream deployers (Article 13)

The Annex IV technical documentation is the biggest new requirement. It asks for details that live in engineering, not in the DPO's office: model architecture descriptions, training and validation data characteristics, design choices and their rationale, computational resources used, performance metrics across different population subgroups. Your DPO cannot produce this alone. They need a direct line to the AI engineering team.

What your DPO needs to do: Establish a documentation workflow where the DPO owns the GDPR layer and the AI compliance team (or designated person) owns the AI Act layer, with clear handoff points. Do not duplicate effort. A unified document management system that maps requirements from both regulations to the same AI system saves significant time.
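One way to picture that unified mapping is a per-system gap analysis that tracks the GDPR layer and the AI Act layer side by side. The document lists below are condensed, hypothetical examples, not an exhaustive compliance checklist.

```python
# Hypothetical unified requirement map for one high-risk AI system.
# Article lists are condensed examples, not an exhaustive checklist.

GDPR_DOCS = {"ROPA (Art. 30)", "DPIA (Art. 35)"}
AI_ACT_DOCS = {"Annex IV tech docs", "FRIA (Art. 27)", "QMS (Art. 17)",
               "Post-market monitoring plan (Art. 72)"}

def gap_analysis(existing_docs, processes_personal_data):
    """Return the documents still missing for one high-risk AI system."""
    required = set(AI_ACT_DOCS)
    if processes_personal_data:
        required |= GDPR_DOCS  # dual obligations: both layers apply
    return required - set(existing_docs)

# A system with GDPR docs done but no AI Act layer yet:
gaps = gap_analysis({"ROPA (Art. 30)", "DPIA (Art. 35)"}, processes_personal_data=True)
```

A system that is GDPR-complete still shows every AI Act document as a gap, which is precisely the situation most organizations with mature DPO functions are in today.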

Oversight: DPO Role vs Human Oversight Under Article 14

Under GDPR, the DPO's role is defined in Articles 37-39: inform and advise, monitor compliance, cooperate with the supervisory authority, serve as contact point. The DPO does not make business decisions. They advise.

The AI Act's Article 14 introduces "human oversight" as a design and deployment requirement for high-risk systems. This is fundamentally different from the DPO role. Human oversight under Article 14 means:

  • A natural person can understand the AI system's capabilities and limitations
  • That person can correctly interpret outputs and decide when to override them
  • They can intervene or stop the system (the "stop button" requirement)
  • They are aware of automation bias and can resist it
  • For biometric identification systems, at least two people must verify results before acting

This is not a monitoring role. It is an operational role. The person exercising human oversight needs to understand the AI system technically, not just legally. They need training on how the system works, what its failure modes are, and when to override its outputs.

Can the DPO serve as the human oversight person? Technically, the AI Act does not prohibit it. Practically, it creates a conflict of interest. The DPO advises on compliance. The human oversight person makes operational decisions about when to trust or override AI outputs. Combining these roles dilutes both. For high-risk systems, designate separate individuals.

What your DPO needs to do: Help define who the human oversight persons are for each high-risk AI system. Ensure those persons receive adequate training per Article 14(4). Document the oversight arrangements as part of the quality management system.

Penalties: The Numbers Changed

GDPR penalties are already significant. Your DPO knows the structure:

  • Up to EUR 20 million or 4% of total worldwide annual turnover (whichever is higher) for the most serious violations (Articles 5, 6, 7, 9, 12-22, 44-49)
  • Up to EUR 10 million or 2% of turnover for other violations (Articles 8, 11, 25-39, 42-43)

The AI Act raises the ceiling:

  • Up to EUR 35 million or 7% of total worldwide annual turnover for prohibited AI practices (Article 99(3))
  • Up to EUR 15 million or 3% of turnover for high-risk system violations (Article 99(4))
  • Up to EUR 7.5 million or 1% of turnover for supplying incorrect information to authorities (Article 99(5))
  • SMEs and startups get proportional caps (Article 99(6))

For a company with EUR 500 million in annual revenue, the maximum penalty jumps from EUR 20 million (GDPR) to EUR 35 million (AI Act). And these penalties can stack. A high-risk AI system that processes personal data without proper safeguards could trigger GDPR fines AND AI Act fines from different enforcement bodies.
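The arithmetic behind that jump is the "flat amount or percentage of worldwide turnover, whichever is higher" rule that both regulations use for their caps. The helper below is purely illustrative; actual fines are set case by case by the enforcement bodies.

```python
def max_fine(turnover_eur, flat_cap_eur, pct_cap):
    """Cap under the 'whichever is higher' rule: flat amount vs % of turnover."""
    return max(flat_cap_eur, pct_cap * turnover_eur)

turnover = 500_000_000  # EUR 500M annual revenue, as in the example above

gdpr_cap = max_fine(turnover, 20_000_000, 0.04)    # top GDPR tier: EUR 20M or 4%
ai_act_cap = max_fine(turnover, 35_000_000, 0.07)  # prohibited practices: EUR 35M or 7%
```

Note that the percentage branch dominates quickly: at EUR 2 billion turnover, the prohibited-practices cap is 7% of turnover (EUR 140 million), not the EUR 35 million flat amount.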

The practical risk is not just financial. Both regulations allow supervisory authorities to order you to stop processing (GDPR Article 58) or withdraw your AI system from the market (AI Act Article 16(j)). An AI system shutdown can be more damaging to operations than any fine.

Enforcement: Two Regulators, One System

Under GDPR, your DPO deals with one lead supervisory authority (usually the DPA of the member state where your main establishment is located). The one-stop-shop mechanism (Article 56) simplifies cross-border enforcement.

The AI Act does not replicate this clean structure. Each EU member state must designate a national competent authority for AI (Article 70), and it does not have to be the same body as the DPA. In some countries, the DPA will handle both. In others, a separate authority will handle AI Act enforcement. The AI Office in Brussels handles general-purpose AI models (like GPT-4 or Gemini) directly.

What this means for your DPO: they may need relationships with two supervisory authorities instead of one. The GDPR DPA for data protection matters. The national AI authority for AI Act compliance. When both regulations are triggered by the same system, coordination between authorities is supposed to happen (Article 74(9) of the AI Act), but the mechanisms are still being established.

Timeline: What is Already in Force and What is Coming

Your DPO needs to track these dates:

Already in force:

  • February 2, 2025: Prohibited AI practices ban (Article 5) - unacceptable risk systems are already illegal
  • February 2, 2025: AI literacy obligation (Article 4) - your staff using AI systems must be trained

Coming soon:

  • August 2, 2025: Rules for general-purpose AI models (Chapter V) and governance structures
  • August 2, 2026: Full enforcement of high-risk AI system obligations (Annex III). This is the big one.
  • August 2, 2027: High-risk AI systems that are also regulated products (Annex I) - medical devices, machinery, etc.

August 2, 2026 is the date that matters most for your DPO. Every high-risk AI system deployed after that date must have a FRIA completed, technical documentation ready, quality management systems in place, and human oversight arrangements defined.

That is roughly 16 months from now. If you have not started your AI system inventory, you are behind.

What Your DPO Should Do in the Next 90 Days

Week 1-2: AI System Inventory. Map every AI system in your organization. Cross-reference with your ROPA. Flag systems that fall under Annex III high-risk categories. Prioritize by deployment date and data sensitivity.

Week 3-4: Gap Analysis. For each high-risk AI system, compare current GDPR documentation against AI Act requirements. Identify what is missing: technical documentation, FRIA, quality management system, human oversight arrangements, post-market monitoring plan.

Week 5-8: Organizational Design. Define roles. Who owns AI compliance? Who performs human oversight for each system? How does information flow between the DPO, the AI team, and business owners? Do you need external support for the FRIA or conformity assessments?

Week 9-12: Implementation Start. Begin with your highest-risk AI system. Run a combined DPIA/FRIA. Draft the Annex IV technical documentation with engineering. Set up the quality management framework. Register the system in the EU database.

If this sounds like a lot, it is. GDPR readiness took most organizations 12-18 months. The AI Act adds comparable complexity, but you have a head start because the compliance infrastructure already exists.

Where DeviDevs Helps

We work with DPOs and compliance teams who know GDPR cold but need AI Act expertise. Our EU AI Act risk assessments map your AI systems against Annex III categories, identify dual-regulation systems, and produce unified DPIA/FRIA documentation that satisfies both frameworks.

We also help with the technical documentation that DPOs cannot produce alone - model architecture reviews, training data audits, and bias testing that feeds directly into your Annex IV documentation.

Check our full compliance services or read our detailed walkthrough on running a unified GDPR and AI Act assessment.


Frequently Asked Questions

Does the AI Act replace GDPR for AI systems?

No. The AI Act does not replace GDPR. Both apply simultaneously when an AI system processes personal data. Article 2(7) of the AI Act explicitly states that Union law on data protection continues to apply. Your DPO's GDPR obligations remain unchanged. The AI Act adds a second layer of requirements on top.

Can the DPO also serve as the AI compliance officer?

Legally, yes - the AI Act does not prohibit it. Practically, it depends on the organization's size and risk profile. For organizations with a handful of low-risk AI systems, combining roles may work. For organizations deploying multiple high-risk AI systems, the DPO already has a full-time job. Article 14 human oversight requirements alone demand dedicated personnel who understand the AI systems technically, not just legally.

Do I need a FRIA if I already have a DPIA?

Yes. A DPIA alone does not satisfy the FRIA requirement under Article 27. However, Article 27(4) allows you to reuse DPIA content as input to the FRIA. The FRIA covers fundamental rights beyond data protection - non-discrimination, human dignity, access to justice - that a standard DPIA does not address. Think of it as extending your DPIA, not replacing it.

Which authority enforces the AI Act - my existing DPA or a new body?

It depends on your member state. Each country must designate a national competent authority (Article 70). Some countries will assign this to their existing DPA. Others will create a new body or assign it to a different regulator (for example, a market surveillance authority). At the time of writing, several member states are still finalizing their designations. Your DPO should track this and establish relationships with whichever authority is designated.

What happens if I miss the August 2026 deadline?

High-risk AI systems deployed after August 2, 2026 without proper compliance face penalties of up to EUR 15 million or 3% of worldwide annual turnover. But the bigger risk is operational: authorities can order you to withdraw the AI system from the market entirely. If that system is core to your business operations - say, a credit scoring model or an HR screening tool - the business impact of withdrawal far exceeds the fine.

Are SMEs treated differently under the AI Act?

Yes. Article 99(6) sets proportional penalty caps for SMEs and startups. The AI Office is also developing regulatory sandboxes (Article 57) where smaller companies can test AI systems under regulatory supervision before full deployment. Your DPO should investigate whether your organization qualifies and whether a sandbox is available in your jurisdiction.

