EU AI Act


Petru Constantin
13 min read
#eu-ai-act #romania #ancom #anspdcp #compliance #enforcement

Who Enforces the AI Act in Romania: ANCOM, ASF, ANSPDCP - Complete Guide

On March 12, 2026, Romania's government adopted a memorandum designating the national authorities responsible for enforcing the EU AI Act (Regulation 2024/1689). This was not a surprise - Article 70 required every member state to designate competent authorities by August 2, 2025. Romania missed that deadline by seven months.

But the designations are now official, and they matter. If you operate AI systems in Romania - whether you built them or just deploy them - you now know exactly who will come knocking.

Here is the full breakdown of who does what, what powers they have, and what you should do about it.

The Government Memorandum: What Happened on March 12

The Romanian government approved a memorandum establishing the institutional framework for AI Act enforcement. The document assigns specific roles to existing regulators rather than creating a new AI-specific agency. This is the approach most EU member states have taken - layering AI oversight onto regulators who already understand the sectors where AI gets deployed.

The key designations:

  • ANCOM (National Authority for Management and Regulation of Communications) - market surveillance authority and national contact point
  • ASF (Financial Supervisory Authority) and BNR (National Bank of Romania) - financial sector AI oversight
  • ANSPDCP (National Supervisory Authority for Personal Data Processing) - biometric, law enforcement, migration, justice, and democratic process AI
  • ADR (Agency for Digital Romania) - notification authority for conformity assessment bodies
  • ASRO / CT 401 - AI standardization committee

Let's go through each one.

ANCOM: The Lead Enforcer

ANCOM is Romania's market surveillance authority for the AI Act. This is the big one. If you are an AI provider or deployer in Romania and your system does not fall under a sector-specific regulator, ANCOM is your primary oversight body.

What ANCOM does under the AI Act

Market surveillance. ANCOM will inspect AI systems placed on the Romanian market to verify compliance with the regulation. This includes checking technical documentation, risk management systems, data governance practices, human oversight mechanisms, and conformity declarations.

National contact point. ANCOM represents Romania in coordination with the European AI Office and other member states. When the European Commission needs to communicate with Romania on AI Act matters, ANCOM is the interface.

General-purpose AI (GPAI) monitoring. While the European AI Office has primary oversight of GPAI models (like foundation models and large language models), ANCOM will handle market surveillance of GPAI systems deployed in Romania.

Complaints and reporting. ANCOM will receive complaints about non-compliant AI systems from citizens, organizations, and other authorities. If someone reports that your AI system violates the regulation, ANCOM is where that report lands.

What this means in practice

ANCOM already regulates telecom and electronic communications. They know how to run inspections, issue fines, and enforce compliance deadlines. They are not starting from zero. But AI regulation is a fundamentally different domain from spectrum allocation and telecom licensing, so expect a learning curve - and expect ANCOM to start with the most visible, high-profile AI deployments.

ASF and BNR: Financial Sector AI

The financial sector gets its own AI oversight, split between two regulators:

ASF (Financial Supervisory Authority) covers AI systems used in insurance, capital markets, and private pensions. If you deploy an AI system for credit scoring at a non-banking financial institution, insurance underwriting, algorithmic trading, or automated investment advice, ASF is your regulator.

BNR (National Bank of Romania) covers AI systems used in banking and payment services. Credit scoring at banks, fraud detection systems, automated loan approval - these fall under BNR's oversight.

Why the financial sector gets special treatment

The AI Act classifies several financial use cases as high-risk (Annex III, point 5b): AI systems used for creditworthiness assessment and credit scoring of natural persons. Financial regulators already have deep expertise in risk management, model validation, and supervisory examination. It makes sense to let ASF and BNR handle AI compliance in their domains rather than asking ANCOM to develop financial sector expertise from scratch.

What financial institutions should expect

ASF and BNR will likely integrate AI Act requirements into their existing supervisory frameworks. If you are a bank or insurance company, expect AI compliance to become part of your regular supervisory examinations. The regulators will probably issue sector-specific guidance on how to meet AI Act requirements in financial contexts - similar to what the EBA (European Banking Authority) has already started doing at the EU level.

ANSPDCP: Biometric, Law Enforcement, and Sensitive AI

ANSPDCP - Romania's data protection authority and GDPR enforcer - takes on some of the most sensitive AI categories under the regulation.

ANSPDCP's AI Act domains

Biometric identification and categorization. Real-time and retrospective ("post") remote biometric identification systems, emotion recognition systems, and biometric categorization systems. These are some of the most restricted AI applications under the regulation, with several falling into the "prohibited" category entirely.

Law enforcement AI. AI systems used by police and judicial authorities for risk assessment, polygraphs and similar tools, evidence evaluation, crime prediction, and profiling. Romania's law enforcement agencies deploying AI tools will answer to ANSPDCP for compliance.

Migration and border control. AI systems used for asylum applications, border surveillance, and migration risk assessment.

Justice and democratic processes. AI systems used in court proceedings, legal research and interpretation, and AI used in democratic processes including election-related applications.

Why ANSPDCP makes sense for these domains

ANSPDCP already enforces GDPR, which heavily intersects with biometric data processing, law enforcement data, and fundamental rights protections. The authority has experience with data protection impact assessments (DPIAs), which share significant methodology with the fundamental rights impact assessments (FRIAs) required under the AI Act for high-risk systems. Read our guide on fundamental rights impact assessments for details on FRIA requirements.

The overlap between GDPR and the AI Act is significant in these domains. Biometric processing already requires GDPR Article 9 safeguards. Law enforcement data processing is governed by the Law Enforcement Directive (LED). ANSPDCP enforces both, so adding AI Act oversight is a logical extension.

ADR: The Notification Authority

The Agency for Digital Romania (ADR) has a specific, limited role: it acts as the notification authority for conformity assessment bodies.

What this means

Under the AI Act, certain high-risk AI systems must undergo conformity assessment by independent third-party bodies. ADR will be responsible for:

  • Evaluating and designating these conformity assessment bodies (called "notified bodies")
  • Monitoring that notified bodies maintain their competence and independence
  • Notifying the European Commission of designated bodies

If you are a conformity assessment organization looking to become a notified body for AI Act purposes in Romania, ADR is your point of contact.

For most AI providers and deployers, ADR's role is indirect. You will interact with the notified bodies themselves, not with ADR.

ASRO and CT 401: Standardization

ASRO (Romanian Standardization Association) hosts CT 401, the technical committee working on AI standardization. CT 401 mirrors the work of ISO/IEC JTC 1/SC 42 (Artificial Intelligence) at the international level and CEN-CENELEC JTC 21 at the European level.

Why standardization matters for compliance

The AI Act relies heavily on harmonized European standards. When CEN-CENELEC publishes harmonized standards for AI (expected throughout 2026-2027), compliance with those standards will create a "presumption of conformity" with the regulation. In plain terms: if you follow the standard, you are presumed to comply with the legal requirements.

CT 401 ensures Romania participates in developing these standards and that Romanian organizations have access to them through ASRO. If you want to track which standards are coming and how they will affect your compliance obligations, CT 401's work program is worth following.

What to Expect from Inspections

The AI Act gives market surveillance authorities significant powers. Here is what an ANCOM (or sector-specific) inspection might look like:

Document requests. Authorities can request access to your technical documentation, conformity assessments, quality management system records, training data documentation, and post-market monitoring logs. If you do not have these ready, you have a problem.

System access. Inspectors can request access to the AI system itself, including source code "where necessary to assess conformity." This is not a blanket right to your IP - it is limited to what is needed for enforcement. But it exists.

Testing. Authorities can test AI systems in real or simulated conditions to verify compliance claims. If your conformity declaration says your system has a certain accuracy level or bias threshold, they can verify that.

Corrective measures. Non-compliant systems can be ordered off the market, required to be modified, or restricted in their use. Authorities can also issue public warnings.

Fines. The AI Act sets maximum fines at 35 million EUR or 7% of global turnover (for prohibited practices), 15 million EUR or 3% (for other violations), and 7.5 million EUR or 1% (for providing incorrect information to authorities). Romania's implementing legislation will set the specific fine ranges within these EU maximums.
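The fine tiers above follow a "fixed cap or percentage of turnover, whichever is higher" pattern. A minimal sketch of that arithmetic (the tier names and dict structure are our own; SME rules differ, where the lower of the two applies):

```python
# Illustrative sketch of the EU-level fine ceilings described above.
# Tiers: (fixed cap in EUR, fraction of global annual turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
    "other_violation": (15_000_000, 0.03),       # e.g. high-risk obligations
    "incorrect_information": (7_500_000, 0.01),  # misleading authorities
}

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    """Return the EU-level ceiling: fixed cap or turnover share, whichever is higher."""
    fixed_cap, pct = FINE_TIERS[violation]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with 1 billion EUR turnover running a prohibited practice:
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For large companies, the percentage branch usually dominates; for smaller ones, the fixed cap does. The actual Romanian fine ranges within these ceilings will come from the implementing legislation.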

How to Prepare Right Now

Do not wait for the first inspections to start. The prohibited practices provisions are already in force (since February 2, 2025). High-risk requirements apply from August 2, 2026. Here is what to do:

1. Map your AI systems

Inventory every AI system you develop, deploy, or distribute. Classify each one under the AI Act risk categories. If you are not sure how, start with our EU AI Act compliance guide.
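An inventory can be as simple as a structured record per system. A minimal sketch (the field names, example systems, and risk labels are hypothetical; your classification must follow the Act's actual criteria):

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high-risk"        # Annex I / Annex III systems
    LIMITED_RISK = "limited-risk"  # transparency obligations
    MINIMAL_RISK = "minimal-risk"  # no specific AI Act obligations

@dataclass
class AISystemRecord:
    name: str
    role: str      # "provider", "deployer", "distributor", "importer"
    purpose: str
    risk: RiskCategory

# Hypothetical inventory entries for illustration only
inventory = [
    AISystemRecord("cv-screener", "deployer", "candidate ranking", RiskCategory.HIGH_RISK),
    AISystemRecord("support-bot", "provider", "customer chat", RiskCategory.LIMITED_RISK),
]

high_risk = [s.name for s in inventory if s.risk is RiskCategory.HIGH_RISK]
print(high_risk)  # ['cv-screener']
```

Once every system has a record and a risk label, the rest of the compliance work (documentation, FRIA, regulator mapping) can be driven from this list.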

2. Check if you run prohibited AI

Review the Article 5 prohibited practices list against your current AI deployments. Social scoring, manipulative AI, certain biometric applications - if any of these match what you are running, you need to stop immediately. The prohibition is already active.

3. Prepare documentation for high-risk systems

For any system classified as high-risk, you need: technical documentation (Article 11), risk management system (Article 9), data governance measures (Article 10), human oversight provisions (Article 14), accuracy/robustness/cybersecurity documentation (Article 15), and quality management system (Article 17).
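The documentation set above lends itself to a simple gap-check per high-risk system. A sketch (the article references come from the list above; the dict structure and function are our own):

```python
# Required documentation for a high-risk system, keyed by AI Act article.
REQUIRED_DOCS = {
    "Article 9": "risk management system",
    "Article 10": "data governance measures",
    "Article 11": "technical documentation",
    "Article 14": "human oversight provisions",
    "Article 15": "accuracy/robustness/cybersecurity documentation",
    "Article 17": "quality management system",
}

def missing_docs(prepared: set) -> list:
    """List the required items not yet covered for a high-risk system."""
    return [f"{art}: {doc}" for art, doc in REQUIRED_DOCS.items()
            if art not in prepared]

# A system with only risk management and technical documentation in place:
for gap in missing_docs({"Article 9", "Article 11"}):
    print(gap)
```

Running the check per inventoried system gives you a concrete remediation list well before the August 2, 2026 deadline.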

4. Know your regulator

Based on the designations above, identify which authority oversees your AI systems. Financial sector? ASF or BNR. Biometric or law enforcement? ANSPDCP. Everything else? ANCOM. Knowing who to expect - and who to contact with questions - reduces uncertainty.
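The routing logic above is essentially a lookup table with ANCOM as the default. A hypothetical helper (the domain labels are our own shorthand, not official categories, and overlap cases still need coordination as discussed later):

```python
# Map an AI system's sector to its likely Romanian oversight authority,
# mirroring the March 12 designations described in this article.
AUTHORITY_MAP = {
    "banking": "BNR",
    "payments": "BNR",
    "insurance": "ASF",
    "capital_markets": "ASF",
    "private_pensions": "ASF",
    "biometrics": "ANSPDCP",
    "law_enforcement": "ANSPDCP",
    "migration": "ANSPDCP",
    "justice": "ANSPDCP",
}

def likely_regulator(domain: str) -> str:
    # ANCOM is the default market surveillance authority for everything else
    return AUTHORITY_MAP.get(domain, "ANCOM")

print(likely_regulator("insurance"))        # ASF
print(likely_regulator("generic-chatbot"))  # ANCOM
```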

5. Start your fundamental rights impact assessment

If you deploy high-risk AI, Article 27 requires a fundamental rights impact assessment before putting the system into use. Do not leave this for last. Our FRIA guide walks through the process.

6. Get expert help

The regulation is 144 pages of dense legal text, with technical standards still being finalized. If you are uncertain about your classification or compliance status, talk to specialists who understand both the legal and technical requirements.

Romania vs Other EU Member States

Romania's approach - distributing AI oversight across existing sector regulators - aligns with most member states:

France designated CNIL (data protection) and DGCCRF (consumer protection/market surveillance), with sector regulators for financial and health AI.

Germany distributed authority across its federal structure, with BaFin (financial supervision) handling financial AI and state-level data protection authorities handling their domains.

Italy created AgID (Agency for Digital Italy) as the national coordination point, with AGCOM (communications), Banca d'Italia, CONSOB (securities), and the Garante Privacy splitting sector oversight.

Spain established AESIA (Spanish AI Supervisory Agency) as a dedicated new agency - one of the few member states to create a purpose-built AI regulator.

Romania's model is closest to Italy's: existing communications/telecom regulator as lead, with sector regulators handling their specialties. The risk is fragmentation - with multiple authorities involved, coordination becomes critical. The memorandum addresses this by designating ANCOM as the coordination point, but how well this works in practice remains to be seen.

What Is Still Missing

The March 12 memorandum designates authorities, but several pieces are still needed:

National implementing legislation. The AI Act is an EU regulation (directly applicable), but Romania still needs national legislation to set specific fine ranges, enforcement procedures, and inter-authority coordination mechanisms. This law has not been published yet.

Authority staffing and budget. ANCOM, ANSPDCP, and ASF need technical staff capable of evaluating AI systems. Hiring AI expertise is competitive, and government salaries may not attract the talent needed. How quickly these authorities build capacity will determine how effective enforcement actually is.

Guidance and FAQ. None of the designated authorities have published AI Act guidance for Romanian organizations. Expect this in H2 2026, but do not wait for it to start your compliance work.

Regulatory sandbox. The AI Act encourages member states to establish AI regulatory sandboxes. Romania has not announced one yet. Several other member states (Spain, France, the Netherlands) already have sandboxes operational or in planning.

Frequently Asked Questions

When will ANCOM start inspecting AI systems?

The prohibited practices provisions are already enforceable (since February 2, 2025), but ANCOM is still building its AI enforcement capacity. High-risk system requirements become applicable on August 2, 2026. Expect ANCOM to begin market surveillance activities for high-risk AI in late 2026 or early 2027, likely starting with complaints-driven investigations rather than proactive inspections.

Do I need to register my AI system with any Romanian authority?

High-risk AI systems must be registered in the EU database (Article 49) before being placed on the market. This is an EU-level registration, not a national one. You register at the EU database managed by the European Commission, not with ANCOM or other Romanian authorities directly. The registration requirement applies from August 2, 2026.

What if my AI system falls under multiple authorities?

This will happen. A bank using biometric identification for customer onboarding could fall under both BNR (banking) and ANSPDCP (biometric). The memorandum designates ANCOM as the coordination point to resolve overlaps. In practice, the sector-specific regulator (BNR, ASF) will likely take the lead, with ANSPDCP consulted on biometric-specific requirements.

Can Romanian authorities access my AI source code?

Article 74(13) gives market surveillance authorities the right to access source code upon a reasoned request, and only where it is necessary to assess conformity and other verification means have been exhausted. This is not routine - it is a measure for cases where compliance cannot be verified through documentation and testing alone. Authorities must protect trade secrets and confidential information. In practice, expect documentation reviews and system testing long before any source code request.


Romania now has its AI Act enforcement framework in place. The authorities are designated. The deadlines are fixed. What remains is execution - both from the regulators building their capacity, and from organizations building their compliance programs.

If you need help mapping your AI systems, assessing risk classifications, or preparing documentation before enforcement begins, reach out to our EU AI Act compliance team. We work with Romanian and EU organizations to turn regulatory requirements into practical compliance programs.

