
8 of 27 EU Countries Can Actually Enforce the AI Act. Here Is What Happens in the Other 19.

Petru Constantin
6 min read
#eu-ai-act#compliance#enforcement#regulation


The Regulation Is Live. The Enforcers Are Not.

On August 2, 2025, every EU member state was supposed to designate the national authorities responsible for enforcing the AI Act. That deadline passed seven months ago. According to a European Parliament Think Tank report published March 18, 2026, only 8 of the 27 member states have designated their single points of contact.

That means 19 countries, including Romania, Hungary, Czechia, and Bulgaria, have no operational AI Act enforcement authority. No one to notify. No one to complain to. No one checking whether your AI system is compliant.

And yet, the AI Act is an EU regulation, not a directive. It applies directly in all member states without national transposition. The obligations are live whether your country has an authority or not.

Who Is Ready and Who Is Not

The countries that moved fastest chose different models, but they moved.

Finland became the first EU member state with full AI Act enforcement powers on January 1, 2026. It chose a decentralized approach: Traficom (Transport and Communications Agency) coordinates, while 10 existing sector regulators handle their domains. A new Sanctions Board imposes fines above EUR 100,000. Finland's regulatory sandbox is on track for August 2026.

Spain designated AESIA (Spanish Agency for the Supervision of Artificial Intelligence) as its central authority, with 21 supporting authorities and a functioning regulatory sandbox already hosting 12 AI providers.

Italy enacted a national AI law effective October 10, 2025, designating AgID as notification authority and the National Cybersecurity Agency (ACN) as market surveillance authority.

Germany plans to use Bundesnetzagentur (Federal Network Agency) as its primary market surveillance authority, with BaFin covering financial services AI. But Germany's draft implementation law came only after the August 2025 deadline, meaning Germany officially missed it.

France is building a decentralized model with DGCCRF as single point of contact and CNIL handling prohibited AI practices in workplaces and education.

Then there are countries like Romania, where a government memorandum published March 12, 2026 proposes ANCOM as the central market surveillance authority, but it is a memorandum, not legislation. No authority is operational. No sandbox exists. Romanian companies deploying AI have zero domestic guidance.

Why This Matters for Your Company

Here is the uncomfortable part: the lack of a national authority does not reduce your obligations.

The AI Act is directly applicable. If you operate a high-risk AI system in credit scoring, recruitment, law enforcement, or medical devices, the requirements in Articles 9 through 15 apply to you right now. Technical documentation, risk management systems, human oversight, accuracy monitoring, all of it.

The penalty framework under Article 99 is equally clear:

  • Up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices (Article 5)
  • Up to EUR 15 million or 3% of global annual turnover, whichever is higher, for violating high-risk system obligations
  • Up to EUR 7.5 million or 1% of global annual turnover, whichever is higher, for supplying incorrect, incomplete, or misleading information

For SMEs and startups, the lower of the fixed amount or percentage applies. Still not a rounding error.
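The Article 99 logic above reduces to a small calculation: the higher of the fixed amount or the turnover percentage applies, except for SMEs, where the lower of the two applies. A minimal sketch, not legal advice; the function name is made up here, and the figures shown are the Article 5 tier from the list above:

```python
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float,
             sme: bool = False) -> float:
    """Illustrative Article 99 fine ceiling: the higher of the fixed
    amount or the turnover percentage, except for SMEs, where the
    lower of the two applies (Article 99(6))."""
    pct_based = turnover_eur * pct
    return min(fixed_eur, pct_based) if sme else max(fixed_eur, pct_based)

# Prohibited-practice tier (EUR 35m / 7%) for a EUR 1bn-turnover company:
cap = fine_cap(1_000_000_000, 35_000_000, 0.07)          # ~EUR 70 million
# Same tier for a EUR 20m-turnover SME: the lower figure applies
sme_cap = fine_cap(20_000_000, 35_000_000, 0.07, sme=True)  # ~EUR 1.4 million
```

Even at the SME discount, the exposure scales with turnover, which is why "we are small" is not a compliance strategy.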

The European Commission is watching member states that missed the August 2025 deadline, and infringement proceedings are expected. When those authorities eventually get stood up, they will need to demonstrate results. Companies that ignored compliance during the gap period will be the easiest targets.

The Digital Omnibus Makes This Worse, Not Better

If you have been following the Digital Omnibus proposal, you know the European Parliament voted on March 26, 2026 to begin trilogue negotiations with the Council. The proposed delays would push the high-risk deadline to December 2, 2027 for standalone Annex III systems and August 2, 2028 for embedded Annex I systems.

Some companies read these headlines and hit pause on compliance work. That is a mistake.

The Omnibus is a proposal, not law. Trilogue negotiations just started. The current regulation, with its August 2, 2026 enforcement date for high-risk systems and GPAI obligations, remains legally binding until the Omnibus is formally adopted and published in the Official Journal. That process could take months after trilogue concludes.

And here is the part nobody mentions: the Omnibus changes deadlines, not requirements. Whether you need to comply by August 2026 or December 2027, the conformity assessment, the technical documentation, the risk management system, it is all the same work. Starting later just means rushing later.

What to Do If Your Country Has No Authority

If you are in one of the 19 member states without an operational AI Act authority, you might think you have breathing room. You do not. Here is what actually changes and what stays the same:

What changes: You have no domestic sandbox to test in. You have no local authority to ask questions. You have no national guidance documents.

What stays the same: Every obligation in the AI Act. The EU-level AI Office enforces GPAI provisions directly. The European AI Board coordinates cross-border enforcement. And when your national authority eventually starts operating, it will review what you did during the gap, not just what you are doing on day one.

Practical steps:

  1. Run an AI inventory. Map every AI system you build, deploy, or procure. You cannot classify risk if you do not know what you have.
  2. Classify against Annex III. The high-risk categories are defined. Credit scoring, recruitment tools, biometric systems, medical AI. Check whether your systems fall within them.
  3. Start documentation now. Article 11 requires technical documentation for high-risk systems; Article 12 requires record-keeping. This takes months, not days.
  4. Check your GPAI exposure. If you deploy models from OpenAI, Anthropic, Google, or Meta, you have downstream obligations under Article 53. Your provider's compliance does not cover yours.
  5. Use Finland as your benchmark. Their decentralized model with sector-specific authorities is the most complete implementation so far. Their guidance from Traficom offers practical reference points.
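As a sketch of steps 1 and 2, an inventory can be as simple as tagging each system with its business area and flagging candidates against the Annex III categories. Everything here is illustrative: the class, field names, and keyword list are assumptions, the list is non-exhaustive, and real classification requires legal review, not string matching:

```python
from dataclasses import dataclass

# Hypothetical, non-exhaustive keywords loosely drawn from Annex III
# high-risk areas; a lawyer, not a lookup table, makes the final call.
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    purpose: str   # free-text description of intended use
    area: str      # business domain the system operates in

    def high_risk_candidate(self) -> bool:
        # Flag for human review if the area matches an Annex III category
        return self.area.lower() in ANNEX_III_AREAS

inventory = [
    AISystem("cv-screener", "ranks job applicants", "employment"),
    AISystem("chat-faq", "answers product questions", "customer support"),
]
flagged = [s.name for s in inventory if s.high_risk_candidate()]
# flagged contains only "cv-screener": recruitment is Annex III, FAQ bots are not
```

The point of the exercise is the output list: every flagged system needs the Article 11 documentation and risk management work, and you cannot produce that list without the inventory first.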

Where This Is Headed

The enforcement gap is temporary. Finland, Spain, Italy, Germany, and France are already operational or close. The Commission is pushing lagging states. When all 27 authorities are live, the first wave of enforcement will target obvious violations: prohibited practices, missing documentation, undisclosed AI-generated content.

Companies that started early will have documentation, risk assessments, and audit trails ready. Companies that waited will scramble to produce months of retroactive compliance work under time pressure.

The AI Act does not care whether your government was ready. It cares whether you are.


About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com
