
The EU Just Gave You 16 More Months for AI Compliance. Here's Why You Shouldn't Wait.

Petru Constantin
7 min read
#eu-ai-act #devidevs


The Delay Everyone Wanted

On March 13, 2026, the Council of the EU agreed its negotiating position on the Digital Omnibus proposal. The headline: high-risk AI obligations under Annex III could be pushed from August 2, 2026, to as late as December 2, 2027. Annex I systems might get until August 2028.

Sixteen extra months. That's what everyone in compliance circles has been asking for.

Here's the problem: this isn't law yet. The European Parliament's IMCO and LIBE committees are voting on their own version right now (scheduled for March 18, 2026). Then comes trilogue. Then comes final adoption. The Commission originally proposed the delay in November 2025. Four months later, it still hasn't been enacted.

If you're a CTO running AI systems in the EU and your compliance roadmap depends on this delay, you're betting your company on a legislative process that famously runs late.

The Real Reason for the Delay

Let's be honest about why the Commission proposed this extension. It's not because they think companies need more time to innovate. It's because the infrastructure isn't ready.

Harmonized standards aren't published. The first one, prEN 18286 for quality management systems, entered public inquiry in October 2025, roughly eight months behind the April 2025 target. Most Member States haven't designated their competent authorities. Notified bodies aren't designated in sufficient numbers to handle conformity assessments.

The Commission built a regulatory framework, then realized nobody built the plumbing. The Digital Omnibus is a plumber's delay, not a policy change.

This matters because the obligations themselves haven't changed. What you need to DO is exactly the same. The date you need to do it BY is what's shifting. And even that shift has a hard backstop: obligations kick in six months after the Commission confirms standards are ready, even if that confirmation lands well before the December 2027 date.

Meanwhile, Enforcement Has Already Started

While the legislative sausage-making continues, the AI Office is not waiting around.

January 8, 2026: the Commission ordered X to retain all Grok-related data and opened formal DSA infringement proceedings over non-consensual imagery and disinformation from "Spicy Mode." Potential fine: up to 6% of global turnover.

January 16, 2026: the Commission launched an ecosystem investigation into Meta's AI systems. Meta refused to sign the GPAI Code of Practice and is now under "closer scrutiny."

February 3, 2026: French prosecutors raided X's Paris offices as part of a criminal investigation into Grok's deepfake capabilities.

Prohibited AI practices have been enforceable since February 2, 2025. GPAI obligations since August 2, 2025. General-purpose AI providers are already subject to transparency requirements. The high-risk delay is about one slice of the Act, not all of it.

If your AI system does anything that touches the prohibitions or GPAI rules, you're already in scope. No delay saves you.

The Numbers That Should Worry You

According to the Cloud Security Alliance, over half of organizations lack systematic AI inventories. That's step one of compliance. You can't classify what you haven't cataloged.

About 40% of enterprise AI systems can't be cleanly mapped to the Act's risk tiers. So even companies that HAVE an inventory can't figure out which rules apply to them.

Notified bodies, the organizations authorized to perform third-party conformity assessments, are already booking into Q2 2026. A realistic compliance timeline runs 8-14 months from kickoff to conformity assessment. If you start today, you're looking at January to May 2027. If you wait for the delay to be confirmed, you're starting in H2 2026 at the earliest and finishing sometime in 2028, potentially after even the extended deadline.

The math is simple: 16 extra months of runway evaporate if you spend six of them waiting for confirmation and another three finding a notified body with availability.

What You Should Do With the Extra Time

Treat the delay as runway for better compliance, not as a reason to do nothing.

Month 1-2: AI System Inventory. Catalog every AI system in production and development. Include third-party AI tools your teams are using (yes, that ChatGPT wrapper your marketing team built counts). If you can't inventory your systems, nothing else matters.
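A minimal inventory entry can be sketched as a plain data record. The field names below are illustrative assumptions, not a schema the Act mandates; the point is that even a flat list answers the first compliance question.

```python
from dataclasses import dataclass

# Illustrative sketch of an AI system inventory entry.
# Field names are assumptions, not a schema mandated by the Act.
@dataclass
class AISystemRecord:
    name: str                   # internal system name
    owner: str                  # accountable team or person
    purpose: str                # what the system decides or generates
    provider: str               # "in-house" or the third-party vendor
    uses_personal_data: bool    # flags the GDPR/DPIA overlap early
    status: str = "production"  # "production", "development", "retired"
    annex_iii_candidate: bool = False  # filled in during classification

inventory = [
    AISystemRecord("cv-screening", "hr-tech", "ranks job applicants",
                   "in-house", uses_personal_data=True,
                   annex_iii_candidate=True),
    AISystemRecord("marketing-gpt-wrapper", "marketing", "drafts ad copy",
                   "OpenAI API", uses_personal_data=False),
]

# Even a flat list like this tells you how many systems exist,
# who owns each one, and which need classification next.
print(f"{len(inventory)} systems cataloged, "
      f"{sum(r.annex_iii_candidate for r in inventory)} flagged for classification")
```

Start this crude and tighten it later; a spreadsheet with the same columns works equally well for month one.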

Month 2-4: Risk Classification. Map each system against Annex III categories. The Commission missed its own February 2, 2026 deadline to publish detailed Article 6 guidance, so you'll need to work from the Annex directly. Get legal and technical people in the same room. Classification decisions require both.
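A first-pass triage against the Annex III areas can be automated as a keyword screen. The keyword lists below are illustrative assumptions; this is a screening aid to prioritize legal review, not the classification decision itself (the Article 6 carve-outs still apply).

```python
# First-pass triage against the Annex III high-risk areas.
# Keyword lists are illustrative assumptions; the actual classification
# decision needs legal and technical review together.
ANNEX_III_AREAS = {
    "biometrics": ["biometric", "face recognition", "emotion recognition"],
    "critical_infrastructure": ["power grid", "water supply", "traffic"],
    "education": ["admission", "exam scoring", "student assessment"],
    "employment": ["recruitment", "applicant", "promotion", "termination"],
    "essential_services": ["credit scoring", "insurance pricing", "benefits"],
    "law_enforcement": ["crime", "polygraph", "evidence"],
    "migration": ["visa", "asylum", "border"],
    "justice_democracy": ["judicial", "election"],
}

def triage(purpose: str) -> list[str]:
    """Return Annex III areas whose keywords appear in a system's stated purpose."""
    text = purpose.lower()
    return [area for area, keywords in ANNEX_III_AREAS.items()
            if any(k in text for k in keywords)]

print(triage("ranks job applicants for recruitment"))  # ['employment']
print(triage("generates weekly sales forecasts"))      # []
```

Anything the screen flags goes to the joint legal-technical review; anything it misses still needs a human pass, since purpose statements are often vaguer than the system's actual use.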

Month 4-8: Documentation and Technical Controls. Build your quality management system. Implement data governance measures (Article 10). Set up risk management processes (Article 9). Start logging (Article 12). Most of this is genuinely useful engineering practice, regardless of regulation.
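The Article 12 logging requirement, for example, can start as a structured, append-only event log. The sketch below is one possible shape, assuming a hash chain for tamper evidence; the Act requires logging appropriate to the system's risks, not this particular design.

```python
import json
import time
import hashlib

# Sketch of Article 12-style event logging: structured, timestamped,
# and tamper-evident via a running hash chain. The field set is an
# assumption, not a format the Act prescribes.
class AuditLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # sentinel for the first entry

    def record(self, system: str, event: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),
            "system": system,
            "event": event,
            "detail": detail,
            "prev": self._prev_hash,  # links each entry to the one before
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

log = AuditLog()
log.record("cv-screening", "inference", {"candidate_id": "anon-42", "score": 0.81})
log.record("cv-screening", "override", {"by": "reviewer-7", "reason": "manual check"})
print(len(log.entries), "events; chain intact:",
      log.entries[1]["prev"] == log.entries[0]["hash"])
```

In production you'd write to durable storage with retention controls rather than an in-memory list, but the structure, the timestamps, and the why-this-happened detail are the parts an auditor will actually ask about.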

Month 8-12: Conformity Assessment Preparation. Prepare your technical documentation package. If you need a notified body, book one early. Self-assessment systems need internal audit processes that can withstand scrutiny.

Month 12-16: Testing and Remediation. Run your conformity assessment. Fix what fails. Document what you've done and why. Register in the EU database.

This timeline assumes one pass with no major surprises. Reality usually adds 30-50% buffer time. Which means starting now gives you exactly enough time, not too much.

The EDPB Just Made It Harder

One thing nobody's talking about enough: the EDPB-EDPS Joint Opinion 1/2026 warned that the Digital Omnibus risks weakening fundamental rights protections. The data protection authorities are pushing back on proposals to relax special category data processing requirements for AI.

In practice, this means AI Act compliance and GDPR compliance are on a collision course. If your AI system processes personal data (and most do), you need a DPIA that accounts for both frameworks. Doing them separately is a recipe for contradictions. Doing them together takes longer but produces something coherent.

The joint guidelines on GDPR-AI Act overlap are expected sometime in 2026. When they drop, every company that already has their AI inventory and risk classification done will be weeks ahead of everyone else.

First Movers Win Contracts

This isn't just about avoiding fines (though fines of up to 7% of global revenue or €35 million tend to get board attention). Companies that demonstrate EU AI Act compliance early will have a competitive advantage in procurement.

Enterprise buyers are starting to include AI governance requirements in RFPs. Government contracts in the EU will require compliance evidence. Insurance carriers are introducing AI Security Riders that require documented adversarial red-teaming and risk assessments. No documentation means no coverage.

The delay is a gift, but only if you use it. Sixteen months of compliance runway is exactly what the doctor ordered for the 50% of organizations that haven't started. Sixteen months of "let's wait and see" is how you end up scrambling in Q3 2027 with no notified body availability and an incomplete inventory.

What We've Seen at DeviDevs

We've been doing AI system classification and compliance gap assessments for European companies since early 2025. The pattern is always the same: teams underestimate how many AI systems they have, overestimate how well-documented those systems are, and discover that the hard part isn't the regulation itself; it's the internal coordination required to comply.

If your team is staring at this delay and wondering whether to start or wait, start. The compliance work you do now is useful regardless of when the deadline lands. Good documentation, proper risk management, and systematic monitoring aren't regulatory overhead. They're engineering hygiene.

If you're dealing with this and don't know where to begin, we've been there. A 2-hour classification workshop is usually enough to tell you how much work you're actually looking at.


About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com
