EU AI Act High-Risk Classification: The Commission Missed Its Deadline. Will You Miss Yours?
The People Writing the Rules Cannot Meet Their Own Deadlines
The European Commission was supposed to publish practical guidelines for classifying high-risk AI systems by February 2, 2026. Article 6(5) of the AI Act required it. The guidelines were supposed to include concrete examples of what counts as high-risk and what does not, and a template for providers' post-market monitoring plans was due by the same date.
They missed the deadline. As of mid-March 2026, the guidelines still have not been published. The Commission said it is "integrating feedback" and plans to release a draft "for more feedback." More feedback on the feedback.
Meanwhile, your compliance deadline is August 2, 2026. Less than five months away. The Commission gets to be late. You do not.
Why Classification Is the Hardest Part
Most companies think EU AI Act compliance starts with documentation. It does not. It starts with one question: is your AI system high-risk?
Get that wrong and everything downstream is wrong. Either you over-comply (burning money on documentation for a system that is not high-risk) or you under-comply (running an unregistered high-risk system, with fines under Article 99 of up to €15 million or 3% of global annual turnover, whichever is higher).
Here is why classification is harder than it sounds.
Article 6 has two paths to high-risk. The first path (Article 6(1)) covers AI systems that are safety components of products already regulated by EU legislation listed in Annex I, like medical devices, machinery, and vehicles. If your AI is embedded in a regulated product and the product requires a third-party conformity assessment, your AI is high-risk. Clear enough.
The second path (Article 6(2)) is where it gets messy. This covers standalone AI systems in the eight Annex III categories: biometrics, critical infrastructure, education, employment, essential services access, law enforcement, migration, and administration of justice. If your AI system falls into one of these categories, it is high-risk unless you can claim the Article 6(3) exception.
The Article 6(3) exception is where companies get stuck. An Annex III system is NOT high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights AND meets one of four conditions: it performs a narrow procedural task, it improves the result of a previously completed human activity, it detects decision-making patterns without replacing human assessment, or it performs a preparatory task for a human decision.
But here is the catch: the exception never applies if the AI system profiles natural persons. And you must document your reasoning and register the system regardless. If a national authority disagrees with your self-assessment, you are liable.
Without the Commission's examples, companies are making these judgment calls blind. Is your AI-powered CV screener a "narrow procedural task" because it just sorts resumes alphabetically? Probably not. Is your credit scoring model doing "preparatory work" for a human decision if the human approves 98% of the model's recommendations without review? Almost certainly not.
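To make the decision order concrete, here is a minimal sketch of the Article 6 logic in Python. The class names, fields, and category labels are our own shorthand, not terms from the regulation, and the function is a thinking aid, not a legal determination.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class AnnexIIICategory(Enum):
    """The eight Annex III areas, in paraphrased labels (not legal text)."""
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION = auto()
    JUSTICE = auto()


@dataclass
class AISystem:
    name: str
    # Article 6(1) path: safety component of a product regulated under Annex I.
    safety_component_of_annex_i_product: bool = False
    product_requires_third_party_assessment: bool = False
    # Article 6(2) path: standalone use in one or more Annex III areas.
    annex_iii_categories: set[AnnexIIICategory] = field(default_factory=set)
    # Article 6(3) exception inputs -- your documented self-assessment.
    profiles_natural_persons: bool = False
    poses_significant_risk_of_harm: bool = True   # default conservatively
    exception_condition_met: bool = False         # any of the four conditions


def is_high_risk(system: AISystem) -> bool:
    """Rough decision order of Article 6. A sketch, not legal advice."""
    # Path 1: Article 6(1), AI embedded in regulated products.
    if (system.safety_component_of_annex_i_product
            and system.product_requires_third_party_assessment):
        return True

    # Path 2: Article 6(2), standalone Annex III systems.
    if system.annex_iii_categories:
        # Profiling of natural persons rules out the exception entirely.
        if system.profiles_natural_persons:
            return True
        # Article 6(3): not high-risk only if there is no significant risk
        # of harm AND at least one of the four conditions applies.
        if (not system.poses_significant_risk_of_harm
                and system.exception_condition_met):
            return False
        return True

    return False


# Example: the CV screener from above profiles applicants, so the
# exception never comes into play.
cv_screener = AISystem(
    name="cv-screener",
    annex_iii_categories={AnnexIIICategory.EMPLOYMENT},
    profiles_natural_persons=True,
)
assert is_high_risk(cv_screener)
```

Writing the test down like this forces the uncomfortable inputs into the open: you cannot evaluate it without deciding, explicitly, whether the system profiles people and whether an exception condition genuinely holds.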
The Standards Are Also Late
Classification is only step one. Once you know your system is high-risk, you need to comply with Articles 8 through 15: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
The compliance path the AI Act envisions relies on harmonized standards developed by CEN/CENELEC. These standards were originally due by April 2025. The Commission revised the deadline to August 2025. CEN/CENELEC missed that too. In October 2025, both technical boards adopted emergency measures to accelerate delivery, targeting Q4 2026 for the first standards.
Q4 2026. That is after the August 2, 2026 compliance deadline.
Without harmonized standards, there is no presumption of conformity to lean on. Providers of most Annex III systems still self-assess under the Annex VI internal control procedure, which means interpreting the regulation text yourself and building your own compliance framework. Biometric systems under Annex III point 1 fare worse: without applicable harmonized standards they must follow the Annex VII procedure with a notified body, and notified bodies for AI barely exist yet.
How to Handle High-Risk Classification Without Commission Guidelines
Waiting for Brussels to sort itself out is a strategy. A bad one. Here is what works instead.
Step 1: Build your AI inventory. You cannot classify what you have not catalogued. List every AI system your organization develops, deploys, or uses. Include vendor-provided AI tools. According to compliance surveys, over half of organizations lack a systematic AI inventory. If you do not know what AI you are running, you cannot assess its risk.
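The Act does not prescribe an inventory format, so pick something your teams will actually maintain. As a starting point, here is a minimal sketch of one inventory record in Python; the field names are our suggestion, not anything the regulation mandates.

```python
from dataclasses import dataclass


@dataclass
class InventoryEntry:
    """One row of an AI system inventory. Fields are illustrative."""
    system_name: str
    owner_team: str
    purpose: str                # what decision or output it produces
    provider: str               # "in-house" or the vendor's name
    role_under_the_act: str     # "provider", "deployer", or both
    affects_natural_persons: bool
    annex_iii_candidate: bool   # feeds Step 2 below
    notes: str = ""


inventory = [
    InventoryEntry(
        system_name="cv-screener",
        owner_team="talent-acquisition",
        purpose="ranks incoming job applications",
        provider="in-house",
        role_under_the_act="provider",
        affects_natural_persons=True,
        annex_iii_candidate=True,
    ),
]
```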
Step 2: Map each system against Annex III categories. For each AI system, ask: does it operate in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice? If yes, it is a candidate for high-risk classification. If no, it is likely minimal or limited risk (but check Article 50 transparency obligations too; those also apply from August 2, 2026).
Step 3: Apply the Article 6(3) exception test honestly. For each Annex III candidate, evaluate the four exception conditions. Document your reasoning. Be conservative. If you are arguing that your AI system is "just a preparatory task" but humans rubber-stamp its output, that argument will not hold. If the system profiles people, the exception does not apply at all.
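Because the Act requires you to document this assessment before the system goes to market (Article 6(4)), decide up front what each exception claim must record. Here is a sketch of one possible record; the fields are illustrative and the example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ExceptionAssessment:
    """A documented Article 6(3) self-assessment. The structure is our
    suggestion; the Act requires the documentation, not this format."""
    system_name: str
    annex_iii_category: str
    profiles_natural_persons: bool   # if True, stop: the system is high-risk
    significant_risk_of_harm: bool
    condition_claimed: str           # e.g. "narrow procedural task"
    reasoning: str                   # why the condition holds in practice
    evidence: list[str]              # review-rate stats, audit samples, etc.
    assessed_by: str
    assessed_on: date


# Hypothetical example of a claim that might hold up.
assessment = ExceptionAssessment(
    system_name="loan-doc-checker",
    annex_iii_category="access to essential services",
    profiles_natural_persons=False,
    significant_risk_of_harm=False,
    condition_claimed="narrow procedural task",
    reasoning="Flags missing documents in an application file; it does not "
              "score applicants or influence the credit decision.",
    evidence=["system design doc", "sampled output audit"],
    assessed_by="compliance@example.com",
    assessed_on=date(2026, 3, 20),
)
```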
Step 4: Start technical documentation now. For systems you classify as high-risk, Articles 8 through 15 require extensive documentation: a risk management system (Article 9), data governance practices (Article 10), technical documentation per Annex IV (Article 11), logging capabilities (Article 12), instructions for use (Article 13), human oversight measures (Article 14), and accuracy, robustness, and cybersecurity requirements (Article 15). This is months of work, not weeks.
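A simple per-system tracker keeps that work visible instead of living in someone's head. A sketch of one way to lay it out; the labels paraphrase the articles above, and the structure is our suggestion rather than anything the Act prescribes.

```python
# The Article 9-15 workstreams for one high-risk system.
REQUIREMENTS = {
    "art_09_risk_management": "risk management system",
    "art_10_data_governance": "data and data governance",
    "art_11_technical_documentation": "technical documentation (Annex IV)",
    "art_12_record_keeping": "logging and record-keeping",
    "art_13_transparency": "instructions for use and transparency to deployers",
    "art_14_human_oversight": "human oversight measures",
    "art_15_accuracy_robustness": "accuracy, robustness, and cybersecurity",
}


def new_compliance_tracker(system_name: str) -> dict:
    """Start an empty tracker for one system; assign owners as work begins."""
    return {
        "system": system_name,
        "items": {key: {"owner": None, "status": "not_started"}
                  for key in REQUIREMENTS},
    }
```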
Step 5: Do not wait for harmonized standards. Use existing frameworks as scaffolding. ISO/IEC 42001 (AI management systems) maps well to the AI Act's quality management requirements. The NIST AI Risk Management Framework covers risk assessment. The AI Act's own Article text plus recitals provide enough specificity to build a working compliance framework. You can align with harmonized standards when they eventually arrive.
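In practice, that scaffolding can start as a plain map from AI Act workstream to the framework you lean on while the harmonized standards are pending. A rough, partial sketch; the pairings restate the mapping described above, and none of this grants a presumption of conformity.

```python
# Interim scaffolding: which existing framework informs each workstream
# while harmonized standards are pending. Indicative only; aligning with
# these frameworks does not, by itself, demonstrate AI Act conformity.
SCAFFOLDING = {
    "risk management (Article 9)": ["NIST AI RMF"],
    "quality management (Article 17)": ["ISO/IEC 42001"],
    # Extend with your own mappings for Articles 10 through 15 as you
    # build out the framework from the article text and recitals.
}
```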
The Omnibus Will Not Save You
You may have heard that the EU is delaying the high-risk deadline. The European Parliament's IMCO/LIBE committees adopted a negotiating mandate on March 18, 2026, and the Council adopted its position on March 13. Both propose pushing standalone high-risk compliance to December 2, 2027.
But this is a proposal, not a law. Trilogue negotiations between Parliament, Council, and Commission have not started. The fastest realistic adoption is late 2026 or early 2027. Until the Digital Omnibus passes and enters into force, August 2, 2026 remains the legally binding deadline.
Betting your compliance timeline on politicians finishing on time is not a strategy anyone with enterprise risk management experience would recommend. Even if the deadline moves, the requirements remain identical. Every day you spend on classification and documentation now is a day you do not spend scrambling later.
How DeviDevs Approaches This
We run AI system classification workshops with engineering and legal teams. About four hours per system gives you a clear high-risk or not-high-risk determination, documented to the standard the regulation requires. We use Annex III criteria and Article 6(3) exception conditions directly, without waiting for Commission guidelines that may never match the specificity your system needs.
If you have been stalling because the Commission missed its own deadline, that is understandable. But the regulation does not stall with them.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com