Your AI Hiring Tool Is About to Become a Regulated Product
Your Resume Screener Has a Legal Problem
Here's a number that should keep HR-tech vendors up at night: 87% of companies now use AI-driven tools in recruitment. Resume rankers, interview scorers, candidate matching algorithms. They're everywhere.
And from August 2, 2026, every single one of them becomes a regulated high-risk AI system under the EU AI Act, Annex III.
This isn't a proposal. It's not a guideline. It's law. Annual third-party bias audits, full technical documentation, human oversight mechanisms, and transparency disclosures to every candidate your system evaluates. Non-compliance? Up to EUR 15 million or 3% of global annual turnover, whichever is higher.
Most companies using AI in hiring haven't started preparing. They have less than four months.
What Exactly Is Regulated
The EU AI Act is specific about what counts as high-risk in employment. Annex III, Section 4 covers AI systems used for:
- Placing targeted job advertisements to specific individuals
- Analysing and filtering job applications, including ranking resumes
- Evaluating candidates in interviews, tests, or assessments
- Making decisions on promotion, termination, or task allocation based on individual behaviour or personal traits
- Monitoring and evaluating performance of workers
If your AI touches any of these, you're in scope. And it doesn't matter where your company is headquartered. Any organization whose AI evaluates EU-based candidates falls under these rules, even if you're based in the US or India posting remote roles.
The scope is broader than most companies realize. That "smart" feature in your ATS that sorts applications by relevance? High-risk. The AI interview platform your recruiter loves? High-risk. The performance monitoring tool that flags underperformers? Also high-risk.
The Lawsuits Are Already Here
If you think this is theoretical, look at what's happening in courts right now.
In March 2025, the ACLU filed EEOC charges against Intuit and HireVue on behalf of a deaf and Indigenous applicant. The AI interview tool told her to "practice active listening." The ACLU argues the system was inaccessible to deaf applicants and performed worse on non-white candidates with different speech patterns.
Then in May 2025, a US federal court certified a collective action in Mobley v. Workday that could include millions of applicants over 40 who were rejected by Workday's AI screening system. A class action. For age discrimination. By an algorithm.
And a 2025 study by researchers at the University of Hong Kong found that five leading LLMs systematically scored resumes of female candidates higher than those of male candidates, and most awarded lower scores to Black male candidates than to white male candidates with identical qualifications.
These cases happened before EU AI Act enforcement even kicks in. After August 2, 2026, the regulatory pressure multiplies.
What You Actually Need to Do
The requirements are concrete. This isn't "we'll figure it out later" territory.
1. Bias testing and documentation
Your AI hiring tools need systematic bias testing with results documented and mitigation measures in place. Not a one-time check. Annual third-party audits covering training data composition, outcome data disaggregated by protected characteristics (gender, age, ethnicity, disability), and evidence of continuous monitoring.
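What "outcome data disaggregated by protected characteristics" looks like in practice can be sketched in a few lines. The example below computes per-group selection rates and adverse impact ratios from screening outcomes. Note the hedges: the 0.8 threshold is the US EEOC "four-fifths" heuristic, not a threshold the EU AI Act prescribes, and the group labels and data are illustrative, not a real dataset.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (protected_group, advanced_past_screen).
# In practice this comes from your ATS export, disaggregated by gender,
# age band, ethnicity, and disability status.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of candidates advanced, per group."""
    totals, passed = defaultdict(int), defaultdict(int)
    for group, advanced in records:
        totals[group] += 1
        passed[group] += advanced
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

rates = selection_rates(outcomes)
ratios = adverse_impact_ratios(rates)
for group, ratio in ratios.items():
    # 0.8 is the US four-fifths heuristic, used here only as an alarm bell.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} {flag}")
```

A real audit goes further (statistical significance, intersectional groups, continuous monitoring), but if you cannot produce even this table from your system's logs today, that is the gap to close first.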
2. Training data governance
The data your model was trained on must be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete" (Article 10). If your resume screener was trained on historical hiring data from a company that predominantly hired white males, you have a data problem that needs fixing before August 2026.
3. Human oversight
Humans must be able to override AI decisions. Not just theoretically. There needs to be a documented process where a human reviews and can reverse any AI-driven hiring decision. The AI recommends, the human decides. If your system auto-rejects candidates without human review, that's a compliance failure.
4. Transparency to candidates
Every candidate evaluated by your AI system has the right to know. What the AI does, how it works, how decisions are made. This isn't a privacy policy buried in your terms of service. It's an active disclosure obligation.
5. Conformity assessment
Before deploying (or continuing to deploy) a high-risk AI hiring system, you need a conformity assessment. This is the formal process that documents your system meets all EU AI Act requirements. Think of it as a technical audit that produces the paper trail regulators will ask for.
Most Companies Won't Be Ready
The European Commission was supposed to publish practical examples of high-risk vs. non-high-risk AI use cases by February 2, 2026. They missed their own deadline. Companies are left to interpret the rules on their own.
Meanwhile, the calendars of certified auditors are already filling up. If you haven't signed an engagement letter with someone who can do your conformity assessment, you're competing with every other company that waited until the last quarter.
Some companies are hoping the Digital Omnibus proposal will weaken or delay these requirements. It might. It also might not. It's a proposal that has to pass through Parliament and Council. Building your compliance strategy on regulatory changes that may never happen is a gamble with EUR 15 million stakes.
How DeviDevs Approaches This
We've done AI system risk classifications for companies that didn't know which of their systems were high-risk until we walked through them together. Most discover two or three systems they hadn't considered.
The gap we see repeatedly: companies understand the regulation in theory but struggle with the technical implementation. The bias testing infrastructure, the Article 12 logging requirements, the human oversight mechanisms, the conformity assessment documentation. That's the work that actually takes time, and it's where most compliance platforms fall short because they generate checklists, not infrastructure.
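Of the items above, the logging infrastructure is the most mechanical to start on. Article 12 requires high-risk systems to keep automatic records of events over their lifetime; a minimal sketch of an append-only, tamper-evident decision log follows. This is an illustration of the idea, not a certified record-keeping implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Append-only event log sketch for Article 12-style record-keeping.
    Each entry is hash-chained to the previous one so after-the-fact
    tampering is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return entry
```

In production this would write to durable, access-controlled storage with defined retention, but even this skeleton makes the compliance question concrete: every AI-driven screening event becomes a timestamped, chained record an auditor can walk.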
If you're using AI in hiring and you're not sure where you stand, we built a free risk assessment quiz that takes two minutes. It won't replace a proper conformity assessment, but it will tell you whether you should be worried.
The Clock Is Running
The companies that moved early on GDPR in 2018 had a competitive advantage. The ones that scrambled after enforcement started paid consultants triple the rate and still got fined.
The EU AI Act is following the same pattern. Less than four months to August 2, 2026. Your AI hiring tool is about to become a regulated product. The question is whether you'll be ready when the auditors come, or whether you'll be the cautionary tale your competitors reference in their own compliance documentation.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com