Your Biometric AI Compliance Problem Is Three Laws Deep
A Romanian Company Got Warned Before Turning the System On
Earlier this year, Romania's data protection authority (ANSPDCP) issued a warning to ARRISE LIVE SRL for planning to deploy facial recognition for employee access control. Not for misusing the system. Not for a data breach. For planning to deploy it.
The company already had card-based access. ANSPDCP ruled that adding biometric scanning failed the necessity and proportionality tests under GDPR. The system never processed a single face. The warning came anyway.
This is what biometric AI compliance looks like in the EU right now. You don't get caught after something goes wrong. You get caught before you start.
Three Laws, One Camera
If you're deploying any AI system that processes biometric data in a workplace, you're not dealing with one regulation. You're dealing with three, and they overlap in ways that make compliance genuinely complicated.
GDPR Article 9: Biometric Data Is Special Category
Under GDPR Article 9, biometric data used to identify a person is classified as special category data. Processing it is prohibited by default, with narrow exceptions. In a workplace context, consent is nearly impossible to rely on because regulators across the EU consistently rule that the employer-employee power imbalance makes consent non-voluntary.
That leaves you with "substantial public interest" or explicit legal authorization, neither of which covers "we want faster badge scanning." The ARRISE LIVE case is a textbook example: card access already worked, so biometric access failed the necessity test before it even touched proportionality.
Mercadona, the Spanish supermarket chain, learned this the expensive way. They deployed facial recognition across 48 stores to identify banned individuals and caught a EUR 2.52 million fine from Spain's AEPD. The system scanned every customer who walked through the door, including children and employees. No valid legal basis under Article 9.
EU AI Act Article 5: Some Biometric AI Is Flat-Out Banned
The EU AI Act's prohibited practices have been enforceable since February 2, 2025. Two of them hit workplace biometric AI directly:
Emotion recognition in the workplace (Article 5(1)(f)): AI systems that infer emotions from biometric signals in work or education settings are banned outright. The only exceptions are medical and safety contexts, like detecting driver fatigue. "Measuring employee engagement" or "monitoring stress levels" with facial analysis cameras? Prohibited. Fines: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Biometric categorization by protected attributes (Article 5(1)(g)): AI that uses biometric data to deduce race, political opinions, trade union membership, religious beliefs, or sexual orientation is also prohibited. This catches more systems than people realize. If your access control vendor's algorithm has any categorization features beyond pure identity verification, you have a problem.
Annex III: Everything Else Is High-Risk
Workplace biometric AI that isn't outright prohibited is almost certainly classified as high-risk under Annex III. Two categories apply:
Category 1 (Biometrics): Remote biometric identification systems and biometric categorization systems are explicitly listed. Pure verification (confirming someone is who they claim to be) gets an exception, but anything beyond one-to-one matching falls within the high-risk category.
Category 4 (Employment): AI used for recruitment screening, performance evaluation, task allocation, promotion decisions, and workplace monitoring that informs management actions is high-risk regardless of whether it uses biometrics.
If your facial recognition system controls building access AND feeds data into attendance reports that managers use for performance reviews, you've hit both categories. From August 2, 2026, that means mandatory risk management systems, technical documentation, conformity assessments, human oversight mechanisms, and registration in the EU database. Before you deploy, not after.
What to Actually Do About It
If you have biometric AI systems in any EU workplace, here's where to start:
1. Map what you have. List every system that processes biometric data: facial recognition, fingerprint scanners with AI matching, voice recognition, gait analysis. Include vendor systems you didn't build yourself. If it runs on your premises and processes employee biometrics, it's your compliance responsibility.
2. Check the prohibition list first. Any emotion recognition in the workplace? Kill it. Any biometric categorization beyond identity verification? Kill it. These have been illegal since February 2025. You cannot retrofit compliance onto a prohibited system.
3. Run the necessity test. For every remaining system, ask: does a non-biometric alternative exist that achieves the same purpose? If yes, the ARRISE LIVE precedent tells you exactly where GDPR regulators will land. Cards, PINs, and tokens are all less intrusive. The burden of proof is on you to show biometrics are necessary, not just convenient.
4. Prepare for August 2026. Any biometric system that survives steps 1-3 is almost certainly Annex III high-risk. Start your conformity assessment now. You need a risk management system (Article 9), data governance documentation (Article 10), technical documentation (Article 11), human oversight design (Article 14), and accuracy/robustness testing (Article 15). This is months of work, not weeks.
5. Check whether you need a Fundamental Rights Impact Assessment. Article 27 requires certain deployers of high-risk AI (public bodies and private entities providing public services, among others) to complete an FRIA before first use and to notify the national market surveillance authority of the results. In Romania, that's ANCOM.
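The triage in steps 1 through 4 can be sketched as a simple decision function. This is an illustrative sketch, not legal advice: the field names, categories, and output labels are assumptions made for the example, and a real assessment needs a lawyer, not a script.

```python
from dataclasses import dataclass

@dataclass
class BiometricSystem:
    """Minimal inventory record for one workplace biometric AI system (step 1)."""
    name: str
    infers_emotions: bool             # emotion recognition in the workplace
    categorizes_attributes: bool      # deduces race, beliefs, union membership, etc.
    one_to_one_only: bool             # pure verification (stored template vs. live sample)
    non_biometric_alternative: bool   # cards, PINs, or tokens achieve the same purpose
    feeds_management_decisions: bool  # output reaches attendance or performance reviews

def triage(s: BiometricSystem) -> str:
    # Step 2: prohibited practices (AI Act Article 5) cannot be retrofitted.
    if s.infers_emotions or s.categorizes_attributes:
        return "PROHIBITED: decommission (enforceable since February 2025)"
    # Step 3: the GDPR necessity test; a less intrusive alternative sinks the system.
    if s.non_biometric_alternative:
        return "FAILS NECESSITY: likely unlawful under GDPR Article 9"
    # Step 4: Annex III high-risk classification (biometrics and/or employment).
    if not s.one_to_one_only or s.feeds_management_decisions:
        return "HIGH-RISK: full Annex III conformity work before August 2026"
    return "VERIFICATION-ONLY: lower risk, but still document the GDPR legal basis"

# Example: a verification-only door scanner with no workable non-biometric fallback.
door = BiometricSystem("server-room door scan", False, False, True, False, False)
print(triage(door))  # -> VERIFICATION-ONLY: lower risk, but still document the GDPR legal basis
```

The ordering matters: a prohibited system is dead regardless of how necessary it seems, and a system that fails the necessity test never gets to argue about conformity assessments, which mirrors how the three layers actually stack.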
How DeviDevs Approaches Biometric AI Compliance
We've seen this pattern at multiple companies: someone buys an access control system with "AI-powered facial recognition" on the box, IT installs it, and nobody asks whether it's legal until a regulator knocks. By then you're explaining why your badge readers need to scan faces when the old cards worked fine.
The fix isn't complicated, but it requires someone who understands all three regulatory layers simultaneously. A GDPR-only assessment misses the AI Act prohibitions. An AI Act-only assessment misses GDPR's special category rules. Most compliance consultants know one framework. Biometric AI compliance requires knowing how they intersect.
Not sure whether your workplace AI systems are affected? Our EU AI Act risk assessment takes two minutes and tells you which of your systems fall under prohibited, high-risk, or limited-risk categories.
The Lesson From Romania's First Warning
ANSPDCP didn't wait for a breach. They didn't wait for employee complaints to pile up. They acted on the plan to deploy, not the deployment itself. That's the regulatory direction across the EU: proactive enforcement, not reactive.
If you're running facial recognition in a European workplace right now, or if your access control vendor is pitching you an "AI upgrade," pause. Check the prohibition list. Run the necessity test. And if the system survives both, prepare for the full Annex III compliance process before August 2026.
The companies that figure this out now avoid the fines. The companies that wait become the case studies.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com