AI Act for HR: What Romanian Employers Must Do Before August 2026
If your company uses AI anywhere in recruitment or people management, you are probably running a high-risk AI system under EU law. Not "maybe." Not "depends on interpretation." The EU AI Act explicitly lists employment, worker management, and access to self-employment as a high-risk domain. Full stop.
This is not a theoretical problem. Romanian employers are adopting AI-driven HR tools at speed - CV screening platforms, chatbot-based interviews, automated performance scoring, video analysis for candidate assessment. Most of these tools, as deployed today, would fail an EU AI Act compliance audit.
The deadline for high-risk AI system compliance is August 2, 2026. That is less than six months away. Here is what the law actually requires and what you need to do about it.
Which HR tools are classified as high-risk?
Article 6 of the EU AI Act, combined with Annex III point 4, defines the high-risk category for employment. The regulation covers AI systems intended to be used for:
Recruitment and selection:
- CV screening and filtering tools (including keyword-based and ML-ranked systems)
- Chatbot interviews that score or rank candidates
- Video interview analysis (facial expression, tone, micro-expression detection)
- Automated candidate sourcing that ranks or filters applicants
- Psychometric or personality assessment tools powered by AI
Workforce management:
- Performance scoring and rating systems
- Promotion and task allocation algorithms
- Monitoring tools that profile employee behavior patterns
- Workload distribution systems using predictive models
- Automated termination or disciplinary recommendation systems
Access to self-employment and contracts:
- Platform algorithms that determine access to gig work
- AI systems that assess freelancer eligibility or ranking
The key word in the regulation is "intended to be used." If the vendor designed it for HR decisions - or if you deploy a general-purpose AI tool specifically for HR decisions - it qualifies. Feeding CVs into ChatGPT and using the output to shortlist candidates? That is a high-risk deployment.
The 70% problem for Romanian employers
A 2025 survey by PwC Romania found that over 70% of medium and large Romanian companies have adopted at least one AI-powered HR tool. The most common: CV screening (used by 45% of surveyed companies), followed by chatbot-based initial interviews (28%) and performance analytics dashboards with predictive features (22%).
Most of these deployments happened without any risk assessment. The tools were purchased as SaaS products, integrated by HR teams, and never evaluated for bias, documentation, or regulatory compliance.
As JURIDICE.ro highlighted in their analysis "AI in resurse umane," the legal exposure is significant. But the legal angle is only half the problem. The technical requirements - bias testing, monitoring pipelines, documentation systems - are where most companies will struggle.
What the law actually requires
High-risk AI systems under the EU AI Act must meet requirements across six areas. Here is what each means in practical HR terms.
1. Risk management system (Article 9)
You need a documented, living risk management process - not a one-time assessment. For HR AI, this means:
- Identify risks: What happens when the CV screener has a false negative? What demographic groups might be disproportionately filtered out? What if the chatbot interview scores penalize non-native speakers?
- Estimate and evaluate: Quantify the risks. Run the tool against diverse candidate pools. Measure disparate impact rates.
- Mitigate: Implement controls. Set minimum thresholds for human review. Create escalation paths for edge cases.
- Monitor: This is ongoing. Not a one-time report filed in a drawer.
The risk management system must be updated throughout the AI system's lifecycle. Every time the vendor pushes a model update, your risk assessment needs revisiting.
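One way to keep that assessment "living" is to store the risk register as structured data instead of a static document, so every entry is tied to a model version and a review date. A minimal sketch in Python, assuming a simple in-house register - the RiskEntry fields are illustrative, not terminology from the Act:

```python
# Illustrative only: a minimal risk-register entry for an HR AI deployment.
# Field names are assumptions, not terminology mandated by the AI Act.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str                  # internal identifier
    description: str              # what can go wrong
    affected_groups: list[str]    # demographic groups potentially impacted
    likelihood: str               # "low" / "medium" / "high"
    severity: str                 # "low" / "medium" / "high"
    mitigations: list[str]        # controls currently in place
    model_version: str            # vendor model version the assessment refers to
    last_reviewed: date           # revisit after every vendor model update

register = [
    RiskEntry(
        risk_id="CV-SCREEN-001",
        description="Screener penalises career gaps, disproportionately affecting parents returning from leave",
        affected_groups=["gender", "age"],
        likelihood="medium",
        severity="high",
        mitigations=["human review of all rejections", "monthly disparate-impact check"],
        model_version="vendor-model-2.4",
        last_reviewed=date(2026, 3, 1),
    ),
]
```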
2. Data governance (Article 10)
Training, validation, and testing datasets must meet quality criteria. For HR tools, this means:
- Relevance: Is the training data representative of the Romanian labor market? A model trained on US resume patterns may not work for Romanian CVs.
- Completeness: Does the dataset cover all demographic groups present in your applicant pool?
- Bias examination: Have you tested for disparate impact across protected characteristics - gender, age, ethnicity, disability?
- Data documentation: You need to know what data the AI was trained on. If the vendor cannot tell you, that is a red flag.
Most SaaS HR tools use proprietary training data. Under the AI Act, deployers (that is you, the employer) must ensure data quality. If your vendor cannot provide data governance documentation, you have a compliance gap.
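If your vendor does share dataset documentation, or you fine-tune on your own historical hiring data, a basic first check is whether every demographic group in your applicant pool is represented at all. A minimal sketch, assuming the data sits in a pandas DataFrame - the column names are placeholders for whatever your records actually contain:

```python
# Minimal sketch: check demographic coverage of a training/validation dataset.
# Column names ("gender", "age_band") are placeholders, not a prescribed schema.
import pandas as pd

def coverage_report(df: pd.DataFrame, group_cols: list[str]) -> dict[str, pd.Series]:
    """Share of records per group, so under-represented groups become visible."""
    return {col: df[col].value_counts(normalize=True) for col in group_cols}

# Toy example
applicants = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "M", "F"],
    "age_band": ["25-34", "35-44", "25-34", "45-54", "25-34", "25-34", "55+", "35-44"],
})
for col, shares in coverage_report(applicants, ["gender", "age_band"]).items():
    print(col)
    print(shares.round(2))
```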
3. Technical documentation (Article 11)
This is the part that catches most companies off guard. You need comprehensive technical documentation covering:
Required documentation:
- General description of the AI system
- Detailed description of elements and development process
- Monitoring, functioning, and control mechanisms
- Risk management information
- Description of changes throughout the lifecycle
- Performance metrics and benchmarks
- Data governance measures taken
- Foreseeable misuse scenarios and preventive measures
- Human oversight measures
- Expected system lifetime and maintenance schedule

For a bought SaaS product, much of this should come from the vendor. But the deployer documentation - how you use it, what decisions it influences, what oversight you have - is entirely your responsibility.
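For that deployer side, some teams find it easier to keep the description as structured data next to the vendor's files, so it gets updated when the deployment changes. A purely illustrative sketch - none of these field names come from Article 11:

```python
# Illustrative deployer-side record; fields are assumptions, not an Article 11 template.
deployer_record = {
    "system": "CV screening SaaS (hypothetical vendor)",
    "intended_use": "pre-rank applicants for recruiter review",
    "decisions_influenced": ["shortlisting"],
    "human_oversight": "recruiter reviews every rejection before it is sent",
    "known_limitations": ["trained mostly on English-language CVs"],
    "vendor_docs_received": ["technical documentation", "instructions for use"],
}
```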
4. Record-keeping and logging (Article 12)
High-risk AI systems must automatically log events during operation. For HR tools, relevant logs include:
- Every candidate screening decision (accepted, rejected, scored)
- The input data used for each decision
- The model version active at the time of the decision
- Any human override of the AI recommendation
- System performance metrics over time
These logs must be retained for a period appropriate to the AI system's intended purpose - for deployers, the Act sets a floor of six months. For recruitment decisions, align retention with your GDPR data retention policies (typically 6-12 months for unsuccessful candidates in Romania).
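A minimal sketch of what such a decision log could look like in practice, assuming an append-only JSON Lines file - the log_decision helper and its field names are illustrative, not a format prescribed by the Act or by any vendor:

```python
# Minimal sketch: append-only decision log for an AI-assisted screening step.
# Field names and file layout are illustrative assumptions.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("screening_decisions.jsonl")

def log_decision(candidate_id: str, inputs: dict, ai_score: float,
                 ai_recommendation: str, model_version: str,
                 human_override: str | None = None) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,     # pseudonymise to stay GDPR-friendly
        "inputs": inputs,                 # the features the model actually saw
        "ai_score": ai_score,
        "ai_recommendation": ai_recommendation,
        "model_version": model_version,   # which model produced the output
        "human_override": human_override, # None if the recommendation stood
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: the recruiter overrode an automated rejection
log_decision(
    candidate_id="cand-8431",
    inputs={"years_experience": 4, "language": "ro"},
    ai_score=0.38,
    ai_recommendation="reject",
    model_version="vendor-model-2.4",
    human_override="advance to interview",
)
```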
5. Transparency and information to deployers (Article 13)
The AI system must be designed to be sufficiently transparent. In practice:
- Candidates must be informed that AI is being used in the selection process
- The system must provide explanations for its outputs (why was a candidate ranked lower?)
- Deployers must understand the system's capabilities and limitations
- Instructions for use must cover intended purpose, performance levels, and known risks
Under Romanian labor law (Codul Muncii) and GDPR Article 22, automated decision-making in employment already requires transparency. The AI Act adds a technical layer on top: the system itself must be designed for interpretability, not just disclosed in a privacy notice.
6. Human oversight (Article 14)
This is the most operationally demanding requirement. High-risk AI systems must be designed so that humans can:
- Understand the system's outputs and interpret them correctly
- Override or reverse the AI's decision
- Interrupt the system (the "stop button" requirement)
- Monitor the system during operation
For HR, this means you cannot have a fully automated pipeline where CVs go in and rejection emails go out with no human in the loop. Someone qualified must review AI recommendations before they become decisions.
"Qualified" is the operative word. The person overseeing the AI must understand how it works, what its limitations are, and when to override it. Rubber-stamping AI outputs is not human oversight - it is automation bias, and the AI Act explicitly addresses this in Recital 73.
Bias monitoring: the technical challenge
Bias monitoring in HR AI is not optional. Article 10 requires bias examination of training data, and Article 9 requires ongoing risk monitoring. Together, they create a continuous bias monitoring obligation.
Here is what a practical bias monitoring framework looks like for an HR AI system:
Metrics to track
Disparate impact metrics:
- Selection rate ratio (4/5ths rule baseline)
- Score distribution by demographic group
- False positive/negative rates by group
- Intersectional analysis (e.g., gender x age)
Operational metrics:
- Override rate (how often humans change AI decisions)
- Appeal success rate
- Candidate feedback patterns
- Time-to-decision variance

Testing cadence
- Before deployment: Full bias audit on historical data
- Monthly: Automated disparate impact checks on live decisions
- Quarterly: Human review of edge cases and overrides
- Annually: Full re-audit with updated demographic data
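Here is a minimal sketch of the monthly automated check listed above, computing selection-rate ratios against the 4/5ths rule. It assumes you can lawfully link screening outcomes to voluntarily provided demographic data; the column names are placeholders:

```python
# Sketch: selection-rate ratio (four-fifths rule) from logged screening decisions.
# Column names are placeholders; demographic data must itself be collected lawfully.
import pandas as pd

def selection_rate_ratios(df: pd.DataFrame, group_col: str,
                          outcome_col: str = "advanced") -> pd.Series:
    """Selection rate per group, divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "advanced": [1,   0,   0,   0,   1,   1,   0,   1,   1,   0],
})
ratios = selection_rate_ratios(decisions, "gender")
print(ratios.round(2))
flagged = ratios[ratios < 0.8]          # below the 4/5ths threshold
if not flagged.empty:
    print("Potential disparate impact:", list(flagged.index))
```

A ratio below 0.8 is not automatically a legal violation, but it is a finding you document and investigate - which is exactly what the next section covers.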
What to do when you find bias
Finding bias is not the problem - every AI system has some degree of differential performance across groups. The question is what you do about it:
- Document it: Record the finding, the metric, and the affected group
- Assess severity: Is it above the 4/5ths rule threshold? Does it affect a protected characteristic?
- Root cause analysis: Is it the training data, the feature set, or the model architecture?
- Mitigate: Adjust thresholds, retrain with balanced data, add human review for affected groups
- Verify: Re-test to confirm the mitigation worked
- Report: Include in your conformity documentation
Practical implementation: a 12-month roadmap
With August 2026 as the deadline, here is a realistic timeline for a Romanian employer:
Months 1-3: Inventory and assessment (now - June 2026)
- Map every AI tool used in HR. Include SaaS platforms, internal scripts, and "AI features" buried in your HRIS
- Classify each tool against Annex III point 4. If it influences employment decisions, it is high-risk
- Audit vendor compliance: Request AI Act documentation from every vendor. Note gaps
- Identify quick wins: Some tools can be reconfigured to reduce risk (e.g., using AI for sourcing but not for screening)
Months 4-8: Documentation and controls (June - October 2026)
Wait - the deadline is August 2026. Why does this extend past it?
Because the regulation requires systems "placed on the market or put into service" after August 2, 2026 to comply. Systems already in use get a transition period, but new deployments must comply from day one. Start now so you are not scrambling.
- Build documentation: Technical docs, risk management records, data governance files
- Implement logging: Ensure every AI decision is logged with inputs, outputs, and model version
- Establish human oversight: Define review workflows, train HR staff on AI oversight
- Set up bias monitoring: Automated pipelines for disparate impact tracking
Months 9-12: Testing and refinement (October 2026 - January 2027)
- Run bias audits: Full testing across all protected characteristics
- Stress-test oversight: Can your HR team actually override the AI effectively? Do they understand the outputs?
- Incident response: What happens when the system produces a discriminatory outcome? Practice the response
- Vendor alignment: Ensure vendors are on track with their own compliance
The vendor problem
Most Romanian companies use international SaaS platforms for HR AI. The compliance burden splits between provider (vendor) and deployer (employer), but it splits unevenly.
Vendor responsibility:
- Conformity assessment
- Technical documentation of the AI system
- CE marking
- Quality management system
Your responsibility (deployer):
- Using the system according to instructions
- Human oversight
- Input data quality
- Monitoring and reporting
- Informing employees and candidates
- Data protection impact assessment (GDPR overlay)
The hard truth: many HR SaaS vendors are not ready. Some are US-based companies that view the AI Act as a European problem they will address later. If your vendor cannot provide Article 11 technical documentation by mid-2026, you need a contingency plan.
That contingency might be: switching to a compliant vendor, moving to a non-AI process temporarily, or building an internal compliance wrapper around the existing tool.
Penalties
The AI Act penalty structure is severe:
- Non-compliance with high-risk requirements: Up to 15 million EUR or 3% of worldwide annual turnover, whichever is higher
- Providing incorrect information to authorities: Up to 7.5 million EUR or 1% of worldwide annual turnover, whichever is higher
For Romanian SMEs, the fine is capped at whichever of the two amounts is lower. That is still a significant penalty relative to revenue.
Beyond fines, there is litigation risk. A rejected candidate who discovers they were filtered by a biased AI system has grounds for a discrimination claim under Romanian and EU law. The AI Act's documentation requirements create a paper trail that plaintiffs' lawyers will request.
FAQ
Do small companies need to comply?
Yes. The AI Act applies regardless of company size. There are some reduced documentation requirements for SMEs (Article 62), but the core obligations - risk management, human oversight, bias monitoring - apply to everyone using high-risk AI systems.
What if we just use ChatGPT to screen CVs?
General-purpose AI models (like GPT-4 or Claude) used for HR decisions fall under the high-risk classification. The fact that it is a general tool does not exempt you - what matters is the use case. If you use it to make or influence employment decisions, it is high-risk.
Does the AI Act overlap with GDPR?
Yes, significantly. GDPR Article 22 already regulates automated decision-making. The AI Act adds technical requirements on top. You need to comply with both. The good news: if you have a solid GDPR Article 22 framework for HR, you have a head start on AI Act compliance.
What about AI tools used only for scheduling or administrative tasks?
Administrative AI tools (meeting schedulers, leave management automation) are generally not high-risk unless they influence employment decisions. An AI that schedules shifts is administrative. An AI that allocates shifts based on performance scores is high-risk.
Can we just turn off AI features to avoid compliance?
Yes, that is a valid option. If the compliance cost exceeds the benefit of the AI tool, reverting to manual processes is legitimate. But be honest about whether you are actually turning it off - many "non-AI" HR tools have ML features running in the background.
Who enforces this in Romania?
Romania must designate a national competent authority by August 2, 2025. As of March 2026, the designation is pending. The likely candidates are ANSPDCP (data protection authority) or a new dedicated body. Regardless of who enforces it, the obligations are already defined in the regulation.
What to do right now
- Audit your HR tech stack for AI components. Every SaaS tool, every automation, every "smart" feature
- Request AI Act documentation from your vendors. Their response will tell you a lot about their readiness
- Assess your risk exposure - which tools influence actual employment decisions?
- Start building documentation - risk management, data governance, human oversight procedures
- Train your HR team on AI oversight responsibilities
If you need help assessing which of your HR tools qualify as high-risk and what compliance gaps exist, start with a risk assessment. We specialize in the technical side of AI Act compliance - not just what the law says, but what your systems need to do.
For a broader overview of the EU AI Act framework, see our comprehensive compliance guide. For technical implementation of compliant ML systems, read our guide on MLOps and EU AI Act compliance.