
NIS2 and AI Act: Double Compliance for Romanian Companies

Petru Constantin
14 min read
#nis2 #eu-ai-act #compliance #romania #cybersecurity


Romania transposed the NIS2 Directive through OUG 155/2024, published in December 2024. Between 15,000 and 20,000 Romanian entities now fall under its scope. That number includes energy companies, banks, hospitals, telecom providers, cloud infrastructure operators, food distributors, and dozens of other sectors.

Here is the problem nobody is talking about: many of these same companies also deploy AI systems. Which means they need to comply with Regulation (EU) 2024/1689 - the EU AI Act - at the same time.

Two regulations. Two enforcement timelines. Two sets of penalties. Two different Romanian authorities watching.

Most compliance consultants treat these as separate projects. That is a mistake. The overlap between NIS2 and the AI Act is significant, and companies that build a unified framework will spend less, move faster, and reduce risk better than those running two parallel audit processes.

What NIS2 Actually Requires in Romania

The NIS2 Directive (Directive (EU) 2022/2555) replaced the original NIS Directive and dramatically expanded both scope and penalties. Romania implemented it through Ordonanta de Urgenta 155/2024.

Who Falls Under NIS2?

NIS2 categorizes entities into two groups:

Essential entities (entitati esentiale):

  • Energy (electricity, oil, gas, hydrogen, district heating)
  • Transport (air, rail, water, road)
  • Banking and financial market infrastructure
  • Healthcare (hospitals, labs, pharmaceutical manufacturers)
  • Drinking water and wastewater
  • Digital infrastructure (IXPs, DNS, TLD registries, cloud providers, data centers)
  • ICT service management (B2B)
  • Public administration
  • Space

Important entities (entitati importante):

  • Postal and courier services
  • Waste management
  • Chemical manufacturing and distribution
  • Food production and distribution
  • Manufacturing (medical devices, electronics, machinery, motor vehicles)
  • Digital providers (online marketplaces, search engines, social networks)
  • Research organizations

The size threshold: any medium or large enterprise in these sectors. That means 50+ employees or EUR 10M+ annual turnover. Some entities qualify regardless of size - DNS providers, TLD registries, and public electronic communications providers have no minimum threshold.

Core NIS2 Obligations

Under OUG 155/2024, covered entities must:

  1. Risk management measures - Implement technical, operational, and organizational measures proportionate to the risk
  2. Incident reporting - Report significant incidents to DNSC (Directoratul National de Securitate Cibernetica) within 24 hours (early warning), 72 hours (full notification), and 1 month (final report)
  3. Supply chain security - Assess and manage cybersecurity risks from suppliers and service providers
  4. Business continuity - Backup management, disaster recovery, crisis management
  5. Vulnerability handling - Policies for vulnerability disclosure and management
  6. Cybersecurity hygiene - Training, access control, asset management, multi-factor authentication
  7. Cryptography and encryption - Policies on the use of cryptographic solutions
  8. Human resource security - Background checks, access management, cybersecurity awareness
  9. Management accountability - Senior management must approve and oversee cybersecurity measures. Personal liability applies.
  10. Registration - Entities must register with DNSC and maintain updated contact information
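
The three-stage reporting cadence in point 2 can be sketched as a small deadline calculator. This is an illustrative helper, not an official tool; the 24-hour/72-hour/1-month cadence comes from OUG 155/2024, while the function name and the 30-day approximation of "1 month" are assumptions for the example.

```python
from datetime import datetime, timedelta

def nis2_reporting_deadlines(detected_at: datetime) -> dict:
    """Compute the three NIS2 reporting deadlines to DNSC from the
    moment a significant incident is detected (illustrative sketch)."""
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "full_notification": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),  # "1 month", approximated
    }

# An incident detected on 1 March 2025 at 09:00 must be flagged to
# DNSC by 09:00 the next day:
deadlines = nis2_reporting_deadlines(datetime(2025, 3, 1, 9, 0))
print(deadlines["early_warning"])  # 2025-03-02 09:00:00
```

In practice the clock starts from when the entity becomes aware of the incident, so the detection timestamp should be logged as part of the incident record itself.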

NIS2 Penalties

For essential entities: up to EUR 10,000,000 or 2% of global annual turnover, whichever is higher.

For important entities: up to EUR 7,000,000 or 1.4% of global annual turnover, whichever is higher.

Management can be held personally liable. DNSC can temporarily suspend certifications or authorizations for non-compliant essential entities.

The AI Act Layer

Regulation (EU) 2024/1689 entered into force on 1 August 2024. Its provisions apply in phases:

  • February 2025: Prohibited AI practices ban takes effect
  • August 2025: General-purpose AI model obligations apply
  • August 2026: Full enforcement of high-risk AI system requirements
  • August 2027: Extended deadline for high-risk AI systems that are safety components of products

Who Needs AI Act Compliance?

Any organization that develops, deploys, imports, or distributes AI systems in the EU market. The key question is risk classification:

High-risk AI systems (Annex III) include AI used in:

  • Biometric identification and categorization
  • Critical infrastructure management and operation
  • Education and vocational training (admissions, assessments)
  • Employment (recruitment, task allocation, performance monitoring)
  • Essential private and public services (credit scoring, insurance, emergency dispatch)
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice

High-risk AI systems (Annex I) include AI that is a safety component of products covered by EU harmonization legislation - medical devices, machinery, vehicles, aviation, rail, marine.

AI Act Penalties

The AI Act penalties are the steepest in EU regulatory history:

  • Prohibited AI practices: up to EUR 35,000,000 or 7% of global annual turnover
  • High-risk system violations: up to EUR 15,000,000 or 3% of global annual turnover
  • Providing incorrect information to authorities: up to EUR 7,500,000 or 1% of global annual turnover
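
Both regimes cap fines as "a fixed amount or a percentage of global annual turnover, whichever is higher," so the worst-case exposure is a simple maximum. A minimal sketch, using the caps listed above (the function name and turnover figures are illustrative):

```python
def penalty_cap(fixed_eur: float, pct_turnover: float, global_turnover_eur: float) -> float:
    """Worst-case fine under a 'fixed amount or % of global annual
    turnover, whichever is higher' rule (illustrative only)."""
    return max(fixed_eur, pct_turnover * global_turnover_eur)

# Essential entity under NIS2, EUR 2B global turnover:
print(penalty_cap(10_000_000, 0.02, 2_000_000_000))  # 40000000.0
# Prohibited-practice violation under the AI Act, same turnover:
print(penalty_cap(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```

Note how the percentage branch dominates for large groups: above EUR 500M turnover, the AI Act's 7% cap already exceeds the EUR 35M fixed amount.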

ANCOM (Autoritatea Nationala pentru Administrare si Reglementare in Comunicatii) has been designated as the market surveillance authority for AI Act enforcement in Romania.

The Overlap Nobody Talks About

Here is where it gets interesting for Romanian companies. Consider a hospital that uses AI for patient triage. Under NIS2, it is an essential entity in the healthcare sector. Under the AI Act, its triage system is high-risk (essential private services - healthcare). Same company, same system, two regulatory frameworks.

Or take an energy company using AI for grid load prediction. NIS2 essential entity. AI Act high-risk system (critical infrastructure management). Double exposure.

The overlaps are structural, not accidental:

1. Risk Management

NIS2 requires risk analysis and information system security policies.

AI Act requires a risk management system for high-risk AI covering identification, estimation, evaluation, and treatment of risks.

The overlap: Both demand systematic risk assessment. Both require documentation. Both require regular review and updates. A company running two separate risk management processes for the same system is doing duplicate work.

2. Incident Reporting

NIS2 requires reporting significant cybersecurity incidents to DNSC within 24 hours.

AI Act requires providers of high-risk AI systems to report serious incidents to the market surveillance authority (ANCOM).

The overlap: An AI system security breach triggers obligations under both regulations simultaneously. Without a unified incident response plan, you risk missing deadlines or providing inconsistent information to two different authorities.

3. Supply Chain and Third-Party Management

NIS2 mandates supply chain security assessments and measures.

AI Act requires providers to implement quality management systems covering supply chain management and requires deployers to monitor AI systems obtained from third parties.

The overlap: Your AI vendor assessment should cover both cybersecurity posture (NIS2) and AI-specific risks (transparency, bias, robustness). One vendor assessment framework, two regulatory boxes checked.

4. Documentation and Record-Keeping

NIS2 requires documented security policies, procedures, and evidence of compliance.

AI Act requires technical documentation, conformity assessments, quality management systems, and logging/traceability.

The overlap: Document management, version control, audit trails. Building one documentation framework that satisfies both is far more efficient than maintaining parallel systems.

5. Human Oversight and Training

NIS2 requires cybersecurity awareness training for all staff and specific competency for security teams.

AI Act requires that high-risk AI systems allow effective human oversight and that deployers ensure personnel operating AI have sufficient AI literacy.

The overlap: Training programs can and should be integrated. Your cybersecurity team needs AI risk awareness. Your AI team needs cybersecurity awareness. Separate training programs create gaps.

6. Management Accountability

NIS2 Article 20: management bodies must approve cybersecurity risk management measures and undergo training.

AI Act Article 26: deployers must assign human oversight to individuals with competence, training, and authority.

The overlap: Board-level accountability for both. One governance structure should cover both regulatory domains.

The Unified Compliance Framework

Instead of running two separate compliance projects, Romanian companies should build a single framework with shared foundations and regulation-specific extensions.

Layer 1: Shared Foundation

These controls serve both NIS2 and AI Act compliance:

Governance:
  - Integrated risk management policy covering both cyber and AI risks
  - Single management accountability structure (board-level)
  - Unified compliance officer role or cross-functional team
  - Combined audit and review schedule
 
Risk Management:
  - Unified risk register (cyber threats + AI-specific risks)
  - Common risk assessment methodology
  - Shared risk treatment plans where systems overlap
  - Integrated third-party risk assessment
 
Documentation:
  - Single document management system
  - Shared audit trail infrastructure
  - Integrated change management process
  - Common evidence repository for both regulators
 
Training:
  - Combined security + AI literacy program
  - Role-specific modules (security staff get AI awareness, AI staff get cyber awareness)
  - Management training covering both frameworks
  - Regular drills and exercises
 
Incident Response:
  - Unified incident detection and classification
  - Dual-notification workflow (DNSC for cyber, ANCOM for AI)
  - Common post-incident analysis process
  - Shared lessons-learned repository
 
Supply Chain:
  - Single vendor assessment questionnaire covering both domains
  - Integrated contract requirements (security + AI transparency)
  - Common monitoring and review process
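
The unified risk register at the heart of this shared foundation can be modeled as one record per risk, tagged with the framework(s) it maps to. This is a data-structure sketch under assumed field names, not a regulatory schema:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One entry in a unified risk register; `frameworks` tags the
    regulation(s) the risk maps to (illustrative field names)."""
    risk_id: str
    description: str
    frameworks: set = field(default_factory=set)  # e.g. {"NIS2", "AI_ACT"}
    severity: str = "medium"

register = [
    RiskEntry("R-001", "Ransomware on hospital network", {"NIS2"}, "high"),
    RiskEntry("R-002", "Bias drift in triage model", {"AI_ACT"}, "high"),
    RiskEntry("R-003", "Compromise of AI model supply chain", {"NIS2", "AI_ACT"}, "high"),
]

# Risks with dual regulatory exposure fall out of a simple filter:
dual_scope = [r.risk_id for r in register if {"NIS2", "AI_ACT"} <= r.frameworks]
print(dual_scope)  # ['R-003']
```

The same tagging approach extends to controls and evidence items, which is what lets one register serve two audits.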

Layer 2: NIS2-Specific Controls

On top of the shared foundation, add NIS2-specific requirements:

  • Network and information system security measures (firewalls, IDS/IPS, SIEM)
  • 24/72-hour/1-month incident reporting cadence to DNSC
  • Business continuity and disaster recovery specific to IT infrastructure
  • Vulnerability management and patching policies
  • Cryptographic controls
  • Physical and environmental security
  • Registration and reporting obligations to DNSC

Layer 3: AI Act-Specific Controls

And AI Act-specific requirements:

  • AI system risk classification and inventory
  • High-risk AI conformity assessment
  • Technical documentation per Annex IV
  • AI system logging and monitoring requirements
  • Transparency obligations (informing users they interact with AI)
  • Data governance for training, validation, and testing datasets
  • Post-market monitoring plans
  • Registration in the EU database for high-risk AI systems
  • Fundamental rights impact assessment (for deployers of certain high-risk systems)

Implementation Roadmap

Phase 1: Assessment (Months 1-2)

  • Inventory all systems under NIS2 scope
  • Inventory all AI systems and classify by risk tier
  • Identify systems that fall under both regulations
  • Gap analysis against both regulatory requirements
  • Map existing controls to both frameworks
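
The core of Phase 1 is set intersection: cross the NIS2 system inventory with the high-risk AI inventory to surface systems with dual exposure. A toy sketch with made-up system names:

```python
# Illustrative inventories from the two assessments above:
nis2_scope = {"grid-scada", "patient-records", "triage-ai", "billing"}
high_risk_ai = {"triage-ai", "loan-scoring-ai"}

# Systems under both regulations drive the unified framework;
# AI-Act-only systems still need Layer 3 controls.
dual_exposure = nis2_scope & high_risk_ai
ai_act_only = high_risk_ai - nis2_scope

print(sorted(dual_exposure))  # ['triage-ai']
print(sorted(ai_act_only))    # ['loan-scoring-ai']
```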

Phase 2: Foundation (Months 2-4)

  • Establish unified governance structure
  • Build integrated risk management framework
  • Deploy shared documentation and evidence management
  • Design unified incident response procedures
  • Develop combined training curriculum

Phase 3: NIS2 Controls (Months 3-5)

  • Implement technical cybersecurity measures
  • Configure incident detection and DNSC reporting workflows
  • Deploy business continuity and disaster recovery
  • Complete supply chain security assessments
  • Register with DNSC

Phase 4: AI Act Controls (Months 4-7)

  • Complete AI system conformity assessments
  • Build technical documentation per Annex IV
  • Implement AI monitoring and logging
  • Deploy transparency mechanisms
  • Register high-risk systems in EU database
  • Establish post-market monitoring

Phase 5: Integration and Testing (Months 6-8)

  • Run integrated tabletop exercises (combined cyber + AI incident)
  • Internal audit of unified framework
  • Management review
  • Remediate gaps
  • Prepare for external assessment

This timeline assumes a medium-sized enterprise with moderate complexity. Larger organizations with many AI systems and extensive NIS2 scope should plan for 10-14 months.

Romanian Regulatory Landscape

Understanding which authority handles what is critical:

| Aspect | NIS2 | AI Act |
|--------|------|--------|
| Romanian authority | DNSC | ANCOM |
| EU basis | Directive 2022/2555 | Regulation 2024/1689 |
| Romanian transposition | OUG 155/2024 | Direct effect (EU Regulation) |
| Incident reporting to | DNSC | ANCOM |
| Registration with | DNSC | EU Database (ANCOM oversight) |
| Penalty cap | EUR 10M / 2% turnover | EUR 35M / 7% turnover |
| Management liability | Yes | Yes (for deployers) |

DNSC operates under the Romanian Government and serves as the national CSIRT (Computer Security Incident Response Team). They handle the day-to-day of NIS2 enforcement, incident coordination, and vulnerability management.

ANCOM, traditionally the telecom regulator, has taken on AI Act market surveillance. This is a new role for them, and their enforcement apparatus is still developing. Early compliance demonstrates good faith and positions you favorably when enforcement begins in earnest.

One practical consideration: when an AI system security incident occurs, you may need to notify both DNSC (cybersecurity incident under NIS2) and ANCOM (serious AI incident under AI Act). Your incident response plan must account for parallel notifications with potentially different timelines and information requirements.
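
The parallel-notification logic above reduces to a small routing decision in an incident response runbook. A hedged sketch - all function and deadline strings are illustrative, not official wording:

```python
def notification_targets(is_cyber_incident: bool, involves_high_risk_ai: bool) -> list:
    """Decide which Romanian authorities an incident must be reported
    to, with each regime's first deadline (illustrative sketch)."""
    targets = []
    if is_cyber_incident:
        targets.append(("DNSC", "early warning within 24h"))
    if involves_high_risk_ai:
        targets.append(("ANCOM", "serious-incident report per AI Act"))
    return targets

# A security breach of a high-risk AI system triggers both:
print(notification_targets(True, True))
```

Encoding this decision in the runbook, rather than leaving it to judgment mid-incident, is what prevents the missed-deadline and inconsistent-information failures described earlier.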

Common Mistakes Romanian Companies Make

Treating compliance as an IT project. Both NIS2 and AI Act require management accountability. This is a business project with IT components, not the other way around.

Waiting for enforcement to start. NIS2 obligations are already active. AI Act prohibited practices are already banned. High-risk requirements hit in August 2026. Starting now is not early - it is on time.

Hiring two separate consultants. One for NIS2, one for AI Act. They build two separate frameworks with duplicated controls, inconsistent documentation, and no integration. Twice the cost for a worse result.

Ignoring supply chain obligations. Both regulations require you to manage third-party risk. If your AI vendor gets breached, you have exposure under both NIS2 (supply chain security failure) and AI Act (deployer monitoring obligation). One vendor assessment should cover both.

Copy-pasting policies from templates. Generic compliance templates from Western European consultancies do not account for Romanian regulatory specifics - OUG 155/2024 has national implementation choices that differ from other member states.

Who Needs to Act Now?

If your organization meets ALL of these criteria, you have double exposure:

  1. You are in a NIS2 sector (essential or important entity)
  2. You are medium-sized or larger (50+ employees or EUR 10M+ turnover)
  3. You develop, deploy, or use AI systems in your operations
  4. Those AI systems fall into a high-risk category under the AI Act
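
The four criteria above collapse into one boolean check. This is a toy self-screening sketch, not legal advice; the parameter names and thresholds-as-code are assumptions for illustration:

```python
def has_double_exposure(in_nis2_sector: bool, employees: int,
                        turnover_eur: float, uses_ai: bool,
                        ai_is_high_risk: bool) -> bool:
    """All four criteria must hold for dual NIS2 + AI Act exposure
    (illustrative screening check, not legal advice)."""
    medium_or_larger = employees >= 50 or turnover_eur >= 10_000_000
    return in_nis2_sector and medium_or_larger and uses_ai and ai_is_high_risk

# A 120-employee hospital deploying a high-risk triage model:
print(has_double_exposure(True, 120, 8_000_000, True, True))  # True
```

Remember the size carve-outs noted earlier: DNS providers, TLD registries, and public electronic communications providers fall under NIS2 regardless of size, so a real check needs a sector exception list.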

Conservative estimate: at least 2,000-3,000 Romanian entities have significant overlap between both regulations. These are primarily in healthcare, energy, financial services, transport, and digital infrastructure.

Even if your AI systems are not high-risk, you still have AI Act transparency obligations (for limited-risk systems like chatbots) on top of your full NIS2 compliance requirements. The unified framework approach still saves effort.

Frequently Asked Questions

Does NIS2 apply to companies that only use AI, not develop it?

Yes. NIS2 applies based on your sector and size, regardless of whether you develop or just deploy AI. If you are a hospital using an AI diagnostic tool, you are covered by NIS2 as a healthcare entity and by the AI Act as an AI deployer. Both apply fully.

Can we use ISO 27001 as a foundation for both NIS2 and AI Act compliance?

ISO 27001 covers a significant portion of NIS2 technical requirements - roughly 60-70% depending on your scope. For the AI Act, ISO 27001 helps with the information security aspects but does not cover AI-specific requirements like conformity assessment, transparency, data governance for training sets, or fundamental rights impact assessment. Use ISO 27001 as a base layer, then extend for both regulations.

What happens if we miss the NIS2 registration deadline with DNSC?

Under OUG 155/2024, entities must register with DNSC within the specified timeframe. Failure to register does not exempt you from the substantive requirements - it just means you are non-compliant from day one. DNSC can issue warnings, binding instructions, and administrative fines. For essential entities, they can also temporarily suspend authorizations.

Are the NIS2 and AI Act penalties cumulative?

Yes. A single incident involving an AI system security breach could trigger penalties under both NIS2 (for the cybersecurity failure) and the AI Act (for the AI system safety failure). In theory, a company could face up to EUR 10M under NIS2 and EUR 35M under the AI Act for the same underlying event, though proportionality principles would likely moderate the actual amounts. This is exactly why an integrated compliance approach matters - it reduces the probability of a dual-penalty scenario.

Next Steps

If your company falls under both NIS2 and the AI Act, the worst thing you can do is wait. The second worst thing is to run two separate compliance projects.

Start with an assessment. Understand which of your systems carry dual regulatory exposure. Map your existing controls against both frameworks. Identify the gaps. Build a unified plan.

We built our compliance services around this exact problem - helping Romanian and EU companies navigate overlapping regulations without duplicating effort. Our EU AI Act Compliance Guide covers the AI Act requirements in detail.

Want to know where you stand? Take our free EU AI Act Risk Assessment - it takes 10 minutes and gives you a clear picture of your AI systems' risk classification and compliance gaps. From there, we can help you build a unified NIS2 + AI Act framework that actually works.
