
The EU Went After X and Meta for GPAI Compliance. Your Integration Is Next.

Petru Constantin
#eu-ai-act #ai-security #gpai #compliance #devidevs

The Enforcement Machine Just Started

On January 8, 2026, the European Commission issued formal document retention orders to X for its Grok chatbot. By January 26, it escalated to full proceedings against X and xAI over deepfake proliferation. Meta is under investigation for systemic GPAI risks. These are the first real enforcement actions under the EU AI Act's General-Purpose AI provisions, and they landed months before the August 2, 2026 deadline when fines of up to EUR 15 million or 3% of global turnover become enforceable.

If you're a CTO running GPT-4 in production, or a startup using Claude for customer support, or an enterprise with Llama fine-tuned for internal workflows, this matters to you directly. Not because the EU is coming for your SaaS app tomorrow. But because the GPAI obligations that apply to your providers also create downstream obligations that land squarely on you.

What GPAI Obligations Actually Mean for Deployers

The AI Act splits responsibilities across the value chain. Article 53 hits GPAI providers: they must maintain technical documentation, publish training data summaries, and respect EU copyright law. Article 55 adds systemic risk requirements for large models: adversarial testing, incident reporting, cybersecurity protections.

These obligations have been in force since August 2, 2025. Your AI provider is supposed to be complying already.

Here is where it gets real for deployers. Under the AI Act, GPAI providers must give you, the downstream deployer, sufficient information to understand the model's capabilities, limitations, and risks so that YOU can comply with YOUR own obligations. If you build a high-risk AI system on top of GPT or Claude, Article 26 gives you a separate list of requirements: human oversight, input data quality monitoring, log retention for at least six months, incident reporting, and informing users that they are interacting with AI.

The provider gives you documentation. You are responsible for everything that happens after you integrate their model into your product.

The Three Providers Your Compliance Depends On

The GPAI Code of Practice was finalized in July 2025. OpenAI, Anthropic, and Google are full signatories. That is good news if you use their models. Signing the code is a strong compliance signal and means they should be producing the documentation you need.

Meta refused to sign. Not a partial refusal, either. Meta explicitly rejected the Code of Practice and has restricted Llama 4 access for EU-based companies. xAI signed only the Safety and Security chapter, meaning transparency and copyright compliance must be demonstrated through "alternative adequate means."

What does this mean for you?

  • Using GPT-4 or Claude? Your provider signed the code. You should be receiving technical documentation. If you are not, request it. You need it for your own compliance file.
  • Using Llama (self-hosted)? You are likely acting as both provider and deployer. Meta's refusal to sign the code does not exempt your deployment from the regulation. You carry the full compliance burden.
  • Using Grok (via X/xAI)? Your provider is under active enforcement proceedings. The documentation quality you receive is uncertain at best.
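The provider matrix above is worth capturing in a machine-readable inventory rather than a wiki page. Here is a minimal sketch in Python; the `GpaiDependency` record and `compliance_posture` triage function are illustrative names, not part of any standard tooling, and the triage buckets are a simplification of the actual legal analysis:

```python
from dataclasses import dataclass

@dataclass
class GpaiDependency:
    model: str            # model identifier as used in your stack
    provider: str         # upstream GPAI provider
    code_signatory: str   # "full", "partial", or "none"
    self_hosted: bool     # self-hosting can make you a provider too

def compliance_posture(dep: GpaiDependency) -> str:
    """Rough triage of how much compliance work lands on you."""
    if dep.self_hosted:
        return "provider+deployer: full compliance burden"
    if dep.code_signatory == "full":
        return "deployer: request and file provider documentation"
    return "deployer: demand evidence of 'alternative adequate means'"

deps = [
    GpaiDependency("gpt-4", "OpenAI", "full", False),
    GpaiDependency("llama-4", "Meta", "none", True),
    GpaiDependency("grok", "xAI", "partial", False),
]
for d in deps:
    print(f"{d.model}: {compliance_posture(d)}")
```

Even a sketch like this forces the question step 1 of the checklist below asks: who is the provider, and did they sign the code?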

What You Need to Do Before August 2, 2026

Stop treating GPAI compliance as "someone else's problem." Your provider handles Article 53. You handle everything downstream. Here is a concrete checklist:

1. Document your GPAI supply chain. Which models do you use? Through which APIs? Who is the provider, and are they an AI Act signatory? If you fine-tuned or modified a model, you may have become a provider yourself under the Act's "substantial modification" rules.

2. Request provider documentation. Article 53 requires GPAI providers to supply downstream deployers with technical docs. If your provider has not sent you anything about capabilities, limitations, intended uses, and risk factors, ask for it. Put the request in writing.

3. Check if your use case is high-risk. If you deploy GPT or Claude in HR screening, credit scoring, education assessment, law enforcement support, or any Annex III category, you trigger Article 26 deployer obligations: human oversight, monitoring, log retention, fundamental rights impact assessment.

4. Set up logging and monitoring. Six months of log retention is the legal minimum for high-risk deployments. If you are calling an API and not storing the inputs, outputs, and metadata, start now.

5. Prepare incident reporting processes. If your AI system causes harm, you must report it to the provider AND the relevant national authority. Do you have a process for this? Most companies don't.
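Steps 4 and 5 can start as a thin logging layer around your model calls. The sketch below is a minimal illustration, not a compliant system: the file layout, the `log_interaction` and `purge_expired` names, and the metadata fields are all assumptions you would adapt, and production logs need durable, access-controlled storage rather than local JSONL files:

```python
import json
import time
import uuid
from pathlib import Path

LOG_DIR = Path("gpai_logs")   # in production: durable, access-controlled storage
RETENTION_DAYS = 183          # at least six months (Article 26 minimum)

def log_interaction(model: str, prompt: str, response: str, meta: dict) -> str:
    """Append one structured record per model call; returns the record id."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": model,
        "input": prompt,
        "output": response,
        "meta": meta,   # e.g. user id, use-case tag, model version
    }
    day_file = LOG_DIR / f"{time.strftime('%Y-%m-%d')}.jsonl"
    with day_file.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def purge_expired() -> None:
    """Delete log files older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for f in LOG_DIR.glob("*.jsonl"):
        if f.stat().st_mtime < cutoff:
            f.unlink()
```

The record id matters for step 5: when you report an incident to the provider and the national authority, you want to point at the exact logged interaction, not reconstruct it from memory.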

How DeviDevs Approaches This

We have been auditing GPAI deployments since before enforcement started. The pattern is always the same: the engineering team integrated the model, shipped the product, and nobody documented the compliance chain. When we ask "what documentation did your GPAI provider give you?", the answer is usually a blank stare.

The fix is not complicated. It is a structured assessment: map your GPAI dependencies, classify your use cases against Annex III, collect provider documentation, and build the compliance file before the regulator asks for it. If you have been putting this off, less than five months is tight but doable.

Where This Is Headed

The X and Meta enforcement actions are test cases. The AI Office is establishing precedent before the August 2, 2026 penalty deadline. Once fines become enforceable, enforcement will move downstream, from providers to deployers who built products on those models without documentation.

The companies that will have the worst time are the ones using open-source GPAI models (especially Llama) without realizing they absorbed the provider's compliance obligations when they self-hosted. As of March 2026, only 8 of 27 member states had designated AI Act contact points; the rest are still setting theirs up. When enforcement scales, "we thought our provider handled compliance" will not be a valid defense.

Your provider's GPAI compliance is their problem. Your deployment is yours.


About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com
