
Your AI Content Needs a Label by August. Here Is What Article 50 Actually Requires.

Petru Constantin
7 min read
#eu-ai-act #devidevs


The Deadline Nobody Is Talking About

Everyone is arguing about whether the EU AI Act high-risk deadline got pushed back. Council voted March 13. Parliament committees voted March 18. The Digital Omnibus is making headlines.

But here is what the headlines miss: Article 50 transparency obligations are NOT part of the Omnibus delay. They apply from August 2, 2026. Full stop. No proposed extension. No trilogue to wait for.

If your company uses ChatGPT to draft blog posts, Midjourney for marketing visuals, Synthesia for training videos, or DALL-E for social media graphics, you have until August 2026 to figure out how to label that content. And right now, according to a March 2026 study analyzing 50 generative AI systems, only 38% implement any form of machine-readable marking. Only 18% have visible disclosures.

Most companies are not ready. Most companies do not even know this requirement exists.

What Article 50 Actually Says

Article 50 of the EU AI Act creates two sets of obligations: one for providers (the companies building AI tools), and one for deployers (the companies using them).

If You Build AI Systems That Generate Content

Providers of AI systems that generate synthetic audio, images, video, or text must ensure outputs are:

  1. Marked in a machine-readable format so downstream tools can detect the content was AI-generated
  2. Detectable as artificially generated or manipulated, using technical solutions that are effective, interoperable, robust, and reliable

The European Commission's draft Code of Practice (first draft December 2025, second draft mid-March 2026, final expected June 2026) spells out a "defense-in-depth" approach. Providers cannot rely on a single marking technique. The Code prescribes three layers:

  • Metadata embedding using standards like C2PA (Content Credentials)
  • Imperceptible watermarking that survives compression, cropping, and screenshots
  • Fingerprinting and logging as a fallback when other methods fail

For text specifically, the Code acknowledges that watermarking degrades quality. The alternative: "Provenance Certificates," which are digitally signed manifests linking text output back to the generating system.
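What such a Provenance Certificate could look like in practice is not yet fixed, but the core idea is simple: hash the output, attach the generating system's identity, and sign the result. Here is a minimal sketch, assuming a JSON manifest format; the field names are invented for illustration, and HMAC-SHA256 stands in for the asymmetric signature a real provider would use.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Illustrative only: the Code's certificate format is not final. A real
# implementation would use an asymmetric signature (e.g. Ed25519) tied to
# the provider's published key; HMAC stands in here for simplicity.
SIGNING_KEY = b"provider-secret-key"  # hypothetical key material

def issue_provenance_certificate(text: str, model_id: str) -> dict:
    """Build a signed manifest linking a text output to the generating system."""
    manifest = {
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "generating_system": model_id,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_certificate(text: str, manifest: dict) -> bool:
    """Check that the text matches the manifest and the signature is intact."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"]
        == hashlib.sha256(text.encode("utf-8")).hexdigest()
    )
```

The useful property is that any edit to the text breaks the hash, so a certificate can only ever vouch for the exact output the model produced.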

If You Deploy AI Systems for Content Creation

Deployers, meaning the companies actually using these AI tools, have their own obligations:

Deepfakes: If you generate or manipulate image, audio, or video content that constitutes a deepfake, you must disclose it. For real-time video, that means a continuous non-intrusive icon. For static content, a permanently visible label.

AI-generated text on public interest topics: If you publish AI-generated text about matters of public interest, you must disclose it was AI-generated. This covers AI-written news articles, policy analysis, and public-facing reports.

The Code of Practice proposes a common "AI" icon (with localized variants, like "KI" in German-speaking countries) and a two-tier classification system: "fully AI-generated" versus "AI-assisted" content that altered meaning or accuracy.

One critical detail that catches companies off guard: "human review" does not automatically exempt you from disclosure. The Code requires documented human review workflows with identified responsible persons. Simply asserting "a human reviewed this" is not enough.

What Is Exempt

Artistic, creative, satirical, or fictional content gets lighter treatment, with "non-intrusive" disclosures instead of full labeling. But even that exemption does not apply if the content affects third-party rights like personality rights or privacy.

Why This Matters for Your Marketing Team

Here is the practical scenario most companies have not thought through:

Your marketing team uses Midjourney to create campaign visuals. A designer uses ChatGPT to draft ad copy. Someone generates a product demo video with Synthesia. The social media manager posts it all across LinkedIn, Instagram, and your company blog.

Under Article 50, every one of those outputs needs machine-readable marking, and the deepfake-adjacent video content needs visible disclosure. Your company is the deployer. It is your problem, not Midjourney's or OpenAI's.

The provider (Midjourney, OpenAI) is responsible for building the marking capability into their tools. But the deployer (your company) is responsible for making sure that marking survives your content pipeline and that visible disclosures appear where required.

And here is where it gets messy: the March 2026 research paper found that even when major providers like Stability AI implement watermarking in their own platforms, those protections do not automatically extend to API users. If your team uses APIs to generate content, the watermarking might not carry over.

Of the 40 provider systems examined in that study, only 3 were EU-based. Enforcement against a provider in San Francisco is one thing. Enforcement against you, the EU-based deployer, is another.

What To Do Before August

Step 1: Audit Your AI Content Pipeline

List every tool in your organization that generates or manipulates content using AI. Not just the obvious ones (ChatGPT, Midjourney). Include:

  • AI-powered email tools that generate subject lines or body text
  • Design tools with AI fill, extend, or generate features
  • Video editing software with AI voice synthesis or face manipulation
  • CRM tools that auto-generate personalized content
  • Internal tools built on top of OpenAI, Anthropic, or open-source model APIs
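The audit output can be as simple as a structured inventory you keep in version control. A minimal sketch, with hypothetical tool names and fields, might look like this:

```python
# Hypothetical Step 1 inventory: every tool that can generate or manipulate
# content, whether or not "AI" appears in its name. Entries are examples.
TOOL_INVENTORY = [
    {"tool": "ChatGPT",         "output": "text",  "via_api": False, "generates_ai_content": True},
    {"tool": "Midjourney",      "output": "image", "via_api": False, "generates_ai_content": True},
    {"tool": "Synthesia",       "output": "video", "via_api": False, "generates_ai_content": True},
    {"tool": "Email assistant", "output": "text",  "via_api": True,  "generates_ai_content": True},
    {"tool": "Photo editor",    "output": "image", "via_api": False, "generates_ai_content": False},
]

def tools_in_scope(inventory: list[dict]) -> list[str]:
    """Return the tools whose outputs fall under Article 50 obligations."""
    return [t["tool"] for t in inventory if t["generates_ai_content"]]
```

The `via_api` flag matters for Step 2 below: API-generated content is exactly where provider-side marking tends to go missing.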

Step 2: Check Your Providers' Marking Capabilities

For each tool, verify:

  • Does the provider embed C2PA Content Credentials or equivalent metadata?
  • Does the marking survive your content workflow (download, edit, re-export, upload)?
  • If you use the API directly, does the API output include marking?

If the answer to any of these is "no" or "I don't know," you have a gap.
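A quick way to smoke-test the second question is to run an asset through your real pipeline (download, edit, re-export, upload) and check whether the C2PA manifest plausibly survived each stage. The sketch below just scans raw bytes for the C2PA label; it is not real verification, which requires the official C2PA tooling to validate manifests and signatures.

```python
from pathlib import Path

def probably_has_c2pa_marker(path: str) -> bool:
    """Naive smoke test: scan a file's raw bytes for the C2PA label.

    This cannot validate a manifest or its signature; it only tells you
    whether Content Credentials data plausibly survived an edit/re-export
    step. Treat a negative result as a definite gap, a positive one as
    "verify properly with the C2PA toolchain".
    """
    return b"c2pa" in Path(path).read_bytes()

# Typical usage: test each stage of your content workflow, e.g.
#   for stage in ["original.jpg", "edited.jpg", "reexported.jpg"]:
#       print(stage, probably_has_c2pa_marker(stage))
```

Many common steps (screenshotting, some CMS image optimizers, social platform re-encoding) strip metadata entirely, so expect gaps.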

Step 3: Build Disclosure Workflows

For deployer obligations, you need documented processes:

  • Which content types require visible "AI" disclosure labels?
  • Who reviews AI-generated content before publication, and how is that review documented?
  • Where do you store provenance records for AI-generated content?
  • How do you handle mixed content (human-written text with AI-generated images)?
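One lightweight way to answer the record-keeping questions above is to write one provenance record per published asset. This is a sketch with illustrative field names, not a prescribed format; the point is that hashing the content lets you later prove which exact asset a documented review and disclosure decision applied to.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, tool: str, reviewer: str, disclosure: str) -> str:
    """Build one JSON provenance record for a published asset.

    Field names are illustrative. The content hash ties the record to the
    exact bytes that were published; the reviewer field satisfies the
    Code's "identified responsible person" expectation for human review.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generating_tool": tool,
        "reviewed_by": reviewer,           # named responsible person
        "disclosure_applied": disclosure,  # e.g. "visible AI label", "none (exempt)"
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

Where you store these (a database, a log pipeline, even a spreadsheet) matters less than having them at all when a regulator asks.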

Step 4: Train Your Teams

Your marketing, comms, and content teams need to understand when disclosure is required. "I used ChatGPT to clean up my writing" is different from "ChatGPT wrote this blog post from a prompt." The two-tier classification (fully AI-generated vs. AI-assisted) determines what level of disclosure applies.
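For training purposes, the tier logic can be boiled down to two questions. This is an illustrative mapping onto the draft Code's proposed tiers, not legal advice; the boundary between them is not final.

```python
def disclosure_tier(fully_ai_generated: bool, ai_altered_meaning: bool) -> str:
    """Illustrative mapping onto the draft Code's two-tier classification.

    fully_ai_generated: the content was produced by AI from a prompt.
    ai_altered_meaning: AI assistance changed the meaning or accuracy of
    otherwise human-authored content (cosmetic cleanup does not count).
    """
    if fully_ai_generated:
        return "fully AI-generated"
    if ai_altered_meaning:
        return "AI-assisted"
    return "no disclosure tier applies"
```

So "ChatGPT wrote this post" lands in the first tier, "ChatGPT restructured my argument" in the second, and "ChatGPT fixed my typos" in neither.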

The Penalty Math

Article 99 of the EU AI Act sets fines for transparency violations at up to 15 million EUR or 3% of global annual turnover, whichever is higher. For SMBs, there are proportionate lower caps, but the fines are still significant enough to matter.

More practically, enforcement actions by the European AI Office have already started. France is investigating X/Twitter over Grok deepfake generation without Article 50 labeling. The EU issued document retention orders to X for Grok in January 2026. If they are going after the biggest names first, the signal is clear: they intend to enforce.

How DeviDevs Approaches This

We run AI Content Transparency Audits for companies that use generative AI in their content pipelines. The process takes about a week: we map your AI tools, check provider marking capabilities, identify gaps in your disclosure workflows, and build a compliance playbook your marketing and legal teams can actually follow.

The companies that start now spend less per system because they are not scrambling in July. The ones that wait will be competing for compliance consultants the same month the Code of Practice finalizes.

If you are dealing with this, we have been there. We work with the regulation text and the Code of Practice drafts directly, no need to wait for harmonized standards that are running late anyway.

What This Means For Your Content Pipeline

As far as Article 50 is concerned, the Digital Omnibus debate is a distraction. Transparency obligations for AI-generated content are not delayed. August 2, 2026 is the date. The Code of Practice will finalize in June, giving you roughly two months to implement whatever the final version requires.

The companies that audit their AI content pipelines now will have a smooth transition. The ones reading this in July will wish they had started in March.

Which category will you be in?


About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com
