Article 50 of the EU AI Act: What enterprises need to change before August 2, 2026

Bria AI

In just under three months, the transparency provisions of the EU AI Act take effect. On August 2, 2026, Article 50 begins to apply, and with it the legal expectation that AI-generated content carries machine-readable provenance, that synthetic media is disclosed to the people who view it, and that organizations can prove, on demand, where their AI-generated assets came from.

For most legal, compliance, and marketing teams, the obligations themselves are not new. The visibility is. Until now, AI provenance has been a procurement preference, a Trust & Safety conversation, an internal slide. After August 2, it is a regulatory floor. Non-compliance carries administrative fines, is assessed per deployment, and applies whether your organization built the model or simply uses it.

This is a brief on what the regulation actually requires, how it changes enterprise vendor evaluation, and the operational adjustments your teams should be making in the time that remains.

What Article 50 obligates, in plain language

Article 50 of the AI Act creates three intertwined transparency obligations across the lifecycle of an AI system. They apply to two distinct roles: providers (organizations placing AI systems on the EU market, including via API) and deployers (organizations using AI systems in the course of business).

The obligations, in summary:

  • Providers of generative AI systems must ensure that AI-generated or AI-manipulated outputs are marked in a machine-readable format and detectable as artificial. The marking must be technically robust, interoperable, and survive routine processing.
  • Deployers of AI systems that produce deepfakes, meaning synthetic image, audio, or video content that resembles real people, objects, places, or events and could falsely appear authentic, must clearly disclose that the content is artificially generated or manipulated. A narrow carve-out exists for genuinely artistic, satirical, or fictional works, but the existence of synthetic content must still be acknowledged in an appropriate manner.
  • Deployers of AI systems that generate or manipulate text published to inform the public on matters of public interest must disclose that the text is AI-generated, unless the content has been subject to human editorial review and a person or organization holds editorial responsibility.

The European Commission published the second draft of the Code of Practice on Marking and Labelling of AI-Generated Content on 3 May 2026. A final code is expected ahead of August. The Code is voluntary, but adherence is presumed to demonstrate compliance, and the technical baseline it sets is now the working specification enterprise legal teams are designing against (Bird & Bird analysis; Herbert Smith Freehills Kramer analysis).

The penalty structure makes this a board-level conversation

Non-compliance with Article 50 falls under Article 99 of the AI Act and carries administrative fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher.

That places transparency violations between the highest tier (€35M / 7% for prohibited-AI infringements) and the lowest tier (€7.5M / 1% for procedural infractions). For a company with a billion euros in global turnover, the upper bound is €30M per violation. National regulators determine application case by case, with explicit instructions to weigh intent, repetition, and remediation.
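
To make that arithmetic concrete, here is the Article 99 ceiling as a few lines of Python. The €15 million figure and the 3% rate come from the Act; the function itself is purely illustrative:

```python
def article_99_transparency_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for an Article 50 transparency violation under Article 99:
    EUR 15 million or 3% of total worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# A company with EUR 1B in global turnover faces a cap of EUR 30M per violation.
print(f"{article_99_transparency_fine_cap(1_000_000_000):,.0f}")  # 30,000,000
```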

The implication: transparency is no longer procedural housekeeping. It is a category of regulatory exposure that boards and audit committees will track.

What “machine-readable marking” actually requires

The Code of Practice acknowledges what most technical teams already suspect: a single watermark is not enough. The draft specifies a multi-layered approach to marking, combining:

  • Machine-readable provenance metadata embedded using open standards. The Coalition for Content Provenance and Authenticity (C2PA) is the most technically mature pathway and aligns directly with the Code’s specification (a quick presence check is sketched after this list).
  • An invisible, pixel-level watermark that survives common downstream processing (resizing, cropping, re-encoding, format conversion) and remains detectable by appropriate inspection tools.
  • A logging or fingerprinting mechanism that allows a generated asset to be traced back to its AI origin even when metadata or watermark is stripped.
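
For teams that want a first-pass signal on the metadata layer before investing in full validation tooling, the sketch below (our own crude heuristic, with a placeholder filename) walks a JPEG’s marker segments and reports whether an APP11 segment carrying a JUMBF box, which is where C2PA Content Credentials are embedded, appears to be present. It detects embedding only; validating the signature chain and hard bindings requires a real C2PA validator such as c2patool.

```python
import struct

def looks_c2pa_marked(path: str) -> bool:
    """Crude presence check: does this JPEG carry an APP11 (0xFFEB) segment
    containing a JUMBF box labelled 'c2pa'? Detects embedding only; it does
    NOT validate the manifest's signatures or bindings."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":             # not a JPEG (no SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xFF:                   # fill byte, skip
            i += 1
            continue
        if marker in (0x01, 0xD8) or 0xD0 <= marker <= 0xD7:
            i += 2                           # standalone markers, no length
            continue
        if marker in (0xD9, 0xDA):           # EOI / start of scan: headers end
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        # C2PA embeds its manifest store in APP11 as a JUMBF superbox whose
        # description box carries the 'c2pa' label.
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(looks_c2pa_marked("generated_asset.jpg"))
```

A pass from a check like this only says marking is present; procurement sign-off should still rest on validator output and the vendor’s own documentation.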

For enterprises, this raises a sharper question: which of your active AI vendors actually meets that specification today, and which of them will meet it on August 2? (See SoftwareSeni’s Article 50 compliance overview for a useful breakdown.)

How enterprise vendor evaluation has to evolve

Generative AI procurement has, until recently, been driven primarily by output quality, latency, and cost per asset. Article 50 introduces a different axis. Three questions now belong on every vendor evaluation:

  • Where did the training data come from, and can the vendor evidence its rights position? Models trained on scraped or undisclosed datasets cannot offer the provenance audit trail Article 50 implicitly requires. They also expose the deployer to copyright risk on top of compliance risk.
  • Does the system produce machine-readable provenance signals at the moment of generation, in a standard a downstream pipeline can preserve? Retrofitting metadata onto unmarked assets after the fact is brittle, and the Code of Practice is explicit that marking should be embedded by design.
  • Can the vendor provide a per-asset record of what was generated, when, by whom, and from what training influences? This is the audit trail compliance and legal teams will need when a regulator or auditor asks for evidence (a minimal record sketch follows this list).
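
None of the vendor answers is useful unless the deployer keeps its own records too. Here is one shape that per-asset log could take; the field names and helper are our illustration, not a mandated schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class GenerationRecord:
    """One row per generated asset: enough to answer 'what, when, by whom,
    and from what' when a regulator or auditor asks."""
    asset_sha256: str                   # fingerprint of the delivered file
    model_id: str                       # vendor model name + version
    prompt: str
    parameters: dict
    requested_by: str                   # user or service account
    vendor_provenance_ref: str = ""     # e.g. the vendor's provenance record ID
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_generation(asset_bytes: bytes, **meta) -> GenerationRecord:
    return GenerationRecord(
        asset_sha256=hashlib.sha256(asset_bytes).hexdigest(), **meta
    )

record = log_generation(
    b"...asset bytes...",
    model_id="vendor-model-v2.1",
    prompt="product hero shot, studio lighting",
    parameters={"seed": 42, "steps": 30},
    requested_by="marketing-pipeline@example.com",
)
print(json.dumps(asdict(record), indent=2))
```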

Internally, the same shift requires that AI tools be onboarded the same way other regulated systems already are. That means a documented vendor-risk assessment, a data-protection impact assessment where personal data is implicated, contractual indemnification language tied to training-data rights, and a record-keeping standard for the assets themselves. Marketing and creative teams used to procuring software through annual SaaS budgets will work alongside legal and procurement on a checklist that previously applied to financial systems, not creative ones.

A practical due-diligence checklist for visual AI procurement

Before August 2, every enterprise using or deploying generative visual AI in the EU should be able to answer the following, for every vendor, every model, and every output channel:

  • Is the model trained on licensed, attributable data, or on scraped content? What evidence does the vendor provide?
  • Does every generated asset carry C2PA-aligned provenance metadata at the moment of output?
  • Is there an invisible, pixel-level watermark that survives downstream processing, and is detection tooling available?
  • Is there a per-asset log that records the prompt, the model version, the parameters, and the training-data influences for that generation?
  • For deepfake-adjacent use cases (synthetic spokespeople, voice clones, faces of real people), is there a disclosure workflow built into the publishing pipeline?
  • Does the vendor offer contractual indemnification for IP claims tied to training data?
  • Is the vendor itself certified (SOC 2 Type II, ISO 27001, GDPR-aligned, C2PA-conformant) at the level your audit and security teams expect from any other regulated system?
  • What deployment options exist for EU data-residency requirements: cloud, BYOC, on-premises, on-device?

These questions are not theoretical. They are the operational form Article 50 will take when it lands on a procurement desk. Vendors that cannot answer them on the first pass will be filtered out, not because they lose a feature comparison, but because they introduce regulatory uncertainty into a category where regulators have just made the standard explicit.

How Bria’s Trust pillar maps to Article 50

Bria’s posture on transparency was set well before the Article 50 deadline existed.

The Trust pillar of Bria’s platform is built on several architectural commitments that map directly to the obligations enterprises now need to evidence.

Licensed training data. Bria’s foundation models are trained on 100% licensed content, sourced through partnerships with Getty Images, Alamy, Envato, Freepik, Depositphotos, and over 30 other rights-holders. There is no scraped data exposure, no LAION inheritance, no opaque dataset to defend in legal review. This is the precondition for every other piece of the trust stack.

Visual Birth Certificate. Every asset generated through Bria is associated with a provenance record that traces the output back to the licensed training data that influenced it. The Visual Birth Certificate is a periodic provenance report that documents what was generated under your token, which licensed sources contributed, and how creators were compensated. For audit, copyright registration, and regulator response, this is the per-asset evidence Article 50 implicitly requires.

C2PA Content Credentials by default. Bria has integrated C2PA content credentials across its image generation and editing endpoints. Outputs carry the cryptographic chain-of-custody assertion the EU Code of Practice describes as the most technically mature compliance pathway. C2PA is on by default, not a feature flag.

Attribution and creator compensation. Bria’s attribution engine routes a share of revenue back to the rights-holders whose data informed each generation. This is operational architecture, not a marketing position, and it is the layer that makes the licensed-data story durable rather than declarative.

Full indemnification and enterprise certifications. Bria provides full IP indemnification for outputs generated by its models, holds SOC 2 Type II and ISO 27001 certifications, and operates in alignment with GDPR and EU AI Act compliance standards. For enterprise procurement, this matters because it shifts a category of risk from the buyer’s balance sheet onto the vendor’s.

Deployment flexibility for EU data residency. Bria runs on Bria Cloud, BYOC, on-premises, and on-device. EU customers managing data-residency or sovereignty requirements have an architectural option, not a workaround.

The Trust pillar was not built in response to Article 50. The regulation now codifies the standard the pillar has been operating against. That is the simplest summary of where Bria sits in the new compliance landscape.

What to do this quarter

Three months is enough time to remediate, and not much more. Three concrete priorities for enterprise teams between now and August 2:

  • Audit your active visual-AI vendors against the procurement questions above. Identify the ones that cannot evidence training-data rights, machine-readable provenance, per-asset audit logging, or C2PA support. Decide whether to remediate or replace.
  • Document the disclosure workflow for any synthetic-content use case that touches the public: campaigns featuring AI-generated spokespeople, voice clones, public-interest text, deepfake-adjacent creative. Disclosure is the deployer’s obligation, not the vendor’s.
  • Bring procurement, legal, and creative teams into the same conversation. Article 50 sits across the boundary between operations and brand. The teams that solve it cleanly are the ones that talk early.

The August 2 deadline is not the end of the work. It is the floor under it. Provenance, attribution, and disclosure are about to be commercial standards as much as legal ones, and the enterprises that treat them as architecture rather than compliance overhead will be the ones with the cleanest production pipelines on the other side.

