
Blog · March 2026 · 8 min read

Soft law: Draft 2 is voluntary guidance; binding obligations still come from Article 50 itself.

Article 50 Code of Practice: Draft Status, Scope, and What It Does Not Cover

[Figure: EU AI Act Article 50 transparency framework, showing the distinction between the binding law and the voluntary Code of Practice for AI-generated content marking]
Draft 2 matters, but it is not the whole law. Scope the legal obligation first.

Confirmed law vs draft code

Confirmed law: Article 50 transparency obligations become applicable on 2 August 2026. The law is broader than the code and covers multiple transparency scenarios depending on the AI system and the use case.

Draft soft law: As of 20 April 2026, the Commission-facilitated second draft published on 5 March 2026 remains a voluntary guidance track. It mainly supports compliance with Article 50(2) and Article 50(4) and does not replace the legal duties in Article 50 itself.

Process: Timing remains proposal-stage and should be checked against official Commission or AI Office updates before being treated as fixed.

Too many organisations are collapsing two different things into one bucket: Article 50 law and the draft Code of Practice. That is a mistake. Article 50 is the binding legal text in the AI Act. The Code of Practice is a voluntary implementation aid being developed by the Commission and the AI Office for selected transparency obligations.

Practical takeaway: do not scope your transparency programme only around the Code. Scope it first around the binding Article 50 duties, then use the Code as a supporting operational reference where it actually applies.

What the draft Code actually covers

The official Commission page is unusually clear here. If approved, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations under Article 50(2) and Article 50(4). The second draft was published on 5 March 2026, and the Commission says finalisation is expected in May or early June 2026. In practice, that means the draft Code is centred on the marking and detection of AI-generated content and the labelling of deepfakes and certain AI-generated publications.

That is narrower than the full Article 50 landscape. Article 50 also contains requirements around AI interaction disclosures and certain other transparency situations. So if your teams are building chatbot flows, emotion-recognition use cases, or other transparency-dependent interfaces, this page should not be treated as the whole law.

What the current draft workstream changes

The current draft workstream is more implementation-oriented than earlier public discussion. The drafting process is framed around a revised two-layer marking approach built around secured metadata and watermarking, alongside labelling deepfakes and certain AI-generated publications. Treat these as soft-law implementation signals and best-practice support, not binding legal additions.
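To make the two-layer idea concrete, here is a minimal sketch of the metadata layer: a provenance record attached to generated content with an integrity digest. The field names and the SHA-256 check are illustrative assumptions, not the draft Code's specification; the watermarking layer is typically embedded in the media itself by the generation model and is out of scope for this sketch.

```python
import hashlib

def mark_content(content: bytes, generator: str) -> dict:
    """Layer 1 (sketch): attach provenance metadata with an integrity digest.

    Field names are hypothetical, not taken from the draft Code.
    """
    return {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_metadata(content: bytes, meta: dict) -> bool:
    """Check that the metadata still matches the content it describes."""
    return meta.get("content_sha256") == hashlib.sha256(content).hexdigest()
```

The point of the digest is that metadata detached from its content, or content edited after marking, fails verification; real deployments would use a standardised provenance format rather than an ad hoc dictionary.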

[Infographic: the practical split between binding Article 50 law and the draft Code of Practice focused on marking and labelling AI-generated content]
Use the law to define scope. Use the draft Code to shape implementation where it is relevant.

What your team should do now

1. Separate legal scope from implementation guidance. Build an internal matrix that maps your AI use cases to the actual paragraphs of Article 50 before you map them to any draft code provision.

2. Inventory synthetic-content workflows. Marketing, knowledge-base publishing, customer communications, video/image generation, and public-facing text are the usual blind spots.

3. Test marking and provenance workflows now. The rules apply from 2 August 2026; waiting for the final Code before starting technical design leaves too little lead time.

4. Preserve drafting flexibility. Because the Code remains voluntary and in draft form, avoid hard-coding draft-specific assumptions into product logic. Put them in guidance layers, playbooks, or configurable policy notes instead.
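Step 1 above can be sketched as a small data structure: map each use case to Article 50 paragraphs first, then flag duties that fall outside the draft Code's narrower 50(2)/50(4) scope. The use cases and paragraph assignments below are illustrative placeholders, not legal analysis.

```python
# The draft Code supports only these paragraphs (per the Commission page).
DRAFT_CODE_SCOPE = {"50(2)", "50(4)"}

# Hypothetical internal matrix: use case -> binding Article 50 paragraphs.
use_case_matrix = [
    {"use_case": "customer support chatbot", "law": ["50(1)"]},
    {"use_case": "marketing image generation", "law": ["50(2)"]},
    {"use_case": "synthetic video for campaigns", "law": ["50(2)", "50(4)"]},
]

def duties_outside_code(matrix):
    """Return use cases carrying binding duties the draft Code does not cover."""
    return {
        row["use_case"]: sorted(set(row["law"]) - DRAFT_CODE_SCOPE)
        for row in matrix
        if set(row["law"]) - DRAFT_CODE_SCOPE
    }
```

Running this over the sample matrix surfaces the chatbot: its Article 50(1) disclosure duty is binding law but sits entirely outside the draft Code, which is exactly the scoping gap the article warns about.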

Related resources

For a cleaner legal-operational split, also read our dedicated explainer on Article 50 Code vs Article 50 Law. If you need a practical first-pass assessment, use the Transparency Validator and the 12-question Compliance Checker.

About the author: Abhishek G Sharma is the founder of Move78 International Limited. He holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He brings over 20 years of practitioner experience in cybersecurity, AI governance, and enterprise risk management.

Disclaimer: This analysis is for educational purposes only and does not constitute legal advice. Consult qualified counsel for binding compliance decisions. Last updated: April 2026.

Need More Practical Guidance?

Explore the free EU AI Compass tools and guides to classify your use case, understand your obligations, and move to the next compliance step.