Confirmed law vs draft code
Confirmed law: Article 50 transparency obligations become applicable on 2 August 2026. The law is broader than the Code and covers multiple transparency scenarios depending on the AI system and the use case.
Draft soft law: The Commission published the second draft of the Code of Practice on marking and labelling AI-generated content on 5 March 2026. It is voluntary and mainly supports compliance with Article 50(2) and Article 50(4).
Process: Feedback on Draft 2 runs through 30 March 2026, with the final Code targeted for May-June 2026.
Too many organisations are collapsing two different things into one bucket: Article 50 law and the draft Code of Practice. That is a mistake. Article 50 is the binding legal text in the AI Act. The Code of Practice is a voluntary implementation aid being developed by the Commission and the AI Office for selected transparency obligations.
Practical takeaway: do not scope your transparency programme only around the Code. Scope it first around the binding Article 50 duties, then use the Code as a supporting operational reference where it actually applies.
What the draft Code actually covers
The official Commission page is unusually clear here. If approved, the final Code will serve as a voluntary tool for providers and deployers of generative AI systems to demonstrate compliance with their respective obligations under Article 50(2) and Article 50(4). In practice, that means the draft Code is centred on the marking and detection of AI-generated content and the labelling of deepfakes and certain AI-generated publications.
That is narrower than the full Article 50 landscape. Article 50 also contains requirements around AI interaction disclosures and certain other transparency situations. So if your teams are building chatbot flows, emotion-recognition use cases, or other transparency-dependent interfaces, the Code should not be treated as the whole law.
What Draft 2 changed
Draft 2 is operationally more useful than Draft 1 because it is less theoretical and more implementation-oriented. The current drafting process is structured around two working groups: one focused on marking outputs in a machine-readable and detectable format, and one focused on labelling deepfakes and certain AI-generated publications.
- Machine-readable marking: stronger emphasis on outputs being detectable as artificially generated or manipulated.
- Interoperability and feasibility: technical solutions must be effective, robust, interoperable, and feasible in light of the state of the art and the cost of implementation.
- Cross-cutting coordination: the process also touches the information to be provided to natural persons under Article 50(5), but that does not convert the Code into a complete Article 50 manual.
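To make the machine-readable marking point concrete, here is a minimal sketch of a provenance sidecar in Python. Everything in it is illustrative: the field names, the `build_provenance_record` helper, and the JSON-sidecar approach are assumptions for demonstration, not a format prescribed by the draft Code or by standards such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, generator: str) -> str:
    """Return a machine-readable JSON marker for an AI-generated artefact.

    Illustrative only: the field names below are hypothetical and are not
    taken from the draft Code or from any published provenance standard.
    """
    record = {
        "ai_generated": True,                             # explicit disclosure flag
        "generator": generator,                           # which system produced the output
        "sha256": hashlib.sha256(content).hexdigest(),    # ties marker to exact content
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Attach the marker alongside the generated artefact (e.g. as a .json sidecar).
marker = build_provenance_record(b"example output", "demo-model-v1")
print(marker)
```

The content hash is the detail worth copying even into a different format: it binds the marker to one specific output, so a detector can verify that the marked artefact has not been swapped or altered after marking.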

What your team should do now
1. Separate legal scope from implementation guidance. Build an internal matrix that maps your AI use cases to the actual paragraphs of Article 50 before you map them to any draft code provision.
2. Inventory synthetic-content workflows. Marketing, knowledge-base publishing, customer communications, video/image generation, and public-facing text are the usual blind spots.
3. Test marking and provenance workflows now. The obligations apply from 2 August 2026; waiting for the final Code before starting technical design is lazy planning.
4. Preserve drafting flexibility. Because the Code remains voluntary and in draft form, avoid hard-coding draft-specific assumptions into product logic. Put them in guidance layers, playbooks, or configurable policy notes instead.
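Steps 1 and 4 above can be sketched together: keep the use-case-to-paragraph matrix as data rather than hard-coded product logic, so it can be updated when the final Code lands. The Python below is a hypothetical illustration only; the use cases and paragraph assignments are examples, not legal conclusions, and `obligations_for` is an invented helper.

```python
# Illustrative mapping of internal AI use cases to Article 50 paragraphs.
# The assignments below are examples for demonstration; map your own
# systems to the legal text with qualified counsel.
ARTICLE_50_MATRIX = {
    "customer_chatbot":       ["50(1)"],            # AI interaction disclosure
    "image_generation":       ["50(2)"],            # machine-readable marking
    "emotion_recognition":    ["50(3)"],            # inform exposed persons
    "deepfake_video":         ["50(2)", "50(4)"],   # marking plus labelling
    "ai_written_public_text": ["50(2)", "50(4)"],
}

def obligations_for(use_case: str) -> list[str]:
    """Return the Article 50 paragraphs mapped to a use case.

    Raises KeyError for anything not yet assessed, so unmapped systems
    fail loudly instead of silently passing compliance review.
    """
    try:
        return ARTICLE_50_MATRIX[use_case]
    except KeyError:
        raise KeyError(f"Use case {use_case!r} not yet assessed against Article 50")

print(obligations_for("deepfake_video"))  # ['50(2)', '50(4)']
```

Because the matrix is plain data, draft-Code assumptions can live in a versioned config file or policy playbook and be revised without touching product code, which is exactly the flexibility step 4 argues for.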
What not to do
- Do not present the draft Code as binding law.
- Do not imply it fully covers every Article 50 obligation.
- Do not assume a high-risk classification is required before Article 50 matters. It is a separate transparency track.
For a cleaner legal-operational split, also read our dedicated explainer on Article 50 Code vs Article 50 Law. If you need a practical first-pass assessment, use the Transparency Validator and the 12-question Compliance Checker.
About the author: Abhishek G Sharma is the founder of Move78 International Limited. He holds ISO 42001 Lead Auditor, CISA, CISM, CRISC, and CEH certifications. He brings over 20 years of practitioner experience in cybersecurity, AI governance, and enterprise risk management.
Disclaimer: This analysis is for educational purposes only and does not constitute legal advice. Consult qualified counsel for binding compliance decisions. Last updated: March 2026.
