Free Tools | Training | 10–30 Min Per Module

EU AI Act Training Platform

TARGET: SOC, CLOUD, CISOs | EXECUTION: 100% LOCAL BROWSER | PROGRESS: LOCAL STORAGE ONLY

Most EU AI Act failures are operational. Teams don’t know which role they hold, which risk tier applies, and which logs must exist on day 1.

This platform is a zero-login, privacy-first training layer for security and infrastructure teams. It teaches the Act like an incident response playbook: roles → classification → controls → evidence.

Privacy by Design

  • All lessons, memory drills, and progress tracking run entirely in your browser.
  • No accounts. No uploads. No telemetry. Export is manual and local.
  • Progress lives in your browser’s local storage; clearing site data erases it, so export a copy if you need a record.
  • We do not save or sync your responses to our servers.
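Under the hood, a zero-login setup like this typically persists progress with the browser's Web Storage API. A minimal sketch, assuming an illustrative key and record shape (none of these names are the platform's actual code); the `KVStore` parameter stands in for `window.localStorage` so the same logic works anywhere:

```typescript
// Illustrative sketch: client-side progress persistence, no server round-trips.
// `store` is anything with getItem/setItem -- window.localStorage in a browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface ModuleProgress {
  moduleId: string;
  drillScore: number; // 0-100
  completed: boolean;
}

const KEY = "euaia-training-progress"; // hypothetical storage key

function saveProgress(store: KVStore, progress: ModuleProgress[]): void {
  store.setItem(KEY, JSON.stringify(progress)); // data never leaves the device
}

function loadProgress(store: KVStore): ModuleProgress[] {
  const raw = store.getItem(KEY);
  return raw ? (JSON.parse(raw) as ModuleProgress[]) : [];
}
```

In a browser you would call `saveProgress(window.localStorage, …)`; because clearing site data wipes the store, a manual export is the only durable record.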

Choose a Module

Informational only

  • This training is educational content, not legal advice.
  • Use it to understand roles, risk tiers, and operational controls.
  • In real deployments, keep evidence and involve qualified counsel where needed.

Complete modules in any order. Your progress is stored locally in your browser only and can be exported as an audit-friendly training record.

Modules

Progress


Export/Import uses local files only (FileReader). Nothing is sent to servers.
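The file round-trip can stay entirely client-side: export builds the JSON in memory and triggers a download, import reads the chosen file with FileReader. A sketch of the serialization half, assuming an invented record shape and version tag (not the platform's actual format); the browser glue is noted in comments:

```typescript
// Illustrative sketch: local-only export/import. The browser glue would be:
//   export: new Blob([json]) -> URL.createObjectURL -> <a download> click
//   import: <input type="file"> -> FileReader.readAsText -> parseRecord(reader.result)
interface TrainingRecord {
  version: 1;         // assumed schema version tag
  exportedAt: string; // ISO 8601 timestamp
  modules: { moduleId: string; drillScore: number }[];
}

function serializeRecord(modules: TrainingRecord["modules"], now: Date): string {
  const record: TrainingRecord = {
    version: 1,
    exportedAt: now.toISOString(),
    modules,
  };
  return JSON.stringify(record, null, 2);
}

function parseRecord(json: string): TrainingRecord {
  const data = JSON.parse(json) as TrainingRecord;
  if (data.version !== 1 || !Array.isArray(data.modules)) {
    throw new Error("unrecognized training record"); // reject foreign files
  }
  return data;
}
```

The version check matters on import: a local-files-only design has no server to validate uploads, so the client must reject files it did not produce.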


Operational Tip

Use “Export Training Record” to create a local evidence artifact for internal audits. Pair it with your AI asset inventory and human-oversight logs.
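The pairing described above can be automated once both artifacts exist as structured data. A hypothetical sketch of one such cross-check: flag high-risk systems whose owning team has no completed training on file. Every shape here (the inventory fields, the completions list) is invented for illustration:

```typescript
// Illustrative sketch: cross-check a training record against an AI asset
// inventory. All record shapes are hypothetical, not the platform's format.
interface AssetEntry {
  system: string; // e.g. "vendor-support-bot"
  owner: string;  // owning team
  riskTier: "prohibited" | "high-risk" | "transparency" | "minimal";
}

interface Completion {
  owner: string;    // team that completed the module
  moduleId: string;
}

// Returns the high-risk systems whose owners show no completed training.
function untrainedHighRiskSystems(
  inventory: AssetEntry[],
  completions: Completion[],
): string[] {
  const trained = new Set(completions.map((c) => c.owner));
  return inventory
    .filter((a) => a.riskTier === "high-risk" && !trained.has(a.owner))
    .map((a) => a.system);
}
```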


Disclaimer: This training platform is educational tooling for internal enablement. It does not constitute legal advice. Consult licensed EU counsel for binding interpretations.

Citation-ready summary

EU AI Act Training Platform: A privacy-first, zero-login training page for Deployers and Providers. It runs fully client-side, stores progress locally, and exports a local training record for audit evidence.

Coverage: Provider vs Deployer, risk tiers (Prohibited, High-risk, Transparency), controls (Articles 9–15), Deployer ops (Article 26), plus operational modules (Article 10 data governance, Article 15 security) and enforcement response basics.

How learning works: Pick a module, read the lesson, then complete the Memory Drill. A 100% score marks the module complete.
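The completion rule above is strict enough to state precisely: anything below a perfect drill score leaves the module incomplete. A minimal sketch of that grading logic (function and field names are illustrative):

```typescript
// Illustrative sketch: a Memory Drill completes a module only at 100%.
function gradeDrill(
  answers: number[],
  key: number[],
): { percent: number; complete: boolean } {
  if (answers.length !== key.length || key.length === 0) {
    return { percent: 0, complete: false }; // malformed or empty attempt
  }
  const correct = answers.filter((a, i) => a === key[i]).length;
  const percent = Math.round((correct / key.length) * 100);
  return { percent, complete: percent === 100 };
}
```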

Canonical reference statements

Provider: The party that develops an AI system and places it on the market or puts it into service under its own name or trademark. Providers usually carry the heaviest design and documentation duties.

Deployer: The party that uses an AI system under its authority. Deployers usually prove safe operation through oversight, monitoring, and logs.

High-risk AI: AI used in the high-impact contexts listed in Annex III (for example, employment or essential services). High-risk deployments demand stronger safeguards and stronger evidence.

Prohibited practices: AI practices banned under Article 5 because the potential for harm is deemed unacceptable. If you are unsure, treat it as a stop-and-review item.

Transparency duties: Situations where people must be told AI is involved or content is AI-generated. The goal is to prevent misleading users.
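The tier definitions above imply a triage order the drills lean on: check prohibitions first, then high-impact contexts, then transparency triggers. A hypothetical sketch of that decision flow as a training aid (the boolean flags are invented simplifications, and this is an ordering mnemonic, not a legal classifier; transparency duties can also attach to high-risk systems):

```typescript
// Illustrative triage sketch for training purposes only -- not legal advice.
type Tier = "prohibited" | "high-risk" | "transparency" | "minimal";

interface UseCase {
  prohibitedPractice: boolean;  // matches a banned practice (Article 5)
  highImpactContext: boolean;   // e.g. employment, essential services
  interactsWithPeople: boolean; // chatbots, AI-generated content
}

function triage(u: UseCase): Tier {
  if (u.prohibitedPractice) return "prohibited";    // stop-and-review
  if (u.highImpactContext) return "high-risk";      // Articles 9-15 control stack
  if (u.interactsWithPeople) return "transparency"; // disclosure duties
  return "minimal";
}
```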

Coverage map

Core: Provider vs Deployer, scope triggers, risk tiers (Prohibited, High-risk, Transparency), and role boundary drills.

High-risk controls: Practical control stack aligned to Articles 9–15, plus data governance and security basics.

Deployer operations: Running AI safely (OPS), monitoring, logging, human oversight patterns, and incident response habits.

Case studies: Realistic enterprise scenarios, including essential services decisions, vendor AI customer support, and workplace use cases.

Violations scenarios: Common failure patterns and a simple response playbook (contain, investigate, document, remediate, prevent).