Provider evidence starter

EU AI Act Post-Market Monitoring Plan Starter

A practical Article 72 evidence starter for defining how high-risk AI providers monitor real-world performance, collect signals, review issues, and retain corrective-action records.

Free · No login · Local download · Educational only

XLSX worksheet

Post-market monitoring plan workbook

Define monitoring signals, thresholds, owners, issue triage, corrective actions, and escalation evidence.

Download XLSX worksheet →

Markdown companion

Post-market monitoring markdown outline

Use this outline to draft the monitoring plan narrative before converting it into your controlled documentation system.

Download Markdown →

What post-market monitoring means in practice

Post-market monitoring is the provider-side system for collecting, reviewing, and acting on information about high-risk AI system performance after release or deployment. Under Article 72, the monitoring system is based on a post-market monitoring plan that forms part of the technical documentation referred to in Annex IV.

Monitoring plan starter map

| Plan element | Operational question | Evidence to retain |
| --- | --- | --- |
| Signal sources | Where will the team detect degraded performance, misuse, complaints, incidents, or unexpected outcomes? | Helpdesk records, deployer feedback, logs, audit reports, model performance reviews. |
| Thresholds | Which events trigger investigation, corrective action, suspension, reporting, or legal review? | Threshold register, escalation rules, risk acceptance records. |
| Ownership | Who reviews signals, who decides action, and who signs off closure? | RACI matrix, review calendar, owner log, sign-off trail. |
| Corrective action | How are issues investigated, remediated, retested, communicated, and retained? | Corrective-action tickets, retest evidence, release notes, communications. |
| Incident escalation | Which events may require serious-incident review or regulator-facing advice? | Incident register, triage notes, legal review requests, notification decision records. |
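As a rough illustration, the plan map above can double as a completeness check before a formal review: each plan element should have at least one retained evidence type. The element keys and evidence lists below are illustrative placeholders, not regulatory terms.

```python
# Hypothetical sketch: checking that every plan element in the starter map
# has at least one evidence type recorded. Field names are illustrative.
PLAN = {
    "signal_sources": ["helpdesk records", "deployer feedback", "logs"],
    "thresholds": ["threshold register", "escalation rules"],
    "ownership": ["RACI matrix", "review calendar", "sign-off trail"],
    "corrective_action": ["tickets", "retest evidence", "release notes"],
    "incident_escalation": [],  # no evidence recorded yet
}

# Elements with no retained evidence are gaps to close before review.
missing = [element for element, evidence in PLAN.items() if not evidence]
print(missing)  # ['incident_escalation']
```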

Minimum workflow to implement before formal review

  1. Define signal sources. Document how you will collect operational feedback, logs, test results, drift indicators, complaints, and deployer communications.
  2. Set review cadence and thresholds. Decide what gets reviewed daily, weekly, monthly, or event-driven, and what triggers escalation.
  3. Connect monitoring to risk management. Map each recurring issue back to the risk register, technical documentation, and corrective-action process.
  4. Retain closure evidence. Keep records showing who reviewed the signal, what decision was made, what action was taken, and whether the fix was verified.
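The workflow above can be sketched as a minimal triage rule: signals arrive from defined sources (step 1), a threshold register decides what escalates (step 2), and closure evidence records who reviewed, decided, and verified (step 4). Everything here, including the `Signal` fields, `THRESHOLDS` values, and triage outcomes, is a hypothetical illustration of one way to encode such rules, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative escalation thresholds (step 2): events per review window.
THRESHOLDS = {
    "complaint": 5,        # investigate at 5+ complaints in a window
    "drift_indicator": 1,  # any drift signal triggers review
}

@dataclass
class Signal:
    source: str       # step 1: helpdesk, logs, deployer feedback, ...
    kind: str         # complaint, drift_indicator, incident, ...
    count: int
    detected_on: date

@dataclass
class ClosureRecord:
    signal: Signal
    reviewer: str     # step 4: who reviewed the signal
    decision: str     # what was decided
    action: str       # what was done
    verified: bool    # whether the fix was retested

def triage(signal: Signal) -> str:
    """Step 2: apply the threshold register to pick the next step."""
    if signal.kind == "incident":
        return "escalate"        # may need serious-incident review
    if signal.count >= THRESHOLDS.get(signal.kind, 1):
        return "investigate"     # open a corrective-action ticket
    return "monitor"             # stays under the normal review cadence

s = Signal(source="helpdesk", kind="complaint", count=6,
           detected_on=date(2025, 3, 1))
print(triage(s))  # investigate
```

The point of the sketch is step 3's traceability: each `ClosureRecord` links a signal back to a decision, an action, and a verification flag, which is the evidence trail described above.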

What this starter does not cover

This starter does not decide whether an event is legally reportable, whether a serious incident has occurred, or whether a system must be withdrawn. It is a planning and evidence structure that should be reviewed with qualified legal, risk, product, and conformity specialists.

FAQ

Is this only for providers?

Primarily, yes. Post-market monitoring under Article 72 is a provider-side obligation for high-risk AI systems. Deployers can still use the structure to understand what evidence they should request from providers or escalate internally.

Does this decide whether an incident is reportable?

No. It helps collect facts and define escalation rules. Reportability decisions should be reviewed against the applicable legal text, sector obligations, and qualified advice.

How often should the plan be updated?

Update it when the system, intended purpose, deployment context, performance profile, incident history, user group, or legal guidance changes materially. A fixed review cadence should also be documented.

Source and review note

This page is educational and should be reviewed against Regulation (EU) 2024/1689, European Commission materials, national authority guidance, sector rules, and qualified legal or conformity-assessment advice where relevant. It does not confirm legal compliance and is not legal advice.