Quick answer

Before deploying high-risk AI, ask the vendor for a deployer-facing evidence pack: intended purpose, role classification, instructions for use, limitations, accuracy and cybersecurity claims, input-data requirements, human oversight measures, logging access, change-control triggers, incident contacts, and conformity evidence. Keep gaps visible. Vendor claims do not replace deployer evidence.

“We are AI Act compliant” is not evidence. It is a claim. Procurement should ask for the file behind the claim.

Why the vendor handoff matters

A deployer is the organisation using an AI system under its authority. A provider is the party that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark. In procurement language both may be called “the vendor”, but the EU AI Act roles are not interchangeable.

For high-risk AI systems, the vendor handoff matters because the deployer’s Article 26 work depends on the information provided with the system. If your team cannot understand the intended purpose, limitations, oversight controls, logging mechanisms, input-data assumptions, and escalation process, it cannot operate the system responsibly or defend its evidence trail.

Practical rule

Do not treat “we are EU AI Act compliant” as evidence. Treat it as a claim that must be supported by documents, roles, controls, and contractual obligations.

Minimum vendor evidence pack

This is the evidence set a deployer should request before signing or before production go-live. It is not a substitute for legal advice, and it is not a claim that every vendor must disclose every internal technical file. It is a practical handoff checklist for deployer due diligence.

| Evidence item | What to ask the vendor for | Why deployers need it |
| --- | --- | --- |
| Role and intended-purpose statement | Provider/deployer/importer/distributor role statement, intended purpose, target users, prohibited or unsupported uses. | Prevents accidental scope drift and supports your internal role and use-case record. |
| High-risk classification basis | Article 6 / Annex III reasoning, including any documented basis for “not high-risk” treatment where the system appears to touch Annex III. | Stops the procurement team from accepting a classification conclusion without evidence. |
| Instructions for use | Clear operating instructions, limitations, expected conditions of use, maintenance needs, hardware/software dependencies, and user responsibilities. | Article 26 requires deployers to use high-risk systems according to the instructions for use. |
| Performance and limitation evidence | Accuracy metrics, robustness expectations, cybersecurity posture, known failure modes, group-specific performance where relevant. | Supports operational acceptance, monitoring design, and risk owner sign-off. |
| Input-data specifications | Required input fields, data quality assumptions, prohibited inputs, representativeness expectations, and data-preparation instructions. | If the deployer controls input data, Article 26 requires relevance and sufficient representativeness for the intended purpose. |
| Human oversight design | Required reviewer role, intervention points, override options, alert logic, training expectations, escalation path, and automation-bias warnings. | Human oversight must be operational, not just a sentence in a policy. |
| Logging and traceability | What logs are generated, where they live, who controls them, retention configuration, export method, interpretation guidance, and audit access. | Article 12 supports traceability, and Article 26 requires deployers to keep logs under their control for at least six months unless other law applies. |
| Conformity and compliance evidence | Conformity assessment status, declaration of conformity where applicable, CE marking status, registration evidence, and a deployer-facing technical documentation summary. | Lets procurement distinguish real compliance evidence from sales collateral. |
| Change-control process | Release notes, notification triggers, model/system update policy, re-testing support, and reassessment triggers for material changes. | Silent changes can break classification, oversight, logging, or FRIA/DPIA assumptions. |
| Incident and post-market handoff | Incident reporting contacts, severity thresholds, investigation support, post-market monitoring feedback channel, and response time commitments. | Deployers need a clear path when monitoring reveals risk or serious incidents. |
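The evidence pack above can also be tracked as a structured checklist so gaps stay visible to procurement and governance, as the quick answer recommends. The sketch below is illustrative only: the item names mirror the table, but the statuses, class names, and vendor name are assumptions, not anything defined by the Act.

```python
from dataclasses import dataclass, field

# Illustrative status values for a deployer-side evidence tracker.
RECEIVED, PARTIAL, MISSING = "received", "partial", "missing"

@dataclass
class EvidenceItem:
    name: str
    status: str = MISSING
    notes: str = ""

@dataclass
class VendorEvidencePack:
    vendor: str
    items: list[EvidenceItem] = field(default_factory=list)

    def gaps(self) -> list[str]:
        # Anything not fully received stays visible as a gap.
        return [i.name for i in self.items if i.status != RECEIVED]

pack = VendorEvidencePack("ExampleVendor", [
    EvidenceItem("Role and intended-purpose statement", RECEIVED),
    EvidenceItem("High-risk classification basis", PARTIAL, "Annex III reasoning pending"),
    EvidenceItem("Instructions for use", RECEIVED),
    EvidenceItem("Logging and traceability", MISSING),
])
print(pack.gaps())  # ['High-risk classification basis', 'Logging and traceability']
```

The point of the structure is the `gaps()` report: a partial or missing item is never silently treated as received, which keeps the evidence trail honest at sign-off.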

Six-step handoff process

Use this process before production deployment, not after the first exception. It gives procurement, legal, privacy, security, and business owners one evidence spine.

  1. Confirm the role and use case. Record whether the vendor is provider, distributor, importer, or another supplier role, then map your deployment context.
  2. Collect the instructions-for-use pack. Reject generic marketing PDFs. Ask for operating instructions tied to intended purpose, limitations, oversight, input data, logs, and maintenance.
  3. Request classification and conformity evidence. Ask for the classification rationale and applicable conformity, declaration, CE marking, or registration evidence.
  4. Validate oversight, logging, and input-data controls. Confirm that your team can operate the controls and retain the evidence in your environment.
  5. Define update and incident handoff obligations. Put change triggers, incident contacts, response windows, and investigation support into the contract or procurement file.
  6. Record internal sign-off and unresolved gaps. Use a risk acceptance decision when evidence is partial. Do not bury unresolved gaps inside procurement notes.
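The six steps above can be sketched as a go-live gate: deployment is approved only when every step is completed or explicitly risk-accepted, so unresolved gaps surface instead of disappearing into procurement notes. The step identifiers and function below are hypothetical, a minimal sketch of the decision logic rather than a prescribed implementation.

```python
# Hypothetical gate identifiers mirroring steps 1-5 of the handoff process;
# step 6 is the decision itself.
REQUIRED_STEPS = [
    "role_and_use_case_confirmed",
    "instructions_for_use_collected",
    "classification_and_conformity_evidence",
    "oversight_logging_input_controls_validated",
    "update_and_incident_obligations_contracted",
]

def go_live_decision(completed: set[str], risk_accepted: set[str]) -> tuple[bool, list[str]]:
    """Approve only when every step is completed or explicitly risk-accepted."""
    open_gaps = [s for s in REQUIRED_STEPS
                 if s not in completed and s not in risk_accepted]
    return (not open_gaps, open_gaps)

approved, open_gaps = go_live_decision(
    completed={"role_and_use_case_confirmed", "instructions_for_use_collected"},
    risk_accepted={"classification_and_conformity_evidence"},
)
print(approved)   # False
print(open_gaps)  # the two unvalidated steps remain visible
```

A risk-accepted step still appears in the sign-off record, but it no longer blocks the decision; only steps with neither evidence nor an explicit acceptance keep the gate closed.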
[Figure: EU AI Act vendor evidence handoff map from procurement request to deployer sign-off]
A clean vendor handoff gives deployers a route from commercial evaluation to operational evidence.

Vendor red flags before deployment

No intended-purpose boundary

If the vendor cannot define what the system is and is not designed to do, your risk classification and oversight design are unstable.

No log export or access model

If the deployer cannot access or interpret logs under its control, the monitoring evidence file is weak from day one.

No human override design

If reviewers cannot interpret, challenge, or override outputs, “human in the loop” is a slogan, not oversight.

Silent model or workflow updates

If updates happen without notice, your Article 26, DPIA, FRIA, and contractual assumptions can become stale without anyone noticing.

Contract controls to request

These are not legal clauses. They are procurement requirements your counsel can convert into contract language.

| Control | Business requirement | Owner to review |
| --- | --- | --- |
| Evidence delivery | Vendor must deliver and maintain the deployer evidence pack before go-live. | Procurement + legal + AI governance |
| Change notification | Vendor must notify material changes before deployment where the change affects intended purpose, performance, oversight, logs, or risk controls. | Product owner + legal |
| Incident support | Vendor must support incident investigation, causal analysis, and regulator-facing evidence where applicable. | Security + legal + compliance |
| Log access | Vendor must define generated logs, retention configuration, export rights, and interpretation support. | Security + AI governance |
| Audit support | Vendor must provide reasonable evidence support for audits, regulator questions, and internal governance reviews. | Compliance + procurement |
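One of these controls, log access, can be partially verified rather than just contracted. As a minimal sketch, a deployer could compare the vendor's configured log retention against the at-least-six-months floor referenced under Article 26. The 183-day figure and function name are illustrative assumptions; the actual minimum depends on applicable law.

```python
# Illustrative floor: Article 26 expects deployers to keep logs under their
# control for at least six months unless other law applies. ~183 days is an
# assumed approximation of six months, not a figure from the Regulation.
MIN_RETENTION_DAYS = 183

def retention_ok(configured_days: int,
                 legal_minimum_days: int = MIN_RETENTION_DAYS) -> bool:
    """True when the configured log retention meets or exceeds the floor."""
    return configured_days >= legal_minimum_days

print(retention_ok(90))   # False: a 90-day platform default falls short
print(retention_ok(365))  # True
```

Checks like this belong in step 4 of the handoff process: they turn a contractual promise into an operational control the deployer can test in its own environment.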

Use the free EU AI Compass tools first

Start with the free evidence path. Do not buy a toolkit until you know what gap you are solving.

If the free tools reveal repeated evidence gaps across several systems, E1/E2 can be used later as the implementation layer for structured templates, control mapping, and board-ready evidence packs. Do not skip the diagnostic step.

Source and review note

This page was reviewed against official EU AI Act Service Desk pages and Regulation (EU) 2024/1689 source material available as of 2026-04-30. It is operational guidance for evidence planning, not legal advice.

Legal disclaimer: This page is educational and operational guidance only. It does not provide legal advice and does not guarantee EU AI Act compliance. Validate system-specific decisions with qualified legal, privacy, procurement, and security advisers.

About the author: Abhishek G Sharma is the founder of Move78 International Limited and holds ISO 42001 LA, ISO 27001 LA, CISA, CISM, CRISC, CEH, CCSK, CAIGO, and CAIRO certifications.