AI Procurement: What to Demand in a Vendor's Governance Documentation
When organisations procure traditional software, the governance due diligence checklist is mature. AI procurement is different — the systems are not deterministic, their outputs depend on training data and deployment context the buyer does not control, and the consequences of inadequate due diligence are higher. Here is what to ask.
Key Takeaways
- AI procurement due diligence must go beyond standard software SLAs, because AI systems are non-deterministic and their outputs depend on training data and deployment context the buyer does not control.
- Nine key governance demands: model documentation, data provenance, EU AI Act classification, performance disparities, logging capability, human override, change management, incident response, and data processing terms.
- Absence of model documentation from a vendor means the organisation is deploying a system it does not understand — this is not a minor gap.
- Vendors who cannot answer governance questions without lengthy escalation typically have not operationalised AI governance themselves — that residual risk lands with the buyer.
- An AI system whose governance documentation cannot be reviewed is not an AI system that can be governed. Contractual commitments are not a substitute for operational transparency.
When an organisation procures traditional software, the governance due diligence is well-established: security review, data processing agreements, SLA terms, exit provisions. The checklist is mature and the risks are understood.
AI procurement is different. The systems are not deterministic — their outputs depend on training data, model architecture, deployment context, and ongoing updates that the purchasing organisation does not control. The governance questions are more complex, and the consequences of inadequate due diligence are higher.
A growing number of enterprise organisations are discovering this after procurement — when an AI system produces a discriminatory output, creates a compliance gap, or proves impossible to audit. The time to address governance in AI procurement is before the contract is signed.
What to Demand From AI Vendors
AI Vendor Governance — 9-Point Due Diligence Checklist
Ask every question before signing. Inability to answer is itself material information.
| Demand | What to verify |
| --- | --- |
| Model documentation | Training data, performance benchmarks across populations, known limitations, intended use cases |
| Data provenance & consent | DPDP Act / GDPR compliance for training data; copyright liability on training content |
| EU AI Act classification | Risk tier assessment; conformity assessment if high-risk; compliance documentation |
| Performance disparities | Accuracy across demographic groups, languages, input types; disparity data as standard deliverable |
| Logging & audit capability | Can reconstruct the decision process after the fact; granularity sufficient for regulatory audit |
| Human override capability | Usable in practice, not just nominal; can a reviewer override without significant technical effort? |
| Change management | Notice before material model updates; impact assessment; organisation's compliance position protected |
| Incident response SLAs | Defined process for harmful outputs; SLA commitments; escalation path for serious incidents |
| Data processing terms | Is input data used for training? Data residency requirements; sub-processor approval rights |
1. Model documentation. Can the vendor provide technical documentation covering training data sources, performance benchmarks across different populations, known limitations, and intended use cases? Absence means the organisation is deploying a system it does not understand.
2. Data provenance and consent. What data was used to train the model? Is it processed in accordance with applicable data protection law, including India's DPDP Act and the EU's GDPR? Are there copyright claims on training data that could create downstream liability?
3. EU AI Act classification. Has the vendor assessed the system's risk classification under the EU AI Act? If the system falls into a high-risk category, what compliance documentation exists? Has a conformity assessment been completed?
4. Performance disparities. Does the system perform consistently across different demographic groups, languages, and input types? Performance disparity data — showing how accuracy varies across populations — should be a standard deliverable for any AI used in consequential decisions.
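As a concrete illustration of the deliverable this demand describes, the sketch below computes accuracy per group and the resulting disparity gap. The group labels, records, and metric choice are all hypothetical; a real assessment would use the vendor's benchmark data and the metrics relevant to the use case.

```python
# Hypothetical sketch of the disparity data a buyer should expect.
# Groups, records, and the accuracy metric are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented evaluation records: (demographic group, prediction, ground truth).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]
acc = accuracy_by_group(records)
# The headline figure for the due diligence file: the gap between the
# best- and worst-served groups.
disparity = max(acc.values()) - min(acc.values())
```

A vendor who cannot produce numbers of this shape for the populations the buyer actually serves has not measured what the checklist asks for.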
5. Logging and audit capability. Can the system produce logs sufficient to reconstruct its decision process after the fact? This is required for EU AI Act compliance on high-risk systems and increasingly required by internal governance standards.
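What "sufficient to reconstruct the decision process" means in practice can be sketched as a log record. The field names below are illustrative assumptions, not drawn from any specific standard, but each one answers a question an auditor will ask: which model, which version, what input, what output, who invoked it, was it overridden.

```python
# Minimal sketch of a decision-log record. Field names are illustrative
# assumptions; a real schema would follow the organisation's audit standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    timestamp: str      # when the decision was made (UTC, ISO 8601)
    model_id: str       # which system produced the output
    model_version: str  # exact version, so vendor updates stay traceable
    input_hash: str     # hash of the input: integrity without storing raw PII
    output: str         # the decision or prediction itself
    confidence: float   # model confidence, where the vendor exposes it
    operator_id: str    # who (or what process) invoked the system
    overridden: bool    # whether a human reviewer overrode the output

def log_decision(model_id, model_version, raw_input, output,
                 confidence, operator_id, overridden=False):
    entry = DecisionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_id=model_id,
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        confidence=confidence,
        operator_id=operator_id,
        overridden=overridden,
    )
    # Serialise for an append-only store; the record, not the format, matters.
    return json.dumps(asdict(entry))
```

If the vendor's system cannot emit the equivalent of this record for every consequential output, after-the-fact reconstruction is not possible.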
6. Human override capability. Can a human review and override the system's output in normal operation? Is this usable in practice, or is it a nominal feature requiring significant technical effort to invoke?
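One way to tell a usable override from a nominal one is whether it is a routine step in the decision flow. The sketch below is an assumed design, not any vendor's API: outputs below a confidence threshold are routed to a review queue, and a reviewer replaces an output with a single call rather than an engineering ticket.

```python
# Hypothetical sketch of an override gate. The threshold and record shape
# are assumptions; the point is that override is one call, not a project.
REVIEW_THRESHOLD = 0.8  # assumed cut-off, set per use case

def route(output, confidence):
    """Send low-confidence outputs to human review instead of auto-approval."""
    if confidence < REVIEW_THRESHOLD:
        return {"status": "pending_review", "output": output}
    return {"status": "auto_approved", "output": output}

def apply_override(decision, reviewer_output):
    """A reviewer replaces the model's output without technical effort."""
    return {"status": "overridden", "output": reviewer_output}
```

If invoking the equivalent of `apply_override` requires vendor support tickets or redeployment, the capability is nominal in the sense the checklist warns against.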
7. Update and change management. How does the vendor handle model updates? Will an update change system behaviour in ways that affect the organisation's compliance position? What notice is provided before material changes?
8. Incident response SLAs. What is the vendor's process for reporting and remediating AI incidents — outputs that cause harm, discriminate, or produce systematic errors? What are the SLA commitments?
9. Data processing terms. What happens to the data the organisation inputs into the system? Is it used for training? Under what conditions? What are the data residency requirements?
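The nine demands above can also be tracked in machine-readable form, so gaps are flagged consistently across vendors. The answer scale below ("evidenced" versus merely "claimed") is an assumption of this sketch, not a standard, but it encodes the article's point that a contractual commitment without demonstration is itself a gap.

```python
# Hypothetical sketch: recording vendor answers against the nine demands.
# The identifiers and the three-level answer scale are assumptions.
CHECKLIST = [
    "model_documentation", "data_provenance", "eu_ai_act_classification",
    "performance_disparities", "logging_and_audit", "human_override",
    "change_management", "incident_response", "data_processing_terms",
]

def assess(answers):
    """answers maps checklist item -> "evidenced", "claimed", or "unanswered".

    Anything short of documented evidence is flagged as a gap the buying
    organisation will have to manage itself.
    """
    gaps = [item for item in CHECKLIST
            if answers.get(item, "unanswered") != "evidenced"]
    return {"complete": not gaps, "gaps": gaps}
```

Run against each shortlisted vendor, the same structure makes "inability to answer" visible as material information rather than a forgotten line in a meeting note.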
Red Flags in Vendor Responses
Vendors who cannot answer these questions without lengthy escalation typically have not operationalised AI governance themselves. That is not necessarily disqualifying — but it is important information about the residual risk the buying organisation will need to manage independently.
Vendors who actively resist transparency questions, claim proprietary constraints prevent disclosure, or provide inconsistent answers represent a higher risk category. An AI system whose governance documentation cannot be reviewed is not an AI system that can be governed.
Contractual commitments are not a substitute for operational transparency. A vendor can commit contractually to standards they cannot demonstrate — and that commitment will not protect the organisation when a governance failure occurs.
Imagine Works advises enterprise organisations on AI governance and procurement due diligence. Get in touch to discuss your vendor assessment framework.