General-Purpose AI Models and the EU AI Act: What the August 2025 Obligations Mean
The EU AI Act's General-Purpose AI provisions became enforceable in August 2025. For organisations using foundation model APIs, fine-tuning GPAI models, or building products on large language models, the obligations are direct and material. Here is what changed and what it requires.
Key Takeaways
- GPAI model obligations under the EU AI Act became enforceable from 2 August 2025 — the first major post-prohibition deadline under the Act.
- Two tiers: standard GPAI models (transparency and documentation obligations) and systemic-risk models (training compute above 10²⁵ FLOPs — additional red-teaming, incident reporting, and cybersecurity obligations).
- Most enterprise organisations are GPAI deployers, not providers — but fine-tuning a GPAI model for distribution may change that characterisation.
- Deployers must verify that their GPAI providers have met documentation obligations — and integrate that documentation into their own risk assessments.
- An organisation's application-level high-risk classification is independent of GPAI compliance — both can apply simultaneously.
When most organisations think about EU AI Act compliance, they focus on the systems they deploy: tools used in HR, customer-facing decisions, or operational processes. What many have not yet addressed is their use of general-purpose AI models — and the distinct compliance obligations that became enforceable from August 2025.
The EU AI Act introduces a specific regulatory regime for General-Purpose AI (GPAI) models — a regime distinct from the application-level rules most compliance programmes have focused on. Whether your organisation accesses GPAI via API, fine-tunes foundation models, or deploys products built on GPAI, these obligations now apply directly.
What Is a General-Purpose AI Model?
Under the Act, a GPAI model is an AI model trained on broad data at significant scale, capable of competently performing a wide range of distinct tasks, and intended for release for use in many downstream systems and applications. In practice, this describes the major foundation models — large language models, multimodal models, and other frontier systems — provided by OpenAI, Google, Anthropic, Meta, Mistral, and others.
The regulation distinguishes between two tiers based on computational scale.
The Two-Tier GPAI Framework
Regulatory Reference: GPAI Models — Two-Tier Obligation Framework
EU AI Act (Regulation 2024/1689) · Enforceable from 2 August 2025
- Standard GPAI models (training compute ≤ 10²⁵ FLOPs): technical documentation, transparency to downstream deployers, a copyright compliance policy, and a summary of training data content.
- Systemic-risk GPAI models (training compute > 10²⁵ FLOPs): all of the above, plus adversarial testing (red-teaming), incident reporting to the EU AI Office, cybersecurity safeguards, and energy consumption reporting.

Deployer vs provider — the critical distinction:
- You are a deployer if you access GPAI via API and integrate it into your own products or internal tools. The provider carries the primary GPAI compliance burden.
- You may be a provider if you fine-tune a GPAI model and make it available — even internally across subsidiaries. Seek legal advice on your classification.

Application-level high-risk classification is independent of GPAI compliance — both frameworks can apply to the same system simultaneously.
Standard GPAI Models — All GPAI models must meet a baseline set of obligations: technical documentation covering model architecture and training, transparency to downstream deployers, a copyright compliance policy for training data, and a summary of training data content sufficient for deployers to understand the system's capabilities.
Systemic Risk GPAI Models — Models trained with cumulative compute exceeding 10²⁵ floating-point operations (FLOPs) carry additional obligations: adversarial testing (red-teaming) before deployment, incident reporting to the EU AI Office, cybersecurity safeguards, and energy consumption reporting. This threshold currently captures the largest frontier models from the major providers.
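To get an intuition for where the 10²⁵ FLOPs threshold sits, a rough sketch using the widely cited heuristic that training compute ≈ 6 × parameters × training tokens can help. This is an engineering approximation, not the Act's legal methodology, and the model sizes below are hypothetical illustrations, not figures for any named provider's model:

```python
# Rough training-compute estimate using the common heuristic
# FLOPs ≈ 6 × parameters × training tokens (forward + backward pass).
# Sizes below are hypothetical, not any specific provider's model.

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act presumption threshold (FLOPs)

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

examples = {
    "70B params on 2T tokens": training_flops(70e9, 2e12),
    "400B params on 15T tokens": training_flops(400e9, 15e12),
}

for label, flops in examples.items():
    tier = "systemic-risk" if flops > SYSTEMIC_RISK_THRESHOLD else "standard"
    print(f"{label}: {flops:.1e} FLOPs -> {tier} tier")
```

On these assumptions, a 70B-parameter model trained on 2T tokens lands around 8.4 × 10²³ FLOPs (standard tier), while a 400B-parameter model on 15T tokens lands around 3.6 × 10²⁵ FLOPs, above the systemic-risk presumption.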
Are You a Deployer or a Provider?
The critical distinction under the GPAI framework is whether your organisation is a provider (releasing a GPAI model for use by others) or a deployer (using a GPAI model as part of your own system or service).
Most enterprise organisations are deployers. They access GPAI via API and integrate it into internal tools, customer products, or decision-support systems. As a deployer, your primary obligation is to comply with the terms and documentation provided by the GPAI model provider. The provider carries the primary GPAI compliance burden.
However, if your organisation fine-tunes a GPAI model and makes it available — even internally across subsidiaries — you may be considered a downstream provider with corresponding obligations. Legal advice on this characterisation is important for organisations running significant fine-tuning programmes.
Practical Implications for Enterprise Organisations
The August 2025 deadline means GPAI obligations are now enforceable. For enterprise AI programmes, the priorities are:
- Inventory your GPAI dependencies. Identify every GPAI model your organisation accesses — directly or through third-party products. Map these against the provider's published compliance documentation.
- Verify provider documentation. GPAI providers must make technical documentation available to downstream deployers and publish a summary of training content. This documentation should inform your own system-level risk assessments and technical documentation.
- Assess downstream risk. If you are building products on GPAI that serve EU-based users, your application-level obligations under the Act — including any high-risk classification of your application — are not reduced because the underlying model is GPAI-compliant. Both frameworks apply independently.
- Review fine-tuning scope. If your organisation fine-tunes models for internal deployment, legal characterisation of your role under the Act is essential before the next compliance deadline.
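The inventory and review steps above can be sketched as a simple record per GPAI dependency. The field names, the flagging rule, and the example entry are assumptions for this sketch — the Act prescribes no such schema — but they capture the questions each entry should answer:

```python
# Illustrative sketch of a GPAI dependency inventory record.
# Field names and the example entry are assumed for this sketch,
# not a schema prescribed by the EU AI Act.
from dataclasses import dataclass

@dataclass
class GPAIDependency:
    model_name: str
    provider: str
    access_route: str               # e.g. "direct API", "embedded in vendor product"
    fine_tuned: bool                # fine-tuning may shift you toward provider status
    redistributed: bool             # made available to others, even internally?
    provider_docs_reviewed: bool    # provider's GPAI documentation checked?
    used_in_high_risk_system: bool  # application-level high-risk rules apply independently
    notes: str = ""

    def needs_legal_review(self) -> bool:
        """Flag entries where downstream-provider status is plausible."""
        return self.fine_tuned and self.redistributed

inventory = [
    GPAIDependency("frontier-llm-x", "ExampleAI", "direct API",
                   fine_tuned=True, redistributed=True,
                   provider_docs_reviewed=False, used_in_high_risk_system=True),
]

flagged = [d.model_name for d in inventory if d.needs_legal_review()]
print(flagged)  # entries needing legal characterisation review
```

Even as a spreadsheet rather than code, the point is the same: every GPAI dependency should carry an explicit answer to the fine-tuning, redistribution, documentation, and high-risk questions, so the entries needing legal review surface mechanically.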
What This Does Not Replace
GPAI compliance is not a substitute for application-level compliance. If your organisation uses a GPAI model as the foundation for a high-risk AI system — one used in HR, credit, educational access, or critical infrastructure — the high-risk obligations of the Act still apply to your system. You are responsible for the system-level risk management, documentation, human-in-the-loop (HITL) oversight design, and conformity assessment requirements that the Act mandates, even if the underlying GPAI model is fully compliant.
Imagine Works designs EU AI Act compliance frameworks for enterprise organisations. Book a governance discovery call to discuss your GPAI and high-risk system obligations.
Related Service
AI Governance & Risk Design
Designing the governance framework and risk architecture that keeps your AI systems compliant, auditable, and board-ready — before regulation forces the issue.
More on AI Governance
How to Design an AI Incident Response Process
AI incidents are not IT incidents. When a system produces a wrong, discriminatory, or harmful output systematically, the incident may have been occurring for weeks before anyone notices, the harm distributed across thousands of individuals, and the cause difficult to isolate. AI incident response requires its own framework.
AI Procurement: What to Demand in a Vendor's Governance Documentation
When organisations procure traditional software, the governance due diligence checklist is mature. AI procurement is different — the systems are not deterministic, their outputs depend on training data and deployment context the buyer does not control, and the consequences of inadequate due diligence are higher. Here is what to ask.
What Is an AI Model Card — and Why Every Enterprise AI System Needs One
Every AI system has a design history: what data it was trained on, what it was optimised for, where it performs well and where it does not. Almost none of this is documented in a way that the people operating or affected by the system can access. A model card changes that.