EU AI Act High-Risk Classification: A Plain-English Guide for Business Leaders
The EU AI Act is live. If your organisation deploys AI systems in the EU, some of them may already be classified as high-risk — triggering significant compliance obligations. Here's what high-risk means, how to identify it, and what it requires.
Key Takeaways
- The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and applies to any organisation deploying AI for EU-based users — regardless of where it is headquartered.
- High-risk AI systems include AI used in HR/recruitment, credit scoring, educational access, essential services, and critical infrastructure.
- High-risk compliance requires a risk management system, technical documentation, audit logging, human oversight design, and a conformity assessment before deployment.
- Penalties reach €35 million or 7% of global annual turnover for the most serious violations.
- The August 2026 deadline is the critical enforcement date for high-risk system obligations. Organisations starting now have a realistic but not comfortable timeline.
The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the European Union. It is the world's first comprehensive legal framework for artificial intelligence, and it applies to any organisation that deploys AI systems within the European Union — regardless of where that organisation is headquartered.
The Act classifies AI systems into four risk tiers. Understanding where your systems fall is the first step to compliance.
The Four Risk Tiers
Visual Reference
[Figure: "EU AI Act — Four Risk Tiers" (Regulation 2024/1689, entered into force 1 August 2024). The graphic shows the four tiers with example systems: Unacceptable Risk (prohibited): social scoring, subliminal manipulation, real-time biometric surveillance; High Risk (permitted with obligations): HR & recruitment, credit scoring, education access, critical infrastructure; Limited Risk (transparency required): chatbots, AI-generated content, emotion recognition; Minimal Risk (no specific obligations): general productivity tools, AI-assisted drafting, spam filters. Source: EU AI Act (Regulation 2024/1689), Annexes I–III. Bars indicate relative scope; most enterprise organisations have systems in multiple tiers.]
Unacceptable Risk — Prohibited entirely. Includes AI systems that manipulate human behaviour through subliminal techniques, real-time biometric surveillance in public spaces (with narrow exceptions), and social scoring systems operated by public authorities. These were banned from 2 February 2025.
High Risk — The most consequential tier for most organisations. High-risk systems are permitted but subject to significant obligations before deployment and throughout their operational lifecycle.
Limited Risk — Lighter obligations, primarily transparency. AI systems that interact with humans — including chatbots and AI-generated media — must be clearly identified as AI.
Minimal Risk — No specific obligations under the Act. Most general-purpose productivity tools fall here.
What Counts as High-Risk?
High-risk systems are defined in Annex III of the Act. The categories most relevant to enterprise organisations include:
- HR and employment: AI used in recruitment, CV screening, interview assessment, promotion decisions, or task allocation
- Credit scoring: AI that evaluates creditworthiness or sets credit limits
- Educational access: AI that determines access to educational institutions or evaluates students
- Essential services: AI used in granting access to utilities, public services, or benefits
- Law enforcement (if applicable): biometric systems, crime prediction, evidence evaluation
- Critical infrastructure: AI managing energy, water, transport, or financial markets
If your organisation uses AI in any of these areas and deploys it for EU-based individuals, you likely have high-risk systems in your portfolio.
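As a rough triage exercise, the mapping above can be sketched in code. The category keywords and tier labels below are illustrative assumptions for a first-pass inventory screen, not a legal determination under Annex III:

```python
# Illustrative triage sketch: map an AI system's use case to a
# provisional EU AI Act risk tier. The category sets are a simplified
# paraphrase of the Act's categories, not a legal classification.

HIGH_RISK_USES = {
    "recruitment", "cv_screening", "promotion", "task_allocation",
    "credit_scoring", "education_access", "student_evaluation",
    "essential_services", "critical_infrastructure",
}
LIMITED_RISK_USES = {"chatbot", "content_generation", "emotion_recognition"}
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}

def provisional_tier(use_case: str, deployed_in_eu: bool) -> str:
    """Return a provisional tier label for one system (triage only)."""
    if not deployed_in_eu:
        return "out_of_scope"  # the Act applies where users are in the EU
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal"

# Example: a CV-screening tool used for EU-based candidates
print(provisional_tier("cv_screening", deployed_in_eu=True))  # high
```

A real classification must be made against the Act's text and any borderline systems reviewed by counsel; a lookup like this only sequences that review.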
What High-Risk Compliance Requires
For each high-risk system, the Act mandates:
1. Risk Management System — A documented, ongoing process for identifying and mitigating risks associated with the system throughout its lifecycle
2. Data Governance — Training data must be documented, relevant, representative, and free from errors that could produce discriminatory outputs
3. Technical Documentation — A complete technical file describing the system's design, data, testing methodology, and performance benchmarks
4. Logging and Audit Trail — Automatic logs sufficient to reconstruct system behaviour and decisions post-incident
5. Transparency — Clear instructions for use; deployers and affected persons must be able to understand system outputs
6. Human Oversight — Design features that enable human intervention, correction, and override of system decisions
7. Accuracy and Robustness — Demonstrated performance levels with safeguards against errors, faults, and inconsistencies
8. Conformity Assessment — Self-assessment or third-party audit before deployment, depending on system category
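The logging requirement, for example, implies capturing enough context per decision to reconstruct system behaviour after the fact. Below is a minimal sketch of what one log record might hold; the field names are assumptions for illustration, since the Act mandates reconstructability rather than a specific schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# Illustrative audit-log record for one high-risk system decision.
# Field names are assumptions for this sketch, not mandated by the Act.

@dataclass
class DecisionLogEntry:
    system_id: str                # which AI system produced the output
    model_version: str            # exact version, so behaviour is reproducible
    timestamp: str                # when the decision was made (UTC)
    input_ref: str                # pointer to the stored input, not raw data
    output: str                   # the decision or score produced
    overridden_by: Optional[str]  # human reviewer, if oversight was exercised

def log_decision(entry: DecisionLogEntry) -> str:
    """Serialise one entry as an append-only JSON line."""
    return json.dumps(asdict(entry), sort_keys=True)

entry = DecisionLogEntry(
    system_id="cv-screener-01",            # hypothetical system
    model_version="2.3.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_ref="s3://applications/abc123",  # hypothetical storage path
    output="shortlist",
    overridden_by=None,
)
print(log_decision(entry))
```

Storing a reference to the input rather than the input itself keeps the log lean and avoids duplicating personal data, which matters when GDPR retention rules apply to the same records.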
What Are the Penalties?
Non-compliance carries material financial risk. Under Article 99 of the Act, maximum fines are set at (in each case, whichever of the two figures is higher):
- €35 million or 7% of global annual turnover for violations involving prohibited AI practices
- €15 million or 3% of global annual turnover for violations of high-risk system obligations
- €7.5 million or 1.5% of global annual turnover for providing incorrect information to enforcement authorities
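Because each cap is the fixed amount or the percentage of global annual turnover, whichever is higher, the applicable ceiling depends on company size. A quick worked calculation, with the turnover figure purely illustrative:

```python
# Article 99 fine caps: each cap is a fixed amount or a percentage of
# global annual turnover, whichever is higher.

CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine ceiling for a violation category."""
    fixed, pct = CAPS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 2 billion global annual turnover breaching
# high-risk obligations: 3% of turnover exceeds the EUR 15m floor.
print(max_fine("high_risk_obligations", 2_000_000_000))  # 60000000.0
```

For large enterprises the percentage branch dominates, which is why turnover, not the headline euro figure, drives the real exposure.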
These are maximum figures. Enforcement bodies retain discretion in application. However, organisations that have taken no classification or governance action before the compliance deadlines face the greatest exposure.
Does This Apply If You're Not Based in the EU?
Yes. The Act operates on a deployment basis, not an establishment basis. If your AI system is used by people in the EU — including EU-based employees of a non-EU company — the Act applies.
This is directly relevant for Indian organisations serving European clients, operating EU subsidiaries, or deploying HR or credit tools used by EU-based staff. The relevant jurisdiction is determined by where the user is located, not where the system was built or where the company is registered.
The Compliance Timeline
Key enforcement dates under the Act's phased schedule:
- 2 February 2025 — Prohibited practices banned
- 2 August 2025 — General Purpose AI (GPAI) model obligations apply
- 2 August 2026 — High-risk system obligations fully enforceable
- 2 August 2027 — High-risk systems embedded in regulated products (medical devices, machinery, etc.)
The August 2026 deadline is the critical one for most enterprise organisations. Organisations starting compliance work in 2025 have a realistic but not comfortable timeline.
Starting Your Compliance Journey
The first step is classification: building a complete inventory of your AI systems and mapping each one to the Act's risk tiers. This AI Portfolio Assessment is the foundation of any compliance programme.
Without it, you cannot know your obligations. With it, you have a clear picture of where to focus effort and in what sequence. From the classification, you build a compliance roadmap — prioritising high-risk systems, assigning accountability, and designing the governance structures each system requires.
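The classification-to-roadmap step can be expressed as a simple prioritisation over the inventory. The system names, tier ranks, and deadline mapping below are illustrative assumptions, sketched under the Act's phased schedule:

```python
from datetime import date

# Illustrative portfolio triage: order an AI inventory by compliance
# urgency. Tier ranks and the inventory itself are hypothetical.

TIER_PRIORITY = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}
DEADLINES = {
    "unacceptable": date(2025, 2, 2),  # prohibited practices banned
    "high": date(2026, 8, 2),          # high-risk obligations enforceable
    "limited": date(2026, 8, 2),       # transparency duties (assumed date)
    "minimal": None,                   # no specific obligations
}

inventory = [
    {"name": "spam-filter", "tier": "minimal"},
    {"name": "cv-screener", "tier": "high"},
    {"name": "support-chatbot", "tier": "limited"},
]

# The roadmap: highest-risk systems first, each with its deadline.
roadmap = sorted(inventory, key=lambda s: TIER_PRIORITY[s["tier"]])
for system in roadmap:
    deadline = DEADLINES[system["tier"]]
    print(system["name"], system["tier"], deadline or "no deadline")
```

Even at this toy scale, the ordering makes accountability assignable: the high-risk system gets an owner and a workplan before anything else does.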
The organisations that will struggle most are those treating the EU AI Act as a legal sign-off exercise rather than an operational one. Compliance requires embedding new processes into how AI systems are built, monitored, and governed — not just producing documentation at the end.
Imagine Works designs EU AI Act compliance frameworks for enterprise organisations. Book a governance discovery call to discuss your portfolio.