AI Governance · 10 min read · 15 July 2025

EU AI Act High-Risk Classification: A Plain-English Guide for Business Leaders

The EU AI Act is live. If your organisation deploys AI systems in the EU, some of them may already be classified as high-risk — triggering significant compliance obligations. Here's what high-risk means, how to identify it, and what it requires.


Agraj Agranayak

Founder & CEO, Imagine Works

Key Takeaways

  • The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and applies to any organisation deploying AI for EU-based users — regardless of where it is headquartered.
  • High-risk AI systems include AI used in HR/recruitment, credit scoring, educational access, essential services, and critical infrastructure.
  • High-risk compliance requires a risk management system, technical documentation, audit logging, human oversight design, and a conformity assessment before deployment.
  • Penalties reach €35 million or 7% of global annual turnover for the most serious violations.
  • The August 2026 deadline is the critical enforcement date for high-risk system obligations. Organisations starting now have a realistic but not comfortable timeline.

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the European Union. It is the world's first comprehensive legal framework for artificial intelligence, and it applies to any organisation that deploys AI systems within the European Union — regardless of where that organisation is headquartered.

The Act classifies AI systems into four risk tiers. Understanding where your systems fall is the first step to compliance.

The Four Risk Tiers

Visual Reference

EU AI Act — Four Risk Tiers (Regulation 2024/1689 · entered into force 1 August 2024)

  1. Unacceptable Risk: prohibited entirely; banned from 2 Feb 2025. Social scoring · Subliminal manipulation · Real-time biometric surveillance
  2. High Risk: permitted with obligations, applying from 2 Aug 2026. HR & recruitment · Credit scoring · Education access · Critical infrastructure
  3. Limited Risk: transparency required, from 2 Aug 2026. Chatbots · AI-generated content · Emotion recognition
  4. Minimal Risk: no specific obligations and no deadline. General productivity tools · AI-assisted drafting · Spam filters

Source: EU AI Act (Regulation 2024/1689), Annexes I–III. Most enterprise organisations have systems in multiple tiers.

Unacceptable Risk — Prohibited entirely. Includes AI systems that manipulate human behaviour through subliminal techniques, real-time biometric surveillance in public spaces (with narrow exceptions), and social scoring systems operated by public authorities. These were banned from 2 February 2025.

High Risk — The most consequential tier for most organisations. High-risk systems are permitted but subject to significant obligations before deployment and throughout their operational lifecycle.

Limited Risk — Lighter obligations, primarily transparency. AI systems that interact with humans, such as chatbots, must disclose that the user is dealing with AI, and AI-generated or manipulated content must be labelled as such.

Minimal Risk — No specific obligations under the Act. Most general-purpose productivity tools fall here.

What Counts as High-Risk?

High-risk systems are defined in Annex III of the Act. The categories most relevant to enterprise organisations include:

  • HR and employment: AI used in recruitment, CV screening, interview assessment, promotion decisions, or task allocation
  • Credit scoring: AI that evaluates creditworthiness or sets credit limits
  • Educational access: AI that determines access to educational institutions or evaluates students
  • Essential services: AI used in granting access to utilities, public services, or benefits
  • Law enforcement (if applicable): biometric systems, crime prediction, evidence evaluation
  • Critical infrastructure: AI managing energy, water, transport, or financial markets

If your organisation uses AI in any of these areas and deploys it for EU-based individuals, you likely have high-risk systems in your portfolio.
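As a way of operationalising that first-pass check, the screening logic can be sketched in a few lines of code. This is purely illustrative: the category labels, system records, and output wording below are hypothetical conventions for an internal inventory, not an official taxonomy from the Act, and a match here only signals that a full Annex III assessment is needed.

```python
# Illustrative first-pass screen of an AI system inventory against the
# Annex III areas listed above. Labels and records are hypothetical.
HIGH_RISK_AREAS = {
    "hr_employment",
    "credit_scoring",
    "education_access",
    "essential_services",
    "law_enforcement",
    "critical_infrastructure",
}

def screen_system(name: str, area: str, serves_eu_users: bool) -> str:
    """Return a first-pass risk flag for one AI system in the inventory."""
    if area in HIGH_RISK_AREAS and serves_eu_users:
        return f"{name}: likely HIGH-RISK (full Annex III assessment required)"
    if serves_eu_users:
        return f"{name}: review for limited/minimal-risk obligations"
    return f"{name}: outside EU scope on current facts; re-check if usage changes"

inventory = [
    ("CV screening tool", "hr_employment", True),
    ("Marketing copy assistant", "general_productivity", True),
    ("Loan limit model", "credit_scoring", False),
]

for system in inventory:
    print(screen_system(*system))
```

The point of a sketch like this is not automation for its own sake: forcing every system into a named area and an explicit EU-exposure flag is what turns a vague portfolio into an assessable one.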

What High-Risk Compliance Requires

For each high-risk system, the Act mandates:

  1. Risk Management System — A documented, ongoing process for identifying and mitigating risks associated with the system throughout its lifecycle
  2. Data Governance — Training data must be documented, relevant, representative, and free from errors that could produce discriminatory outputs
  3. Technical Documentation — A complete technical file describing the system's design, data, testing methodology, and performance benchmarks
  4. Logging and Audit Trail — Automatic logs sufficient to reconstruct system behaviour and decisions post-incident
  5. Transparency — Clear instructions for use; deployers and affected persons must be able to understand system outputs
  6. Human Oversight — Design features that enable human intervention, correction, and override of system decisions
  7. Accuracy and Robustness — Demonstrated performance levels with safeguards against errors, faults, and inconsistencies
  8. Conformity Assessment — Self-assessment or third-party audit before deployment, depending on system category
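To make the logging requirement concrete, here is a minimal sketch of one way to record enough context per automated decision to reconstruct it later. The Act does not prescribe a format; the field names, the JSON-lines style, and the example identifiers below are all assumptions chosen for illustration.

```python
# Minimal sketch of requirement 4 (logging and audit trail): one
# append-only record per automated decision. Field names are illustrative.
import json
from datetime import datetime, timezone

def log_decision(system_id, model_version, input_summary, output, operator):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,       # ties the decision to a specific model
        "input_summary": input_summary,       # summarise; avoid logging raw personal data
        "output": output,
        "human_overseer": operator,           # supports the human-oversight requirement
    }
    return json.dumps(record)

line = log_decision("cv-screener-01", "2.3.1",
                    "candidate 4821, role REQ-77", "shortlisted", "j.doe")
print(line)
```

Note the design choice of logging an input summary rather than raw inputs: audit-trail obligations under the Act sit alongside GDPR data-minimisation duties, and the two must be reconciled in the log schema itself.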

What Are the Penalties?

Non-compliance carries material financial risk. Under Article 99 of the Act, fines are capped at the higher of a fixed amount or a share of global annual turnover:

  • €35 million or 7% of global annual turnover for violations involving prohibited AI practices
  • €15 million or 3% of global annual turnover for violations of high-risk system obligations
  • €7.5 million or 1.5% of global annual turnover for providing incorrect information to enforcement authorities

These are maximum figures. Enforcement bodies retain discretion in application. However, organisations that have taken no classification or governance action before the compliance deadlines face the greatest exposure.

Does This Apply If You're Not Based in the EU?

Yes. The Act operates on a deployment basis, not an establishment basis. If your AI system is used by people in the EU — including EU-based employees of a non-EU company — the Act applies.

This is directly relevant for Indian organisations serving European clients, operating EU subsidiaries, or deploying HR or credit tools used by EU-based staff. The relevant jurisdiction is determined by where the user is located, not where the system was built or where the company is registered.

The Compliance Timeline

Key enforcement dates under the Act's phased schedule:

  • 2 February 2025 — Prohibited practices banned
  • 2 August 2025 — General Purpose AI (GPAI) model obligations apply
  • 2 August 2026 — High-risk system obligations fully enforceable
  • 2 August 2027 — High-risk systems embedded in regulated products (medical devices, machinery, etc.)

The August 2026 deadline is the critical one for most enterprise organisations. Organisations starting compliance work in 2025 have a realistic but not comfortable timeline.

Starting Your Compliance Journey

The first step is classification: building a complete inventory of your AI systems and mapping each one to the Act's risk tiers. This AI Portfolio Assessment is the foundation of any compliance programme.

Without it, you cannot know your obligations. With it, you have a clear picture of where to focus effort and in what sequence. From the classification, you build a compliance roadmap — prioritising high-risk systems, assigning accountability, and designing the governance structures each system requires.

The organisations that will struggle most are those treating the EU AI Act as a legal sign-off exercise rather than an operational one. Compliance requires embedding new processes into how AI systems are built, monitored, and governed — not just producing documentation at the end.

Imagine Works designs EU AI Act compliance frameworks for enterprise organisations. Book a governance discovery call to discuss your portfolio.

Related Service

AI Governance & Risk Design

Designing the governance framework and risk architecture that keeps your AI systems compliant, auditable, and board-ready — before regulation forces the issue.

Explore this service