AI Strategy · 8 min read · 15 October 2025

How to Assess Your Organisation's AI Maturity

Most organisations do not know where they actually are on the AI maturity curve. Without an honest assessment, investment decisions are made in the wrong order and operating models are designed for an organisation that does not yet exist. Here is how to assess AI maturity accurately.


Agraj Agranayak

Founder & CEO, Imagine Works

Key Takeaways

  • AI maturity runs from Exploring (no deployments) through Experimenting, Scaling, Embedding, to Leading — each level has distinct characteristics and investment requirements.
  • The hardest transition is Level 3 to Level 4: moving from project-by-project AI investment to managing AI as an operational function.
  • Organisations often overestimate maturity by conflating AI activity with AI capability — having tools deployed is not the same as having the operating model to govern them.
  • Governance maturity is the most revealing dimension: an organisation can be at Level 3 in deployment and Level 1 in governance simultaneously.
  • The maturity assessment output should be a prioritised capability gap list, not a score — the score matters far less than what needs to change.

Most organisations approaching AI strategy face the same diagnostic problem: they do not know where they actually are. Without an honest assessment of current AI maturity, investment decisions are made in the wrong order, capability gaps go unaddressed, and operating model designs are built for an organisation that does not yet exist.

AI maturity assessment is not a compliance exercise. It is a strategic tool that tells leadership what the organisation can and cannot do with AI today — and what must be built or addressed before the next stage of ambition is achievable.

The Five Levels of AI Maturity

Framework Reference: The Five Levels of AI Maturity

  1. Exploring: Aware of AI but no active deployments. No strategy, operating model, or governance. (AI discussed at senior level · No funded AI programme · Informal tool experimentation)
  2. Experimenting: Isolated pilots in one or more functions. No coordination, no shared infrastructure, no governance. (Vendor-led pilots · Business-unit self-funding · No cross-business coordination)
  3. Scaling: Multiple AI systems in production. Some governance but still project-by-project investment. Duplication common. (AI in production · Growing governance gaps · No portfolio management)
  4. Embedding: AI Operating Model in place. Portfolio-managed investment. Governance operational, not aspirational. (AI Operating Model defined · Portfolio governance active · Regulatory compliance addressed)
  5. Leading: AI is a strategic differentiator. Proprietary capabilities, mature governance, continuous improvement culture. (Proprietary AI capabilities · Mature governance function · AI shapes strategy)

The hardest transition is Level 3 → 4: from project-by-project AI to managed operational capability.

Level 1 — Exploring. The organisation is aware of AI but has no active deployments. Individual teams may be experimenting with commercial tools informally. There is no AI strategy, no operating model, and no governance framework. AI is discussed at senior level but not yet funded or directed.

Level 2 — Experimenting. Isolated AI pilots exist in one or more business functions. These are typically self-funded by business units, vendor-led, and evaluated on narrow metrics. There is no coordination between pilots, no shared infrastructure, and no governance. The organisation learns from individual experiments but does not yet compound that learning.

Level 3 — Scaling. The organisation has moved beyond pilots. Multiple AI systems are in production, with some governance and measurement in place. However, AI investments are still made project by project rather than as a portfolio. Duplication is common. The operating model is not yet designed for AI at scale.

Level 4 — Embedding. AI is embedded in core business processes. The organisation has an AI operating model, a governance framework, and a defined approach to workforce transition. AI investment decisions are portfolio-managed. Governance is operational, not aspirational.

Level 5 — Leading. AI is a genuine operational and strategic differentiator. The organisation has proprietary AI capabilities, a mature governance function, and a culture of continuous AI improvement. Leadership actively shapes AI strategy rather than responding to it.

The Most Revealing Dimension: Governance

Maturity assessments are most valuable when they are honest about gaps rather than aspirational about capabilities. Organisations frequently overestimate their maturity by conflating AI activity with AI capability.

The most revealing dimension of any maturity assessment is governance. An organisation can have significant AI activity — Level 3 in terms of deployment — while being at Level 1 in terms of governance. This means it has operational risk it cannot see and cannot manage. For regulatory purposes, particularly under the EU AI Act, it may also have compliance obligations it has not yet assessed.
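This deployment-versus-governance gap can be made concrete with a minimal sketch. The dimension names, scores, and the "weakest dimension sets effective maturity" rule below are illustrative assumptions, not a standard instrument: the point is that an organisation with significant deployment and no governance should not be read as a Level 3 organisation.

```python
# Illustrative per-dimension maturity scores on the article's 1-5 scale.
# Dimension names and scores are hypothetical examples.
scores = {
    "deployment": 3,  # multiple AI systems in production (Level 3, Scaling)
    "governance": 1,  # no operational framework (Level 1, Exploring)
}

# Assumed rule: an organisation is only as mature as its weakest dimension,
# because unmanaged governance gaps create risk the organisation cannot see.
effective_maturity = min(scores.values())
weakest = min(scores, key=scores.get)

print(f"Effective maturity: Level {effective_maturity} (limited by {weakest})")
```

Taking the minimum rather than the average is a deliberate modelling choice here: averaging the two dimensions would report Level 2 and hide exactly the risk the assessment is meant to surface.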

What Drives the Gap Between Levels

The transitions between maturity levels share a consistent pattern. Moving from Level 1 to Level 2 requires permission — leadership willingness to fund AI experiments. Moving from Level 2 to Level 3 requires coordination — someone connecting the dots between isolated experiments and building shared capability.

The hardest transition is from Level 3 to Level 4. It requires the organisation to stop managing AI as a series of projects and start managing it as an operational function, which demands deliberate operating model design — the kind of work that cannot be delegated to a technology team or outsourced to a vendor. McKinsey's research consistently finds that this transition — from active AI experimentation to embedded, governed AI capability — is where most enterprise programmes stall.

How to Run an Assessment

A practical AI maturity assessment evaluates five domains:

  1. Strategy and governance — Is there a clear AI strategy? Is governance operational or aspirational?
  2. Organisational capability — Does the organisation have the skills to manage AI at the next level of ambition?
  3. Data and infrastructure — Is the data foundation in place to support the use cases being considered?
  4. Use case portfolio — How managed and coordinated is the portfolio of AI applications?
  5. Workforce readiness — Has the organisation addressed the human-AI workflow transition?

Scoring each domain honestly produces a maturity profile that makes the right next steps visible. It also prevents the most common investment mistake: funding AI tools before the operating model is ready to use them effectively.
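The scoring step above can be sketched in a few lines. Everything in this snippet is a hypothetical example — the target level, the scores, and the domain keys (which follow the five assessment domains) — but it shows the key takeaway in code: the useful output is not an aggregate score but an ordered list of capability gaps.

```python
# Illustrative sketch: turn honest domain scores into a prioritised gap list.
# Target level and scores are hypothetical examples on the article's 1-5 scale.
TARGET_LEVEL = 4  # ambition: Level 4, Embedding

scores = {
    "strategy_and_governance": 1,
    "organisational_capability": 2,
    "data_and_infrastructure": 3,
    "use_case_portfolio": 3,
    "workforce_readiness": 2,
}

# Prioritise by distance to the target level, largest gap first.
gaps = sorted(
    ((domain, TARGET_LEVEL - score) for domain, score in scores.items()),
    key=lambda item: item[1],
    reverse=True,
)

for domain, gap in gaps:
    if gap > 0:
        print(f"{domain}: {gap} level(s) below target")
```

With these example scores, strategy and governance tops the list — which is the common pattern the article describes: tooling outruns the operating model, and the gap list makes that visible before the next round of investment.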

Imagine Works conducts AI maturity assessments for enterprise organisations as the first step of an AI strategy engagement. Get in touch to discuss your current maturity and what needs to change.

Related Service

AI Strategy & Operating Model

Designing the AI strategy, vision, and operating model that aligns your entire organisation — from the boardroom to the workflow layer.
