AI Strategy · 9 min read · 15 March 2026

AI Centre of Excellence: Should You Build One — and What Should It Do?

As enterprise AI programmes mature, a recurring question emerges: who owns AI across the organisation? The AI Centre of Excellence is the most common answer — and one of the most misunderstood and poorly implemented structures in enterprise AI. Here is how to get it right.


Agraj Agranayak

Founder & CEO, Imagine Works

Key Takeaways

  • An AI CoE is a standards and oversight function — not an AI implementation team. When it becomes the latter, it turns into a bottleneck that slows AI adoption.
  • Three CoE models: Centralised (high consistency, but a central bottleneck), Federated (high speed, but inconsistent governance), and Hybrid/Hub-and-Spoke (balances both; most mature programmes converge here).
  • Six functions the CoE must own: AI strategy alignment, governance standards, risk oversight, operating model standards, talent development, and knowledge management.
  • Build the CoE when: multiple AI deployments are in production, governance questions emerge that individual business units cannot resolve, and absence of coordination is producing visible duplication.
  • Building a CoE too early produces a team with no mandate. Building one too late means the first task is remediating governance debt — not establishing governance.

As enterprise AI programmes mature from isolated experiments to strategic capability, a recurring organisational question emerges: who owns AI across the organisation? Who is accountable for AI strategy, for the quality of AI governance, and for the cross-business-unit decisions that affect the entire AI portfolio?

For many organisations, the answer is a Centre of Excellence. The AI CoE is a dedicated function that provides centralised expertise, standards, and oversight for AI investment and deployment across the enterprise. It is also one of the most misunderstood and most poorly implemented structures in enterprise AI.

What an AI CoE Is — and What It Is Not

An AI CoE is not an AI implementation team. It should not build AI systems for business units. When it does, it becomes a bottleneck — a centralised resource that every business unit competes for, and a dependency that slows AI adoption rather than enabling it.

An AI CoE is a standards and oversight function. Its value is in the things that every AI deployment needs but no individual business unit should build from scratch: governance frameworks, risk assessment approaches, operating model standards, talent development, and the institutional knowledge of what AI investments have worked, what has failed, and why.

The Three CoE Models

Framework Reference

AI Centre of Excellence — Three Structural Models

The right model depends on current AI maturity and governance priority.

Centralised: a single team owns AI governance and standards enterprise-wide.
  Advantages: high governance consistency; strong risk oversight; clear accountability.
  Trade-offs: central bottleneck; slows business-unit autonomy; capacity constraints.
  Best for: early-stage programmes or highly regulated organisations.

Federated: AI capability is distributed across business units, with a thin central standards function.
  Advantages: high speed of adoption; business-unit alignment; domain expertise.
  Trade-offs: inconsistent governance; duplicated infrastructure; standards drift.
  Best for: organisations where AI adoption outpaced governance.

Hybrid (Hub & Spoke): a central hub owns governance and strategy; spokes own implementation within hub standards.
  Advantages: governance consistency; business-unit speed; shared infrastructure.
  Trade-offs: coordination overhead; requires a strong hub mandate; governance negotiation.
  Best for: most mature enterprise AI programmes converge here.

Six functions the CoE must own, regardless of model: AI strategy alignment; governance standards; risk oversight; operating model standards; talent and capability development; knowledge management.

The CoE must not own: delivery of AI projects, vendor management, or operational support — those belong with business units and technology functions.

Centralised CoE — A single team owns AI governance, standards, and oversight for the entire organisation. Business units operate under direct CoE governance. This model produces high consistency and strong governance, but creates a central bottleneck that limits the speed and autonomy of AI adoption at scale. Appropriate for early-stage programmes or organisations with significant regulatory risk requiring consistent governance.

Federated CoE — AI capability is distributed across business units, each maintaining its own AI team. A thin central function sets standards and facilitates knowledge sharing. This model produces high speed and business-unit alignment but often at the cost of governance consistency — business units interpret standards differently and reinvent shared capabilities independently. Common in organisations where AI adoption outpaced governance.

Hybrid (Hub and Spoke) — A central hub owns governance standards, enterprise AI strategy, and shared services. Business-unit spokes own implementation and domain-specific AI development, operating within the standards the hub defines. This model captures most of the benefits of both centralised and federated approaches. Most mature enterprise AI programmes converge on this model as their AI portfolio grows.

The Six Functions the CoE Must Own

Regardless of model, the CoE should own six functions that genuinely require centralisation:

  1. AI strategy alignment — ensuring AI investment is consistent with enterprise strategy and sequenced correctly
  2. Governance standards — defining risk classification, documentation, and oversight requirements for all AI deployments
  3. Risk oversight — maintaining a portfolio-level view of AI risk and escalating material issues to executive leadership
  4. Operating model standards — defining how human–AI workflows are designed and what accountability structures are required
  5. Talent and capability development — building AI literacy across the organisation and developing the specialist roles the programme needs
  6. Knowledge management — capturing and distributing lessons from AI deployments across the organisation

Functions the CoE should not own: delivery of individual AI projects, management of technology vendors, operational support of deployed AI systems. These belong with business units and technology functions — the CoE's role is to set the standards they operate to, not to do the work for them.

When to Build the CoE

The timing of CoE establishment matters. Building too early — before there is meaningful AI activity to govern — produces a team with no mandate and no work. Building too late — when uncoordinated AI deployments have already created governance debt — means the CoE's first task is remediation rather than governance.

The right time is when the organisation has multiple AI deployments in production, when governance questions are emerging that individual business units cannot resolve alone, and when absence of coordination is visibly producing duplicated effort or inconsistent standards. For most large enterprise organisations in 2026, that moment has arrived or is imminent.

Imagine Works designs AI operating models and Centre of Excellence structures for enterprise organisations. Get in touch to discuss your AI governance structure.

Related Service

AI Strategy & Operating Model

Designing the AI strategy, vision, and operating model that aligns your entire organisation — from the boardroom to the workflow layer.

Explore this service