AI Centre of Excellence: Should You Build One — and What Should It Do?
As enterprise AI programmes mature, a recurring question emerges: who owns AI across the organisation? The AI Centre of Excellence is the most common answer — and one of the most misunderstood and misimplemented structures in enterprise AI. Here is how to get it right.
Key Takeaways
- An AI CoE is a standards and oversight function — not an AI implementation team. When it becomes the latter, it turns into a bottleneck that slows AI adoption.
- Three CoE models: Centralised (high consistency, but creates bottlenecks), Federated (high speed, but inconsistent governance), Hybrid/Hub-and-Spoke (balances consistency and speed; most mature programmes converge here).
- Six functions the CoE must own: AI strategy alignment, governance standards, risk oversight, operating model standards, talent development, and knowledge management.
- Build the CoE when: multiple AI deployments are in production, governance questions emerge that individual business units cannot resolve, and absence of coordination is producing visible duplication.
- Building a CoE too early produces a team with no mandate. Building one too late means the first task is remediating governance debt — not establishing governance.
As enterprise AI programmes mature from isolated experiments to strategic capability, a recurring organisational question emerges: who owns AI across the organisation? Who is accountable for AI strategy, for the quality of AI governance, and for the cross-business-unit decisions that affect the entire AI portfolio?
For many organisations, the answer is a Centre of Excellence. The AI CoE is a dedicated function that provides centralised expertise, standards, and oversight for AI investment and deployment across the enterprise. It is also one of the most misunderstood and misimplemented structures in enterprise AI.
What an AI CoE Is — and What It Is Not
An AI CoE is not an AI implementation team. It should not build AI systems for business units. When it does, it becomes a bottleneck — a centralised resource that every business unit competes for, and a dependency that slows AI adoption rather than enabling it.
An AI CoE is a standards and oversight function. Its value is in the things that every AI deployment needs but no individual business unit should build from scratch: governance frameworks, risk assessment approaches, operating model standards, talent development, and the institutional knowledge of what AI investments have worked, what has failed, and why.
The Three CoE Models
AI Centre of Excellence — Three Structural Models. The right model depends on current AI maturity and governance priority.

| Model | Structure | Best for |
| --- | --- | --- |
| Centralised | Single team owns AI governance and standards enterprise-wide | Early-stage programmes or highly regulated organisations |
| Federated | AI capability distributed across business units; thin central standards function | Organisations where AI adoption outpaced governance |
| Hybrid (Hub & Spoke) | Central hub owns governance and strategy; spokes own implementation within hub standards | Most mature enterprise AI programmes converge here |
Centralised CoE — A single team owns AI governance, standards, and oversight for the entire organisation. Business units operate under direct CoE governance. This model produces high consistency and strong governance, but creates a central bottleneck that limits the speed and autonomy of AI adoption at scale. Appropriate for early-stage programmes or organisations with significant regulatory risk requiring consistent governance.
Federated CoE — AI capability is distributed across business units, each maintaining its own AI team. A thin central function sets standards and facilitates knowledge sharing. This model produces high speed and business-unit alignment but often at the cost of governance consistency — business units interpret standards differently and reinvent shared capabilities independently. Common in organisations where AI adoption outpaced governance.
Hybrid (Hub and Spoke) — A central hub owns governance standards, enterprise AI strategy, and shared services. Business-unit spokes own implementation and domain-specific AI development, operating within the standards the hub defines. This model captures most of the benefits of both centralised and federated approaches. Most mature enterprise AI programmes converge on this model as their AI portfolio grows.
The Six Functions the CoE Must Own
Regardless of model, the CoE should own six functions that genuinely require centralisation:
1. AI strategy alignment — ensuring AI investment is consistent with enterprise strategy and sequenced correctly
2. Governance standards — defining risk classification, documentation, and oversight requirements for all AI deployments
3. Risk oversight — maintaining a portfolio-level view of AI risk and escalating material issues to executive leadership
4. Operating model standards — defining how human–AI workflows are designed and what accountability structures are required
5. Talent and capability development — building AI literacy across the organisation and developing the specialist roles the programme needs
6. Knowledge management — capturing and distributing lessons from AI deployments across the organisation
Functions the CoE should not own: delivery of individual AI projects, management of technology vendors, operational support of deployed AI systems. These belong with business units and technology functions — the CoE's role is to set the standards they operate to, not to do the work for them.
When to Build the CoE
The timing of CoE establishment matters. Building too early — before there is meaningful AI activity to govern — produces a team with no mandate and no work. Building too late — when uncoordinated AI deployments have already created governance debt — means the CoE's first task is remediation rather than governance.
The right time is when the organisation has multiple AI deployments in production, when governance questions are emerging that individual business units cannot resolve alone, and when absence of coordination is visibly producing duplicated effort or inconsistent standards. For most large enterprise organisations in 2026, that moment has arrived or is imminent.
Imagine Works designs AI operating models and Centre of Excellence structures for enterprise organisations. Get in touch to discuss your AI governance structure.
Related Service
AI Strategy & Operating Model
Designing the AI strategy, vision, and operating model that aligns your entire organisation — from the boardroom to the workflow layer.
More on AI Strategy
Workforce Planning for AI: Roles, Reskilling, and the Human–AI Team
The question enterprise leaders most commonly ask about AI and workforce is: which jobs will be automated? This is the wrong question. The right question is: how does AI change the work — and what does that mean for the people doing it? Here is how to plan your workforce transition correctly.
AI Portfolio Management: How to Prioritise Which Use Cases to Fund
Most enterprise AI investment is made project by project. Each decision is rational in isolation and collectively they produce a fragmented, duplicated, ungoverned portfolio that does not compound into organisational capability. Here is how to manage AI investment as a portfolio instead.
How to Assess Your Organisation's AI Maturity
Most organisations do not know where they actually are on the AI maturity curve. Without an honest assessment, investment decisions are made in the wrong order and operating models are designed for an organisation that does not yet exist. Here is how to assess AI maturity accurately.