AI Portfolio Management: How to Prioritise Which Use Cases to Fund
Most enterprise AI investment is made project by project. Each decision is rational in isolation, yet collectively they produce a fragmented, duplicated, ungoverned portfolio that does not compound into organisational capability. Here is how to manage AI investment as a portfolio instead.
Key Takeaways
- Project-by-project AI investment produces fragmented portfolios: duplicated tools, missed shared infrastructure, and aggregate spend that does not compound into capability.
- The portfolio prioritisation matrix evaluates value potential against implementation feasibility across four quadrants: Quick Wins, Strategic Bets, Opportunistic, and Avoid.
- The most common portfolio management failure is overweighting efficiency gains — the easiest AI value to quantify is also the most overrepresented use case type and the least strategically distinctive.
- Portfolio management requires an operating model owner with cross-business-unit mandate — without it, the framework becomes an annual spreadsheet exercise.
- Use cases that succeed in pilots often require significant additional investment to operate at production scale — pilot success and production viability are not the same assessment.
Most enterprise AI investment is made project by project: a proposal arrives, a budget is allocated, a pilot runs. The decision is made in isolation — evaluated on its own merits, funded from a single business unit's budget, and governed (if at all) by the same mechanisms as an IT infrastructure project.
This is not portfolio management. It is reactive investment. And it produces outcomes that McKinsey, BCG, and Gartner consistently identify: a proliferation of isolated experiments, significant duplication, and aggregate investment that does not compound into organisational capability.
What AI Portfolio Management Is
AI portfolio management is the discipline of evaluating, prioritising, sequencing, and governing AI investments as a coordinated set — rather than as independent projects. The portfolio lens asks questions that project-by-project evaluation never surfaces:
- Are we funding the right mix of use cases given our current maturity?
- Are two business units building overlapping AI capabilities that should be shared infrastructure?
- Are we investing in use cases that require governance infrastructure we have not yet built?
- Is our AI spend building compound capability, or producing a collection of unconnected tools?
The Portfolio Prioritisation Matrix
Framework Reference
AI Portfolio Prioritisation Matrix
Value potential vs implementation feasibility
Strategic Bets
High value · Low feasibility
Invest to build feasibility. Sequence carefully — these require foundational work first.
Quick Wins
High value · High feasibility
Fund and accelerate. These deliver near-term value and build organisational confidence.
Avoid
Low value · Low feasibility
Do not fund. Limited return and significant effort required — not a priority at any stage.
Opportunistic
Low value · High feasibility
Fund only if capacity allows. Easy to do but limited strategic impact — do not prioritise.
Common failure: overweighting efficiency gains (easiest to model, least strategically distinctive). The use cases with the highest strategic value are consistently the most deprioritised.
The standard analytical tool for AI portfolio prioritisation is a two-axis matrix evaluating value potential against implementation feasibility.
Value potential combines strategic importance (how central is this use case to commercial outcomes?), impact magnitude (what is the addressable improvement?), and sustainability (does this create durable advantage?).
Implementation feasibility combines data readiness (is the required data available and accessible?), technical complexity (how well-understood is the AI approach?), and organisational readiness (does the business unit have the capability to operate the system in production?).
This produces four quadrants: Quick Wins (high value, high feasibility — fund and accelerate); Strategic Bets (high value, low feasibility — invest to build feasibility, sequence carefully); Opportunistic (low value, high feasibility — fund only if capacity permits); and Avoid (low value, low feasibility — do not fund).
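The quadrant logic above can be sketched as a simple scoring function. This is an illustrative sketch only: the 1–5 scoring scale, the averaging of sub-dimensions, the threshold value, and the example use case are all assumptions, not part of any published framework.

```python
# Illustrative sketch of the two-axis prioritisation matrix.
# Scores (1-5), equal weighting, and the threshold are hypothetical
# choices; a real portfolio process would calibrate these.

def quadrant(value_scores, feasibility_scores, threshold=3.0):
    """Classify a use case into one of the four quadrants."""
    value = sum(value_scores.values()) / len(value_scores)
    feasibility = sum(feasibility_scores.values()) / len(feasibility_scores)
    if value >= threshold and feasibility >= threshold:
        return "Quick Win"
    if value >= threshold:
        return "Strategic Bet"
    if feasibility >= threshold:
        return "Opportunistic"
    return "Avoid"

# Hypothetical use case: invoice-matching automation.
value = {"strategic_importance": 4, "impact_magnitude": 4, "sustainability": 3}
feasibility = {"data_readiness": 4, "technical_complexity": 3, "organisational_readiness": 4}
print(quadrant(value, feasibility))  # Quick Win
```

Averaging sub-dimensions keeps the sketch simple; in practice, a single hard blocker (for example, data that does not exist) should veto feasibility regardless of the other scores.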
Common Portfolio Management Failures
Treating pilot success as production readiness. Pilots assess feasibility under controlled conditions. They do not assess production readiness. A use case that succeeds in a pilot may require significant additional investment in data pipelines, integration, change management, and governance before it operates reliably at scale. The portfolio assessment must distinguish between these two stages.
Ignoring shared infrastructure. Multiple high-value use cases often share a common data requirement, governance need, or architectural component. Portfolio management makes these shared dependencies visible and allows them to be funded as foundational investments rather than as line items in individual project budgets. Without portfolio visibility, every business unit rebuilds the same infrastructure independently.
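Making shared dependencies visible is mechanically straightforward once each use case declares what it needs. The sketch below inverts a use-case-to-dependency mapping to surface components required by more than one use case; the use case and component names are hypothetical examples.

```python
from collections import defaultdict

# Hypothetical dependency declarations: each use case lists the
# infrastructure components it requires. All names are illustrative.
dependencies = {
    "churn_prediction": {"customer_data_platform", "feature_store", "model_monitoring"},
    "next_best_offer": {"customer_data_platform", "feature_store"},
    "invoice_matching": {"document_pipeline", "model_monitoring"},
}

# Invert the mapping: which use cases depend on each component?
shared = defaultdict(set)
for use_case, components in dependencies.items():
    for component in components:
        shared[component].add(use_case)

# Components needed by more than one use case are candidates for
# foundational, portfolio-level funding rather than project budgets.
for component, users in sorted(shared.items()):
    if len(users) > 1:
        print(f"{component}: shared by {sorted(users)}")
```

Even this crude inventory changes the funding conversation: a feature store needed by two use cases is a foundational investment, not a line item duplicated in two project budgets.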
Overweighting efficiency gains. Efficiency gains are the easiest AI value to quantify and the most overrepresented category in enterprise AI portfolios. Portfolios that are dominated by efficiency use cases typically underinvest in the use cases with the highest strategic value — those that create new revenue, improve customer relationships, or build genuine competitive capability. BCG's 2023 research found that the AI use cases with the highest long-term strategic value are consistently the ones most frequently deprioritised in favour of near-term efficiency gains.
The Role of the Operating Model
Portfolio management does not work without an operating model to execute it. Someone must own the portfolio — assessing proposals, making prioritisation decisions, tracking outcomes, and retiring underperforming investments. Without that ownership, the portfolio framework becomes an annual exercise in producing a spreadsheet that no one acts on.
The AI portfolio owner is typically a senior function within the AI operating model — often a Centre of Excellence or a senior AI strategy role — with a mandate to evaluate investments across business unit boundaries and the authority to make funding recommendations at the enterprise level.
Imagine Works builds AI investment frameworks and portfolio governance structures for enterprise organisations. Get in touch to discuss your AI investment strategy.