AI Governance · 9 min read · 3 May 2026

Shadow AI: Why One-in-Five Enterprises Now Has a Governance Problem They Cannot See

Shadow AI — employees using unsanctioned generative tools at work — has moved from anecdote to material risk. IBM's 2025 breach data put a number on it: organisations with high shadow AI usage paid $670,000 more per breach, and only 37% of organisations have any policy to detect it. Here's what enterprise leaders should do about it.


Agraj Agranayak

Founder & CEO, Imagine Works

Key Takeaways

  • IBM's Cost of a Data Breach Report 2025 found that 20% of organisations — one in five — reported a breach involving shadow AI, defined as unsanctioned AI tools adopted by employees without IT or security oversight (IBM, July 2025).
  • Organisations with high levels of shadow AI usage paid an average of USD 670,000 more per breach than those with low or no shadow AI exposure, and 97% of AI-breached organisations lacked proper AI access controls (IBM, July 2025).
  • Only 37% of organisations have a policy to manage AI use or detect shadow AI, and 63% of breached organisations either have no AI governance policy or are still developing one (IBM, 2025).
  • Microsoft's 2025 Agentic Teaming & Trust research found that one in three employees has experimented with AI tools from outside their company in the past six months; UK research recorded 71% of employees using unapproved consumer AI tools at work, with 51% doing so weekly.
  • Search demand confirms the shift in leadership awareness — "shadow ai" averages 2,400 monthly US searches and 880 in India with an unusually high $31.85 US CPC (Google Ads data, May 2026), reflecting concentrated commercial buyer intent.

Shadow AI is the use of generative AI tools — chatbots, code assistants, image and video generators, agentic browser extensions — by employees at work without the knowledge, sanction, or oversight of the organisation's IT, security, or legal functions. It is the AI-era successor to shadow IT, and in the years since ChatGPT entered the enterprise vocabulary it has become one of the most material — and most under-managed — categories of operational risk.

For most organisations the diagnostic question is no longer whether shadow AI exists in the business. It does. The question is how much of it, where, and what is being put through it.

What the Data Now Says

For most of 2023 and 2024, "shadow AI" was an anecdote shared at conferences. In 2025 it became measurable.

IBM's Cost of a Data Breach Report 2025, published 30 July 2025 and based on 600 organisations studied independently by the Ponemon Institute, was one of the first major industry datasets to isolate shadow AI as a distinct risk variable. The findings reframed the conversation:

  • 20% of organisations — one in five — reported a breach involving shadow AI. These breaches disproportionately exposed customer personally identifiable information, with PII compromised in 65% of shadow-AI-related incidents.
  • Organisations with high levels of shadow AI usage paid an average of USD 670,000 more per breach than those with low or no shadow AI usage.
  • 97% of organisations that experienced an AI-related breach reported lacking proper AI access controls. The breach was rarely caused by a sophisticated AI exploit; it was almost always caused by AI being treated as outside the access-control perimeter.
  • Only 37% of organisations have a policy to manage AI use or detect shadow AI. Separately, 63% of breached organisations either lacked an AI governance policy entirely or were still developing one.
  • 16% of all studied breaches involved attackers using AI tools — most often for phishing or deepfake impersonation. Shadow AI is one half of the picture; the other half is adversaries who have already operationalised AI.

These are not predictions. They are the reported state of the enterprise as of mid-2025.

The behavioural side of the picture is consistent. Microsoft's 2025 Agentic Teaming & Trust Research Report found that one in three employees has experimented with AI tools from outside their company in the past six months. In a UK-specific Microsoft study, 71% of employees said they had used unapproved consumer AI tools at work, with 51% continuing to do so every week. The most cited reasons were that consumer AI tools were what employees already used in their personal lives (41%) and that the employer did not provide a sanctioned alternative (28%).

Together the IBM and Microsoft datasets describe a single picture: shadow AI is not a fringe behaviour by a small minority of employees. It is the dominant pattern of enterprise AI use, and the governance perimeter has not yet caught up with it.

Why Shadow AI Is Different from Shadow IT

Shadow IT — employees using unsanctioned SaaS tools, personal cloud storage, or unapproved devices — has been on the enterprise security agenda for over a decade. Shadow AI looks superficially similar but differs in three ways that matter.

First, the data leaves the organisation as content, not as files. A shadow IT scenario typically involves a file moving from a sanctioned system to an unsanctioned one — a download, an upload, a copy. The volumetric and DLP signals are reasonably well understood. Shadow AI more often involves an employee pasting confidential text into a prompt, where the data leaves the perimeter as conversational content. Traditional DLP rules trained on file movement frequently miss it.
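To make the contrast concrete, a content-aware control has to inspect the prompt text itself rather than watch for file movement. The sketch below is a minimal, hypothetical illustration of that idea in Python: the classification markers, patterns, and function name are assumptions for the example, not any DLP product's actual rule format.

```python
import re

# Hypothetical markers and patterns for the example; a real deployment would
# use the organisation's own classification labels and detection rules.
CLASSIFICATION_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY", "RESTRICTED")
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def inspect_prompt(prompt_text: str) -> list[str]:
    """Return the reasons an outbound prompt should be flagged or blocked."""
    findings = []
    upper = prompt_text.upper()
    for marker in CLASSIFICATION_MARKERS:
        if marker in upper:
            findings.append(f"classification marker: {marker}")
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt_text):
            findings.append(f"pattern match: {name}")
    return findings

if __name__ == "__main__":
    sample = "CONFIDENTIAL draft. Card on file: 4111 1111 1111 1111"
    print(inspect_prompt(sample))  # both checks fire on this sample
```

The point is not the specific rules; it is that the inspection has to happen on conversational content, at or before the point where it leaves the browser, which is exactly where file-oriented DLP has no view.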

Second, the model itself becomes a memory surface. When a consumer AI tool trains on user inputs by default, prompts containing confidential information may persist in the provider's training data and influence future model outputs. The organisation has no audit trail and no reliable way to retract the data.

Third, the productivity benefit is real and immediate. Most shadow IT existed because a sanctioned alternative was either missing or worse than the unsanctioned one. The same is true of shadow AI — but the productivity gap is larger and more visible. Employees experience a measurable speed-up the first time they use a capable AI assistant. Telling them to stop, without offering a sanctioned equivalent, sets up an unwinnable enforcement battle.

This third point is why shadow AI cannot be treated purely as a compliance problem. The behaviour exists because the unsanctioned tool is solving a real problem. A governance approach that only restricts, without also providing, predictably fails.

How Shadow AI Actually Shows Up

The shape of shadow AI varies by function. The patterns most commonly observed in 2025–2026 enterprise environments include:

  • Knowledge-work prompting. Employees pasting customer emails, internal strategy documents, draft contracts, financial figures, or anonymised-but-not-really records into consumer chatbots to summarise, redraft, or analyse them.
  • Code generation outside sanctioned environments. Developers using consumer code-assistant accounts on personal devices, or browser-based AI coders, to work on company source code — often to bypass slow or limited sanctioned tooling.
  • Browser-side agents. Browser extensions that read the contents of webpages, internal SaaS interfaces, and email — frequently with permissions that were granted in a single click and never reviewed.
  • Image, video, and presentation generation. Marketing, internal communications, and HR teams using generative tools to produce assets, often via personal accounts, with no record of which model produced what or whether commercial-use rights apply.
  • Personal AI subscriptions used for company work. The employee's personal ChatGPT Plus or equivalent subscription, used for work tasks, leaves no enterprise-side audit trail at all.

Each of these patterns produces a different data-exposure profile and a different governance response. Treating them as a single undifferentiated risk class is one of the most common errors in early shadow-AI policies.

The Three Failure Modes of Existing AI Policy

Most organisations that have written an AI policy in the last twelve months have hit at least one of three failure modes.

Policy without provision. The policy bans the use of consumer AI tools for company work but does not provide a sanctioned alternative — or provides one that is materially worse than the consumer tool. The result is a policy on paper and shadow AI in practice.

Provision without governance. The organisation rolls out an enterprise AI tool — Copilot, an internal chatbot, an agentic platform — without an access-control framework, prompt-logging policy, or data-classification rules. IBM's finding that 97% of AI-breached organisations lacked proper AI access controls maps directly onto this failure mode.

Governance without detection. The policy exists, the sanctioned tool exists, but there is no telemetry to detect when employees are using anything else. Detection requires either network-side controls (DNS, secure web gateway, CASB) tuned for AI domains, endpoint visibility, or both. Without it, the organisation's shadow AI exposure is unmeasured by definition.

A working shadow AI posture closes all three: a clear policy, a sanctioned alternative that is genuinely good enough, and detection that catches drift.

A Governance Framework for Shadow AI

A defensible shadow AI posture has six components. The order matters; the later components depend on the earlier ones being in place.

1. Inventory. Establish what AI tools — sanctioned and unsanctioned — are currently being used in the organisation, by which functions, for which use cases. This is best done with a combination of network telemetry (which AI service domains are being accessed), employee survey, and procurement-spend analysis (which AI subscriptions appear on personal expense reports).
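A minimal sketch of the telemetry part of this step is shown below. It assumes a proxy or DNS log exported as CSV with user, department, and domain columns, plus a hand-maintained list of AI service domains; the log format, file name, and domain list are illustrative assumptions rather than any specific product's export.

```python
import csv
from collections import Counter, defaultdict

# Illustrative starting list; a real inventory maintains this centrally and
# extends it as new AI services appear.
AI_SERVICE_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}

def build_inventory(log_path: str) -> dict[str, Counter]:
    """Tally accesses to known AI services per department from a log export."""
    usage: dict[str, Counter] = defaultdict(Counter)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in AI_SERVICE_DOMAINS:
                usage[row["department"]][domain] += 1
    return usage

if __name__ == "__main__":
    for dept, counts in build_inventory("proxy_log.csv").items():
        print(dept, dict(counts))
```

The telemetry view only covers corporate networks and managed devices, which is why the survey and expense-report passes remain part of the same step.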

2. Acceptable-use policy with data-classification rules. Move beyond a generic "don't paste confidential information into ChatGPT" line. Tie the policy to the organisation's data classification — which tiers of data may be used in which categories of AI tool, with concrete examples. The policy should be short and specific enough that an employee can apply it in the moment.
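One way to make the classification rules applicable in the moment is to express the mapping as data rather than prose. The sketch below assumes a hypothetical three-tier classification and three AI tool categories; the tier names, categories, and permissions are illustrative, not a recommended scheme.

```python
# Hypothetical mapping of data-classification tiers to permitted AI tool categories.
# "sanctioned_enterprise": contracted enterprise AI with logging, no training on inputs
# "sanctioned_internal":   self-hosted or private-endpoint models
# "consumer":              any consumer AI tool, including personal subscriptions
ALLOWED_TOOLS_BY_TIER = {
    "public":       {"sanctioned_enterprise", "sanctioned_internal", "consumer"},
    "internal":     {"sanctioned_enterprise", "sanctioned_internal"},
    "confidential": {"sanctioned_internal"},
}

def is_permitted(data_tier: str, tool_category: str) -> bool:
    """Return True if data of this tier may be used with this category of tool."""
    return tool_category in ALLOWED_TOOLS_BY_TIER.get(data_tier, set())

assert is_permitted("public", "consumer")
assert not is_permitted("confidential", "consumer")
```

A table of this shape can sit in the policy document verbatim; the same structure can also drive browser warnings or gateway rules, so the policy and the controls stay consistent.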

3. Sanctioned alternative provision. For each common shadow-AI use case identified in the inventory, ensure there is a sanctioned alternative that is at least competitive on quality and speed. The sanctioned tool does not need to match the consumer tool feature for feature; it needs to be good enough that employees will reach for it first.

4. Access controls and logging. For sanctioned tools, implement the access controls IBM's data shows are missing in 97% of AI-breached organisations: identity-based access, role-based scoping, audit-grade prompt and output logging, retention rules consistent with the organisation's broader data policy.
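What audit-grade prompt and output logging means in practice is easiest to see as a record schema. The sketch below is an assumed minimal structure; the field names, and the choice to log a hash of the prompt while retaining raw text under the data policy, are illustrative decisions rather than a standard.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptAuditRecord:
    """One logged interaction with a sanctioned AI tool."""
    user_id: str         # from the identity provider, not self-reported
    role: str            # the role used for access scoping
    tool: str            # which sanctioned tool handled the request
    data_tier: str       # classification tier declared or detected for the input
    prompt_sha256: str   # hash of the prompt; raw text is retained per data policy
    response_chars: int  # size of the model output, useful for anomaly detection
    timestamp: str       # UTC, ISO 8601

def make_record(user_id: str, role: str, tool: str, data_tier: str,
                prompt: str, response: str) -> PromptAuditRecord:
    return PromptAuditRecord(
        user_id=user_id,
        role=role,
        tool=tool,
        data_tier=data_tier,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        response_chars=len(response),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    rec = make_record("u-1042", "analyst", "enterprise-copilot", "internal",
                      "Summarise this draft contract ...", "Summary: ...")
    print(json.dumps(asdict(rec), indent=2))
```

Whatever the exact schema, the record needs to tie each prompt to an identity, a role, and a classification tier; without those fields the log cannot answer the incident-response questions raised later in this piece.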

5. Detection. Use existing network and endpoint controls — DNS filtering, secure web gateways, CASB, EDR — to identify use of unsanctioned AI services, with the goal of measurement first and enforcement second. Most organisations are surprised by what the first detection pass surfaces.
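Building on the inventory sketch above, detection is largely a comparison of observed AI-service traffic against the sanctioned list. The example below reuses the same assumed log format and flags unsanctioned access for measurement rather than blocking; both domain lists are illustrative.

```python
import csv
from collections import Counter

# Illustrative lists; in practice both are maintained alongside the inventory.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "perplexity.ai",
}
SANCTIONED_AI_DOMAINS = {"copilot.microsoft.com"}

def flag_unsanctioned(log_path: str) -> Counter:
    """Count accesses to known AI services that are not on the sanctioned list."""
    findings: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED_AI_DOMAINS:
                findings[(row["department"], domain)] += 1
    return findings

if __name__ == "__main__":
    for (dept, domain), hits in flag_unsanctioned("proxy_log.csv").most_common():
        print(f"{dept}: {hits} requests to unsanctioned {domain}")
```

Run as a regular report before any blocking is switched on, this produces the measurement baseline the step calls for.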

6. Continuous review. New AI tools enter the market every week. The acceptable-use policy, the inventory, and the sanctioned alternatives need to be reviewed on a defined cadence — quarterly at minimum — rather than treated as a one-time exercise.

These six components do not, individually, require new technology. Most large organisations already have the access-control, logging, and network-detection capability the framework calls for. What is usually missing is the AI-specific configuration of those capabilities and the policy layer that ties them together.

What Enterprise Leaders Should Be Asking This Quarter

Two questions, asked at the right level, surface most of what matters.

To the CIO and CISO: Of the AI services our employees are using today, how many are sanctioned, how many are unsanctioned, and how do we know? If the answer relies on assumption rather than telemetry, the inventory step has not been done.

To the General Counsel and Chief Risk Officer: If a customer-data exposure occurred today through a shadow AI tool, what is our incident-response process, our notification posture, and our regulatory exposure? If the answer is the same as the generic data-breach process, the AI-specific elements — prompt history, model retention, third-party data subprocessor disclosures — have not been mapped.

The organisations that will avoid being part of next year's breach statistics are the ones that move on these two questions before they become incident-response questions.

The Underlying Point

Shadow AI is not a behaviour to eliminate. It is a signal that the organisation's sanctioned AI provision is not yet meeting employee demand, and that the governance perimeter has not yet absorbed AI as a first-class category. Treated that way — as a measurement and provision problem rather than purely an enforcement problem — it becomes tractable.

The data is now clear enough that "we will get to AI governance later" has become a quantifiable position. IBM's figure for organisations carrying high levels of shadow AI is USD 670,000 of additional breach cost per incident, on top of an industry-wide average that is itself in the millions. That is the price of treating shadow AI as a future problem in 2026.

Imagine Works helps enterprise organisations design AI governance frameworks that cover sanctioned and shadow AI together — policy, access controls, sanctioned-alternative provision, detection, and review cadence. Get in touch to discuss your shadow AI posture.

Related Service

AI Governance & Risk Design

Designing the governance framework and risk architecture that keeps your AI systems compliant, auditable, and board-ready — before regulation forces the issue.

Explore this service