
Shadow AI: 67% of Employees Use AI Tools at Work, Only 18% of Companies Have AI Security Policies

Shadow AI is the new shadow IT — employees using Claude Code, ChatGPT, Copilot, and other AI tools without security oversight. The gap between adoption and governance is your biggest enterprise vulnerability.

RedTeam Partners

CREST-Certified Security Team · 2026-03-13

Shadow AI is the fastest-growing attack surface in enterprise security. According to Salesforce's 2026 Workforce AI Survey, 67% of employees now use AI tools at work — but only 18% of organisations have formal AI security policies. The gap between adoption and governance is staggering, and it's creating vulnerabilities that no firewall, EDR, or traditional security programme can address.

Shadow IT gave us unauthorised Dropbox accounts and personal Gmail forwarding rules. Shadow AI is exponentially more dangerous because AI tools don't just store data — they process, transform, and potentially expose it to third-party model providers.

The Scale of Shadow AI in 2026

| Metric | Value | Source |
| --- | --- | --- |
| Employees using AI tools at work | 67% | Salesforce Workforce AI Survey 2026 |
| Companies with formal AI policies | 18% | Salesforce Workforce AI Survey 2026 |
| Employees who've pasted confidential data into AI | 43% | Cyberhaven Data Loss Report 2025 |
| AI tools used per enterprise (average) | 14 | Productiv SaaS Intelligence Report 2026 |
| Non-developer Claude Code users at major companies | >50% | Anthropic Usage Analytics 2026 |
| Organisations that can identify all AI tools in use | 12% | Gartner AI Governance Survey 2026 |

How Shadow AI Creates Enterprise Vulnerabilities

1. Data Leakage to AI Providers

When an employee pastes customer data, source code, financial reports, or strategic plans into ChatGPT, Claude, or Copilot, that data enters the provider's processing pipeline. While enterprise agreements may prevent training on this data, the exposure itself creates compliance risk under GDPR, HIPAA, and industry-specific regulations.
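One practical mitigation is client-side redaction before any text reaches a provider's API. A minimal sketch in Python, assuming simple regex detectors (the patterns below are illustrative only; production PII detection needs far broader, locale-aware rules):

```python
import re

# Hypothetical detectors: email addresses, US SSNs, and payment card numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders before text leaves the network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

redact("Contact jane.doe@example.com")  # -> "Contact [EMAIL]"
```

Redaction of this kind reduces, but does not eliminate, exposure; it belongs alongside, not instead of, contractual controls such as a DPA.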

Real case: Samsung banned ChatGPT in 2023 after engineers leaked semiconductor designs. In 2025, a major pharmaceutical company discovered employees had uploaded clinical trial data to multiple AI tools — a potential FDA and EMA regulatory violation worth tens of millions in penalties.

2. Claude Code as Business Automation Platform

As we detailed in our AI coding tools attack surface analysis, Claude Code has evolved far beyond a programming assistant. Non-developer employees use it to:

  • Scrape LinkedIn for lead generation and competitive intelligence
  • Automate CRM workflows with direct database access
  • Process financial data including invoices, reports, and forecasts
  • Manage infrastructure scripts and deployment pipelines
  • Build internal tools that interact with production systems

Each of these use cases grants Claude Code access to sensitive systems and data. If a single user's environment is compromised — via malicious project configurations (CVE-2025-59536, CVSS 8.7) — the attacker inherits whatever access that employee had. API keys, database credentials, CRM access, financial systems — all accessible through a compromised AI tool configuration.

3. Unvetted AI Tool Supply Chain

Shadow AI tools often integrate via browser extensions, API keys, or OAuth connections that bypass security review. Each integration creates a supply chain dependency the security team doesn't know about. According to Productiv's 2026 analysis, the average enterprise has 14 distinct AI tools in use, yet the IT team is typically aware of only 4-5.

4. Inconsistent Data Classification

Even when employees use sanctioned AI tools, they make real-time decisions about what data to share. Without clear policies, an employee's judgment about "is this confidential?" varies wildly. Marketing might consider customer demographics non-sensitive; legal sees it as PII governed by GDPR. The AI tool doesn't know the difference.
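Removing that per-employee judgment call means encoding the classification-to-tool mapping explicitly. A sketch of one possible policy table, assuming hypothetical tiers ("consumer", "enterprise", "enterprise_dpa") that are illustrative rather than a recommended taxonomy:

```python
# Hypothetical mapping of data classification to AI tool tiers allowed to receive it.
ALLOWED = {
    "public":       {"consumer", "enterprise", "enterprise_dpa"},
    "internal":     {"enterprise", "enterprise_dpa"},
    "confidential": {"enterprise_dpa"},   # only tools covered by a DPA
    "restricted":   set(),                # never leaves the organisation
}

def may_share(classification: str, tool_tier: str) -> bool:
    """Return True only if this tool tier is approved for this data class."""
    return tool_tier in ALLOWED.get(classification, set())
```

The point is that the decision lives in a reviewable artefact, not in each employee's head at the moment of pasting.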

5. Model Output as Trusted Input

When employees use AI-generated analysis for business decisions without verification, they introduce a subtle but critical risk. AI hallucinations become "facts" in reports. AI-generated code with security vulnerabilities enters production. AI-drafted contracts contain errors that create legal liability. The trust placed in AI output is often higher than in equivalent human output — a cognitive bias that shadows the entire organisation.

The McKinsey Connection: What Shadow AI Enables

The McKinsey Lilli breach exposed what happens when an enterprise AI platform lacks proper security. But consider: McKinsey at least knew Lilli existed and had some controls around it. Shadow AI creates the same risk profile with zero visibility:

  • McKinsey had 95 system prompts — your shadow AI tools have prompts you've never reviewed
  • McKinsey had 22 unauthenticated endpoints — your shadow tools have connections you've never audited
  • McKinsey had 46.5M messages exposed — your shadow tools process data volumes you can't measure

Building an AI Governance Framework

Phase 1: Discovery (Weeks 1-2)

  1. AI tool inventory — Use network traffic analysis, SSO/OAuth logs, expense reports, and browser extension audits to identify all AI tools in use
  2. Data flow mapping — For each tool, document what data enters, where it's processed, and what outputs are generated
  3. User interviews — Talk to department heads about how their teams use AI. You'll discover 2-3x more tools than technical monitoring reveals
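The network-traffic side of step 1 can be automated against existing proxy or DNS logs. A minimal sketch, assuming a simplified "timestamp user domain" log format and a hand-maintained (and necessarily incomplete) list of AI API domains:

```python
from collections import Counter

# Illustrative watchlist of well-known AI API endpoints; extend for your estate.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def discover_ai_usage(log_lines):
    """Count hits to known AI API domains, keyed by (user, domain)."""
    hits = Counter()
    for line in log_lines:
        # Assumed log format: "timestamp user domain" (adapt to your proxy's schema)
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits[(parts[1], parts[2])] += 1
    return hits

logs = [
    "2026-03-01T09:12 alice api.openai.com",
    "2026-03-01T09:15 bob api.anthropic.com",
    "2026-03-01T09:20 alice api.openai.com",
]
```

A domain watchlist only finds the tools you already know to look for, which is exactly why the user interviews in step 3 surface more than monitoring does.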

Phase 2: Policy (Weeks 3-4)

  1. AI acceptable use policy — Define which tools are sanctioned, what data can be shared, and which use cases are prohibited
  2. Data classification for AI — Create specific guidelines for what data classifications are permitted with each category of AI tool
  3. Incident response — Add AI-specific scenarios to your incident response plan (prompt injection, data leakage, model manipulation)

Phase 3: Technical Controls (Weeks 5-8)

  1. AI-aware DLP — Deploy data loss prevention rules that detect sensitive data being sent to AI APIs
  2. Centralised AI gateway — Route all AI API calls through a managed proxy that enforces policies, logs usage, and enables monitoring
  3. Secure configurations — For sanctioned tools, implement security hardening (SSO, API key management, data retention policies)
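At the gateway, the DLP rule from step 1 reduces to a pre-flight check on every outbound prompt. A sketch of one such check, assuming a few illustrative secret signatures (real deployments need vendor-specific detectors and entropy-based heuristics):

```python
import re

# Illustrative secret patterns: AWS access key ID shape, PEM headers, inline passwords.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    re.compile(r"\bpassword\s*[:=]\s*\S+", re.IGNORECASE),
]

def gateway_allow(prompt: str) -> bool:
    """Return False if the outbound prompt appears to contain a credential."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)
```

Because the check runs at a single choke point rather than on each endpoint, it also produces the centralised usage log that monitoring in Phase 4 depends on.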

Phase 4: Testing and Monitoring (Ongoing)

  1. AI red teaming — Conduct regular adversarial testing of sanctioned AI systems
  2. Continuous monitoring — Monitor for new shadow AI tools, unusual data flows, and policy violations
  3. Compliance mapping — Ensure your governance framework satisfies EU AI Act requirements and other applicable regulations

The Regulatory Dimension

Shadow AI creates direct regulatory exposure:

  • GDPR — Data processing without documented legal basis (every time an employee sends PII to an AI tool without a DPA in place)
  • EU AI Act — Deploying AI systems without required risk assessment or documentation (Article 9 applies even if the deployment was "unofficial")
  • Industry regulations — HIPAA (healthcare), FINRA/FCA (financial services), ITAR (defence) — all have data handling requirements that shadow AI routinely violates

The EU AI Act's August 2, 2026 deadline makes governance urgent. Penalties reach up to €15 million or 3% of global revenue for high-risk obligations (up to €35 million or 7% for prohibited AI practices) — and "we didn't know our employees were using AI" is not a defence.

Self-Assessment: Shadow AI Risk Score

Score your organisation (1 point each):

  1. Do you have a complete inventory of all AI tools used across the organisation?
  2. Is there a formal AI acceptable use policy that all employees have acknowledged?
  3. Can you detect when sensitive data is sent to AI APIs?
  4. Are all AI tools covered by enterprise agreements with appropriate DPAs?
  5. Have you conducted AI-specific security testing of your sanctioned AI tools?
  6. Is AI governance included in your incident response plan?
  7. Do you monitor for new, unsanctioned AI tool adoption?
  8. Have employees received training on AI security and data handling?

Score 0-2: Critical risk — immediate action required
Score 3-5: Moderate risk — significant gaps to address
Score 6-8: Managed risk — continue strengthening
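The banding above is trivial to encode if you want to track the score over time; a one-function sketch:

```python
def risk_band(score: int) -> str:
    """Map the 0-8 shadow AI self-assessment score to its risk band."""
    if not 0 <= score <= 8:
        raise ValueError("score must be between 0 and 8")
    if score <= 2:
        return "Critical risk"
    if score <= 5:
        return "Moderate risk"
    return "Managed risk"
```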

Start with our free 25-point AI security checklist for a comprehensive evaluation of your AI security posture.


Is Your AI Infrastructure Secure?

Book a free 30-minute AI security analysis with our CREST-certified team. We'll show you what an attacker could exploit in your AI systems.

Book Free Analysis