Threat Research 10 min read

Claude Code, Copilot & the New Enterprise Attack Surface: Why AI Coding Tools Need Security Testing

Critical CVEs in Claude Code (CVSS 8.7), supply chain risks from AI coding tools, and why your development workflow is now a security boundary. What enterprises adopting AI development tools must know.

RedTeam Partners

CREST-Certified Security Team · 2026-03-13

AI "coding" tools have become AI "everything" tools. Claude Code, GitHub Copilot, and Cursor started as developer infrastructure — but in 2026, they've become business automation platforms. Teams use Claude Code to scrape LinkedIn, automate outreach, process financial data, manage CRM workflows, and build internal tools on the fly. The security implications are enormous, and critical vulnerabilities disclosed in February 2026 prove the threat is real.

The Claude Code Moment — Far Beyond Coding

Claude Code is experiencing what industry analysts call its "ChatGPT moment." According to a January 2026 developer survey by UC San Diego and Cornell University, Claude Code ranks alongside GitHub Copilot and Cursor as one of the three most widely adopted AI platforms among professionals.

But the adoption story goes far beyond writing code. At Epic, over half of Claude Code usage comes from non-developer roles — product managers, analysts, operations teams. Companies use Claude Code to:

  • Scrape and process data — LinkedIn profiles, competitor intelligence, market research
  • Automate business workflows — CRM updates, email campaigns, report generation
  • Build internal tools — Dashboards, data pipelines, API integrations
  • Access sensitive APIs — Payment processors, customer databases, cloud infrastructure

Microsoft — which sells the competing GitHub Copilot — has widely adopted Claude Code internally across major engineering teams. Anthropic describes it as "one of the most consequential developer tools of the past year."

When a tool has terminal access, reads your file system, executes commands, and connects to APIs — and is used by non-technical staff for sensitive business operations — the security implications are staggering.

Critical Vulnerabilities: CVE-2025-59536 and CVE-2026-21852

In February 2026, Check Point Research disclosed two significant vulnerabilities in Claude Code:

CVECVSS ScoreImpact
CVE-2025-595368.7 (High)Remote Code Execution via malicious project configurations
CVE-2026-218525.3 (Medium)API key exfiltration from malicious repositories

How CVE-2025-59536 Works

The vulnerability exploits Claude Code's configuration mechanisms — Hooks, Model Context Protocol (MCP) servers, and environment variables. When a developer clones and opens an untrusted repository, arbitrary shell commands execute automatically upon tool initialisation, without any user interaction beyond opening the project.

The attack vector is deceptively simple:

  1. Attacker creates a legitimate-looking repository with malicious Claude Code configuration
  2. Developer clones the repository
  3. Developer starts Claude Code in the project directory
  4. Malicious hooks execute arbitrary commands with the developer's full system privileges

The Supply Chain Amplification Effect

This isn't just a single-developer risk. As Check Point Research noted:

"A single compromised developer account or insider threat can inject malicious configuration into company codebases, affecting entire development teams, making this a supply chain attack."

In collaborative AI development environments, a single compromised API key becomes a gateway to broader enterprise exposure. When an entire engineering organisation shares Claude Code configurations through version-controlled repositories, one poisoned config file propagates to every developer who pulls the code.

The Broader AI Development Tool Threat Landscape

Claude Code is not an isolated case. The entire category of AI-powered development tools introduces attack surfaces that traditional security models haven't fully addressed:

GitHub Copilot: CVE-2025-53773

In 2025, GitHub Copilot suffered from CVE-2025-53773, allowing remote code execution through prompt injection — potentially compromising the machines of millions of developers who rely on Copilot for code generation.

Microsoft Copilot: EchoLeak

The EchoLeak vulnerability introduced a new attack class: zero-click prompt injection. An attacker could send an email containing hidden instructions that Microsoft Copilot ingested automatically, causing the AI to extract sensitive data from OneDrive, SharePoint, and Teams — all exfiltrated via trusted Microsoft domains that bypass network security controls.

OpenClaw: First AI Agent Security Crisis of 2026

OpenClaw, an open-source AI agent with over 135,000 GitHub stars, triggered the first major AI agent security crisis of 2026 with multiple critical vulnerabilities, malicious marketplace exploits, and over 21,000 exposed instances.

Why This Matters for Enterprises

1. AI Tools Have Unprecedented Access

Unlike traditional IDEs, AI coding tools read your entire codebase, access your file system, execute commands, and connect to external services. They operate with the developer's full privileges — a compromise means full access to source code, credentials, databases, and internal APIs.

2. The Adoption Curve Outpaces Security

AI coding tools are being adopted faster than security teams can evaluate them. When Microsoft's own engineering teams adopt a competitor's tool (Claude Code) despite selling their own (Copilot), the pressure to adopt is immense. Security review is treated as a bottleneck, not a safeguard.

3. Configuration-as-Code Becomes Configuration-as-Attack

Modern AI tools rely on project-level configuration files (`.claude/`, MCP configs, hooks) that live in version control. This means:

  • Every repository contributor can modify AI tool behaviour for the entire team
  • Code review processes rarely examine AI tool configurations
  • CI/CD pipelines may execute AI tool hooks automatically
  • Third-party dependencies can include malicious AI configurations

4. Non-Developer Adoption Widens the Blast Radius

When non-developer roles use Claude Code to scrape LinkedIn, automate outreach, or process financial data, they bring less security awareness to tool configuration and repository trust decisions. They store API keys in project configs. They clone repos from untrusted sources. They give Claude Code access to production databases because it's "faster than filing a ticket." The attack surface extends beyond engineering into every business function.

5. API Keys and Credentials in AI Workflows

Business teams using Claude Code routinely work with sensitive credentials — LinkedIn API keys, payment processor tokens, CRM secrets, cloud access keys. These credentials are often stored in project directories, environment files, or MCP server configurations that CVE-2026-21852 can exfiltrate. One compromised project directory can yield keys to an organisation's entire SaaS stack.

What Your Organisation Should Do Now

  1. Inventory all AI development tools — Know which tools are in use, by whom, and with what permissions
  2. Audit AI tool configurations in all repositories — Review `.claude/`, MCP server configs, and hook definitions
  3. Restrict AI tool configuration to trusted contributors — Not every developer needs to modify AI tool settings
  4. Include AI tool configs in code review — Changes to hooks, MCP servers, and environment variables should require approval
  5. Test AI tool integrations as part of security assessments — Our AI Security Configuration Review covers AI development tool risks in Step 2 (Authentication & Access Control) and Step 5 (Input Validation)
  6. Monitor API key usage — Detect anomalous patterns that indicate key exfiltration
  7. Establish an AI tool security policy — Define approved tools, configuration standards, and repository trust requirements

The Connection to the McKinsey Breach

The McKinsey Lilli breach and the Claude Code vulnerabilities share a common thread: AI systems are being deployed faster than they're being secured. McKinsey ran Lilli in production for two years without detecting 22 unauthenticated endpoints. Enterprises are adopting Claude Code across entire engineering organisations without auditing its configuration mechanisms.

The AI security gap isn't closing — it's widening. As OpenAI has acknowledged, prompt injection is "unlikely to ever be fully solved." The question isn't whether your AI systems have vulnerabilities. It's whether you find them before an attacker does.

References

  • Check Point Research (February 2026). Caught in the Hook: RCE and API Token Exfiltration Through Claude Code Project Files. CVE-2025-59536, CVE-2026-21852.
  • The Hacker News (February 2026). Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration.
  • Dark Reading (2026). Flaws in Claude Code Put Developers' Machines at Risk.
  • VentureBeat (2026). Anthropic says Claude Code transformed programming.
  • UC San Diego & Cornell University (January 2026). Developer tool adoption survey.
  • Obsidian Security (2025). Prompt Injection Attacks: The Most Common AI Exploit.
  • TechCrunch (December 2025). OpenAI says AI browsers may always be vulnerable to prompt injection attacks.

Is Your AI Infrastructure Secure?

Book a free 30-minute AI security analysis with our CREST-certified team. We'll show you what an attacker could exploit in your AI systems.

Book Free Analysis