Free Resource

25-Point AI Security Checklist

The same checklist our CREST-certified team uses to assess enterprise AI systems. Based on the OWASP Top 10 for LLMs, NIST AI RMF, and lessons from the McKinsey Lilli breach.

Covers the vulnerabilities that led to the McKinsey Lilli breach (46.5M messages exposed)

Includes AI coding tool security (Claude Code, Copilot, Cursor)

Maps to EU AI Act compliance requirements (deadline: August 2, 2026)

73% of AI deployments fail this checklist

According to OWASP, prompt injection vulnerabilities appear in over 73% of production AI systems assessed during security audits.

Get the Checklist

Free instant download — no spam, ever

We respect your privacy. Unsubscribe anytime.

What's Inside the Checklist

AI API Security (5 checks)

All AI API endpoints require authentication (no unauthenticated endpoints)

Role-based or attribute-based access control (RBAC/ABAC) enforced on every endpoint

Rate limiting and throttling active on all AI-facing APIs

API documentation is not publicly accessible

Input validation and output sanitisation applied to all AI endpoints
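To make the first two checks concrete, here is a minimal sketch of what "authentication on every endpoint" plus "rate limiting and throttling" can look like in code. Everything here is illustrative: the `TokenBucket` class, the `handle_request` helper, and the placeholder key are our invented names, not part of any specific framework.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: refills over time, denies when empty."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Placeholder only; real keys belong in a secrets manager, never in source.
VALID_KEYS = {"sk-demo-key"}

def handle_request(api_key: str, bucket: TokenBucket) -> int:
    """Return an HTTP-style status: 401 unauthenticated, 429 throttled, 200 ok."""
    if api_key not in VALID_KEYS:
        return 401          # no unauthenticated endpoints
    if not bucket.allow():
        return 429          # throttle before the model ever sees the request
    return 200
```

The ordering matters: authentication is checked before a token is spent, so attackers probing with bad keys cannot exhaust a legitimate user's quota.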

Prompt & Model Security (5 checks)

System prompts stored separately from user-accessible databases

System prompt integrity monitoring in place

Prompt injection defences tested (direct, indirect, multi-turn)

Output filtering prevents PII leakage and harmful content generation

Model behaviour monitoring detects anomalous responses
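The "output filtering prevents PII leakage" check is easy to demo. Below is a deliberately minimal sketch that redacts email addresses and card-like digit runs from a model response before it leaves the service; the patterns are illustrative starting points, nowhere near exhaustive, and real deployments layer dedicated PII-detection tooling on top.

```python
import re

# Illustrative patterns only: a production filter needs far broader coverage.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digit card-like runs

def redact_pii(text: str) -> str:
    """Scrub obvious PII from model output before returning it to the caller."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    text = CARD_RE.sub("[REDACTED_NUMBER]", text)
    return text
```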

Data & Privacy (5 checks)

AI input/output data flows are documented and mapped

Data minimisation principles applied — AI only accesses data it needs

Prompt history and chat logs encrypted at rest and in transit

PII handling compliant with GDPR/PDPA/HIPAA as applicable

Third-party AI service data processing agreements reviewed
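Data minimisation is simplest to enforce with an explicit allowlist: build the model's context only from fields it actually needs, so anything else in the record can never leak through a prompt. The field names below are invented for illustration.

```python
# Assumed example fields: only what the AI needs to answer order queries.
ALLOWED_FIELDS = {"order_id", "status", "last_update"}

def minimise(record: dict) -> dict:
    """Drop every field not on the allowlist before it reaches the model."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

An allowlist fails closed: a newly added sensitive column is excluded by default, whereas a denylist would silently pass it through.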

RAG & Training Pipeline (5 checks)

RAG document access controls enforce user-level permissions

Cross-tenant data isolation validated in multi-user RAG systems

Training data provenance verified — no tainted data sources

Vector store access restricted and monitored

External data ingestion validated for injection payloads
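The first two RAG checks boil down to one rule: similarity search is not authorisation. A sketch of the post-retrieval filter they look for, with invented data structures, might be:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    """A retrieved document chunk carrying its own access-control list."""
    text: str
    allowed_users: frozenset

def authorised_chunks(results: list, user_id: str) -> list:
    """Keep only the chunks the requesting user is permitted to read.

    Applied *after* vector search, so a shared index never leaks
    another tenant's documents into the prompt.
    """
    return [c for c in results if user_id in c.allowed_users]
```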

AI Tool & Compliance Governance (5 checks)

Inventory of all AI tools in use across the organisation (including Claude Code, Copilot, etc.)

AI tool configurations in repositories reviewed as part of the code review process

API keys and credentials for AI services rotated regularly and monitored

EU AI Act risk classification completed for all AI systems

Adversarial testing (red teaming) scheduled before the August 2, 2026 compliance deadline
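The key-rotation check can even be automated. A minimal sketch, assuming you keep a credential inventory with creation timestamps and a 90-day rotation window (both assumptions, not a compliance requirement):

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: rotate AI-service credentials at least every 90 days.
ROTATION_WINDOW = timedelta(days=90)

def overdue_keys(inventory: dict, now: datetime) -> list:
    """Return names of credentials created more than ROTATION_WINDOW ago.

    `inventory` maps credential name -> creation datetime (timezone-aware).
    """
    return [name for name, created in inventory.items()
            if now - created > ROTATION_WINDOW]
```

Run on a schedule, this turns "rotated regularly and monitored" from a policy statement into an alert.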