Master AI Security. Break AI Systems. Defend What Matters.
A free, open-source course covering offensive security testing of AI systems — from prompt injection to supply chain attacks. 60+ hours of content with hands-on Docker labs.
A comprehensive, hands-on curriculum covering offensive security testing of AI systems — LLMs, RAG pipelines, multi-agent systems, and AI infrastructure. Every module includes a Docker-based lab environment.
Each module includes detailed topics, a hands-on Docker lab, and curated references. Click any module to expand its full content.
Deploy a complete AI red teaming environment with local LLMs (Ollama), vector databases, and testing tools. Includes a vulnerable chatbot application as your first target.
Attack a series of increasingly hardened chatbots. Start with unprotected models, progress through guardrail-protected systems, and learn to systematically discover bypasses.
Build and then systematically compromise a RAG application. Poison its knowledge base, hijack retrieval, perform embedding inversion, and exfiltrate data through the LLM.
Attack a multi-agent customer service system where agents collaborate to handle requests. Compromise one agent to influence others, escalate privileges, and exfiltrate data through tool invocations.
Simulate supply chain attacks against an ML pipeline. Create a backdoored model, exploit pickle deserialization, demonstrate typosquatting, and poison training data to corrupt model behavior.
Extract a proprietary model's behavior through strategic API querying. Perform membership inference, attempt training data extraction, and analyze encrypted traffic for information leakage.
Build and run an automated AI red teaming pipeline using garak, PyRIT, and promptfoo. Test multiple models, generate comprehensive reports, and integrate security testing into a CI/CD workflow.
Conduct a complete AI red team engagement against a realistic AI-powered enterprise application. Perform reconnaissance, chain multiple exploits, demonstrate business impact, and deliver a professional report.
Three industry-leading tools used throughout the course for automated AI vulnerability discovery and red teaming.
LLM vulnerability scanner with 47+ probes across 12 categories. Automated detection of prompt injection, data leakage, toxicity, hallucination, and more.
Python Risk Identification Tool for generative AI. Multi-turn attack orchestration, converter chains for evasion, automated scoring, and comprehensive reporting.
Every lab runs locally via Docker. Clone the repository, pick a lab, and start hacking.
# Download and extract the labs curl -LO airt-labs.zip unzip airt-labs.zip -d airt-labs cd airt-labs # Start any lab (e.g., Lab 01 - Foundations) cd lab01-foundations docker-compose up # Access the lab interface open http://localhost:8888 # Run vulnerability scan with garak garak --model_type ollama --model_name llama3 --probes all # Launch PyRIT orchestrator python -m pyrit.orchestrator --config config.yaml
The AI Red Team Academy is a free, open-source educational resource designed to democratize AI security knowledge. We believe that understanding offensive techniques is essential for building robust AI defenses.
This course covers similar ground to commercial AI red teaming certifications — but is freely accessible to everyone. Whether you're a seasoned penetration tester, an AI researcher, or a security-curious developer, AIRT provides the hands-on experience you need.
Built for security professionals, researchers, and anyone passionate about AI safety. All labs run locally via Docker, requiring no cloud API keys or external services. Your testing environment stays completely under your control.
The curriculum spans 60–80 hours of content across 8 modules, from foundational concepts to full red team engagements. Each module includes both theory and a hands-on Docker lab with real attack simulations.