
AIRT — AI Red Team Academy

A free, open-source course covering offensive security testing of AI systems — from prompt injection to supply chain attacks. 60+ hours of content with hands-on Docker labs.

🌐 View the course →

Modules

| # | Module | Topics |
|---|--------|--------|
| 1 | Foundations of AI Red Teaming | MITRE ATLAS, OWASP LLM Top 10, threat modeling |
| 2 | Prompt Injection Attacks | Direct/indirect injection, jailbreaks, filter bypasses |
| 3 | RAG Exploitation & Vector Database Attacks | Knowledge base poisoning, embedding attacks |
| 4 | Multi-Agent System Exploitation | Agent hijacking, tool abuse, memory poisoning |
| 5 | AI Supply Chain & Infrastructure Attacks | Model backdoors, pickle exploits, dependency attacks |
| 6 | Model Extraction & Inference Attacks | Model stealing, membership inference, side channels |
| 7 | Automated AI Red Teaming at Scale | garak, PyRIT, promptfoo, CI/CD integration |
| 8 | Post-Exploitation & Impact Analysis | Lateral movement, reporting, regulatory frameworks |

Hands-on Labs

Each module includes a Docker-based lab environment. No cloud API keys needed — everything runs locally via Ollama.
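To illustrate how the pieces fit together, a lab's Compose file might pair a local Ollama service with the lab interface on port 8888. This is a hypothetical sketch, not the repo's actual file: the service names, `build` context, and volume name are assumptions, though `ollama/ollama` and the `/root/.ollama` model directory are the standard image and path.

```yaml
# Hypothetical sketch of a lab docker-compose.yml (not the repo's actual file)
services:
  ollama:
    image: ollama/ollama            # serves models locally; no cloud API keys
    volumes:
      - ollama-models:/root/.ollama # persist downloaded model weights
  lab:
    build: .                        # the lab's own image (assumed)
    ports:
      - "8888:8888"                 # lab interface at http://localhost:8888
    depends_on:
      - ollama
volumes:
  ollama-models:
```

With a layout like this, `docker compose up` starts both containers, and model downloads land in the named volume rather than being re-pulled on every run.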

Quick Start

```shell
# Clone the repo
git clone https://github.com/0x4d31/airt.git
cd airt/labs

# Start any lab (e.g., Lab 01)
cd lab01-foundations
docker compose up

# Access the lab interface (macOS; otherwise browse to the URL directly)
open http://localhost:8888
```

Prerequisites

  • Docker and Docker Compose
  • 8 GB+ RAM (16 GB recommended for Labs 07–08)
  • ~20 GB disk space for model downloads

License

Course content is licensed under CC BY-SA 4.0; code and lab files are licensed under MIT.


Built with Perplexity Computer.
