A comprehensive security testing platform for Large Language Models (LLMs), featuring real-time vulnerability scanning, prompt injection detection, and automated red-teaming using Garak.
- Real-time Vulnerability Sanning: Automated attacks using Garak to find security flaws.
- LLM Guard Rails: Input/Output filtering for PII, Toxicity, and Prompt Injections.
- Interactive Dashboard: Visual analytics of security posture and threat events.
- Multiple Providers: Support for OpenAI, HuggingFace, and Local LLMs (Ollama).
- Backend: Python, Django REST Framework, Garak, LLM-Guard
- Frontend: TypeScript, Next.js, Tailwind CSS
- Database: SQLite (Dev) / PostgreSQL (Prod)
# Navigate to server directory
cd server
# Install uv if you haven't already
uv run manage.py runserver
# Run migrations
uv run manage.py migrate
# Create admin user
uv run manage.py createsuperuser
# Start the server
uv run manage.py runserverBackend runs at http://localhost:8000
# Navigate to frontend directory
cd tricode
# Install dependencies
npm install
# Start development server
npm run devFrontend runs at http://localhost:3000
- Login: Access the dashboard and log in with your credentials.
- Configure LLM: Go to Settings and add your API keys (OpenAI / HuggingFace).
- Run Scan: Navigate to the Test Suite, select a model, and click Start Scan.
- View Results: Wait for the background
garakprocess to finish and view the detailed vulnerability report.
# Build and run with Docker Compose
docker-compose up --build -dFrontend:
cd tricode
npm run build
npm startBackend:
cd server
gunicorn core.wsgi:application --bind 0.0.0.0:8000This project deals with active security testing. Use responsible AI practices.
- Only scan models you have permission to test.
- Be aware of API costs associated with extensive probing.
