Skip to content

Admin12121/Hackathon-S38

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

14 Commits
 
 
 
 
 
 
 
 
 
 

Repository files navigation

AI Security Command Center (ASCC)

A comprehensive security testing platform for Large Language Models (LLMs), featuring real-time vulnerability scanning, prompt injection detection, and automated red-teaming using Garak.

Dashboard Preview

🚀 Features

  • Real-time Vulnerability Sanning: Automated attacks using Garak to find security flaws.
  • LLM Guard Rails: Input/Output filtering for PII, Toxicity, and Prompt Injections.
  • Interactive Dashboard: Visual analytics of security posture and threat events.
  • Multiple Providers: Support for OpenAI, HuggingFace, and Local LLMs (Ollama).

🛠️ Tech Stack

  • Backend: Python, Django REST Framework, Garak, LLM-Guard
  • Frontend: TypeScript, Next.js, Tailwind CSS
  • Database: SQLite (Dev) / PostgreSQL (Prod)

⚡ Quick Start

1. Backend Setup

# Navigate to server directory
cd server

# Install uv if you haven't already
uv run manage.py runserver


# Run migrations
uv run manage.py migrate

# Create admin user
uv run manage.py createsuperuser

# Start the server
uv run manage.py runserver

Backend runs at http://localhost:8000

2. Frontend Setup

# Navigate to frontend directory
cd tricode

# Install dependencies
npm install

# Start development server
npm run dev

Frontend runs at http://localhost:3000

📖 Usage

  1. Login: Access the dashboard and log in with your credentials.
  2. Configure LLM: Go to Settings and add your API keys (OpenAI / HuggingFace).
  3. Run Scan: Navigate to the Test Suite, select a model, and click Start Scan.
  4. View Results: Wait for the background garak process to finish and view the detailed vulnerability report.

📦 Deployment

Production Build via Docker

# Build and run with Docker Compose
docker-compose up --build -d

Manual Deployment

Frontend:

cd tricode
npm run build
npm start

Backend:

cd server
gunicorn core.wsgi:application --bind 0.0.0.0:8000

🔒 Security

This project deals with active security testing. Use responsible AI practices.

  • Only scan models you have permission to test.
  • Be aware of API costs associated with extensive probing.

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published

Contributors 4

  •  
  •  
  •  
  •