ImageBreak is a comprehensive framework for testing AI model safety and content moderation systems. It provides tools to evaluate how well AI models handle potentially harmful prompts and helps researchers identify vulnerabilities in content filtering systems. It is based on the following research paper: https://github.com/ardada2468/ImageBreak/blob/63a9403cb4410b8bc7dffad2e9c74c14e5027e0c/CMSC396H_Final_Paper%20(3).pdf
This tool is designed exclusively for research purposes to improve AI safety and content moderation systems. The generated content should only be used by researchers, AI safety teams, and developers working to make AI systems safer and more robust.
- Multi-Model Support: Built-in interfaces for OpenAI GPT models, Google Gemini, and HuggingFace models
- Extensible Architecture: Easy-to-use abstract base classes for adding custom models
- Comprehensive Testing Pipeline:
  - Generate boundary-testing prompts with configurable system instructions
  - Alter prompts to test filter evasion
  - Automated image generation testing with cyclic quality assessment
  - Content moderation analysis (optional AWS integration)
- Cyclic Generation: Advanced retry logic with HuggingFace-powered image quality assessment
- Configurable System Instructions: Customize AI behavior through environment variables
- Results Analysis: Comprehensive reporting and data export capabilities
- CLI Interface: Command-line tool for batch operations and automation
- Streamlit Web UI: User-friendly interface for configuration and testing
- Optional AWS Integration: Built-in support for AWS Rekognition for content analysis
pip install imagebreak==1.0.1

# From source
git clone https://github.com/yourusername/imagebreak.git
cd imagebreak
pip install -e .

# Install all dependencies
pip install -r requirements.txt

# For development
pip install imagebreak[dev]

Create a .env file with your API keys:
# Required: At least one API key
OPENAI_API_KEY=your_openai_api_key_here
GOOGLE_API_KEY=your_google_api_key_here
# Optional: HuggingFace for image analysis (recommended)
HUGGINGFACE_TOKEN=your_huggingface_token_here
# Cyclic Generation Settings
ENABLE_CYCLIC_REGENERATION=true
MAX_RETRY_ATTEMPTS=3
QUALITY_THRESHOLD=0.7
# Optional: AWS for moderation
USE_AWS_MODERATION=false
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_REGION=us-east-1
# Configurable System Instructions
BOUNDARY_PROMPT_SYSTEM_INSTRUCTION=You are an AI model researcher...
PROMPT_ALTERATION_SYSTEM_INSTRUCTION=You are an AI researcher focused...
IMAGE_ANALYSIS_SYSTEM_INSTRUCTION=You are an AI image analysis expert...
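These settings can also be consumed from Python. The sketch below is illustrative: it assumes the variables are loaded with python-dotenv and that `Config` reads them from the environment at construction time (`quality_threshold` mirrors the `Config` parameter shown later in this README; treat attribute access as an assumption).

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv
from imagebreak import Config

load_dotenv()  # read .env from the current working directory

config = Config()
# Assumed attribute, mirroring the QUALITY_THRESHOLD setting above
print(config.quality_threshold)
# The raw values are always available via os.getenv
print(os.getenv("MAX_RETRY_ATTEMPTS"))
```

The CLI provides a comprehensive interface for all ImageBreak functionality: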
# Launch the interactive web interface
imagebreak web-ui
# Custom host and port
imagebreak web-ui --host 0.0.0.0 --port 8080
# Headless mode (no auto-browser opening)
imagebreak web-ui --headless --no-browser
# Dark theme
imagebreak web-ui --theme dark

# Check current configuration and API key status
imagebreak config-info

# Generate prompts that test ethical boundaries
imagebreak generate-prompts \
--policies your_content_policy.txt \
--output generated_prompts.json \
--num-prompts 10 \
--topics "violence,misinformation" \
--model openai
# With custom system instruction
imagebreak generate-prompts \
--policies policy.txt \
--output prompts.json \
--system-instruction "Custom instruction for boundary testing..."

# Create altered versions designed to evade filters
imagebreak alter-prompts \
--prompts generated_prompts.json \
--output altered_prompts.json \
--model openai
# Specify custom output file
imagebreak alter-prompts \
--prompts prompts.json \
--output custom_altered.json \
--model gemini

# Advanced cyclic generation with quality assessment
imagebreak test-images \
--prompts altered_prompts.json \
--use-cyclic \
--max-attempts 5 \
--quality-threshold 0.8 \
--image-model openai \
--text-model openai \
--hf-model "Salesforce/blip2-flan-t5-xl" \
--save-images \
--output-folder ./generated_images
# Legacy mode (no cyclic generation)
imagebreak test-images \
--prompts prompts.json \
--no-use-cyclic \
--image-model openai

# Run complete pipeline: generate → alter → test
imagebreak full-test \
--policies content_policy.txt \
--num-prompts 5 \
--image-model openai \
--text-model openai \
--use-cyclic \
--quality-threshold 0.7
# Quick test with Gemini text model
imagebreak full-test \
--policies policy.txt \
--num-prompts 3 \
--text-model gemini \
--image-model openai

Environment File Loading:
# Load configuration from custom .env file
imagebreak --env-file custom.env generate-prompts --policies policy.txt --output prompts.json

Verbose Logging:
# Enable detailed logging
imagebreak --verbose test-images --prompts prompts.json

Custom Output Directory:
# Set custom output directory for all results
imagebreak --output-dir ./custom_results test-images --prompts prompts.json

| Command | Description | Key Options |
|---|---|---|
| `web-ui` | Launch Streamlit web interface | `--host`, `--port`, `--theme`, `--headless` |
| `config-info` | Display configuration and API key status | N/A |
| `generate-prompts` | Generate boundary-testing prompts | `--policies`, `--num-prompts`, `--model`, `--system-instruction` |
| `alter-prompts` | Create filter-evasion variants | `--prompts`, `--model`, `--system-instruction` |
| `test-images` | Test image generation with quality assessment | `--use-cyclic`, `--max-attempts`, `--quality-threshold`, `--hf-model` |
| `full-test` | Complete pipeline in one command | `--policies`, `--use-cyclic`, `--quality-threshold` |
Option 1: Via CLI (Recommended)
# Launch web interface directly from CLI
imagebreak web-ui

Option 2: Direct Launch
# Launch the web interface manually
streamlit run streamlit_app.py

Navigate to http://localhost:8501 for the interactive interface with:
- API Configuration: Set up keys and models
- System Instructions: Customize AI behavior
- Testing Interface: Run tests with real-time progress
- Results Visualization: View detailed metrics and generated images
from imagebreak import ImageBreakFramework, Config
from imagebreak.models import OpenAIModel, GeminiModel
# Initialize with configuration
config = Config()
framework = ImageBreakFramework(config)
# Add models
framework.add_model("openai", OpenAIModel(
api_key=config.openai_api_key,
config=config
))
# Load your content policies
with open("your_content_policy.txt", "r") as f:
policies = f.read()
# Generate and test with cyclic generation
test_prompts = framework.generate_boundary_prompts(
policies=policies,
num_prompts=10,
topics=["violence", "misinformation"]
)
# Run cyclic generation tests
results = framework.test_image_generation_cyclic(
prompt_data_list=test_prompts,
save_images=True
)
# Analyze results
successful = sum(1 for r in results if r.success)
quality_scores = [r.final_quality_score for r in results
                  if r.final_quality_score is not None]
avg_quality = sum(quality_scores) / len(quality_scores)  # average only scored results
print(f"Success rate: {successful/len(results)*100:.1f}%")
print(f"Average quality: {avg_quality:.2f}")

from imagebreak.models.base import BaseModel
from imagebreak.types import ModelResponse
class CustomModel(BaseModel):
    def __init__(self, api_key: str, model_name: str):
        super().__init__()
        self.api_key = api_key
        self.model_name = model_name

    def generate_text(self, prompt: str, **kwargs) -> ModelResponse:
        # Implement your model's text generation
        raise NotImplementedError

    def generate_image(self, prompt: str, **kwargs) -> ModelResponse:
        # Implement your model's image generation
        raise NotImplementedError

# Use your custom model
framework.add_model("my-model", CustomModel(api_key="...", model_name="..."))

from imagebreak import ImageBreakFramework, Config
config = Config(
max_retries=3,
timeout=30,
batch_size=10,
output_dir="./results",
enable_logging=True,
log_level="INFO",
enable_cyclic_regeneration=True,
max_retry_attempts=5,
quality_threshold=0.8,
use_aws_moderation=False
)
framework = ImageBreakFramework(config=config)

from imagebreak.models import HuggingFaceImageAnalyzer
# Use custom vision model for image analysis
analyzer = HuggingFaceImageAnalyzer(
model_name="Salesforce/blip2-flan-t5-xl"
)
# Custom device configuration
analyzer = HuggingFaceImageAnalyzer(
model_name="Salesforce/blip2-opt-2.7b",
device="cuda" # or "cpu"
)

- Quality-Based Retries: Automatically retry generation if image quality is below threshold
- HuggingFace Integration: Uses BLIP-2 and other vision models for uncensored quality assessment
- Prompt Refinement: Automatically improves prompts based on quality feedback
- Detailed Metrics: Track attempts, quality scores, and success rates
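Conceptually, the cyclic loop behaves like the sketch below. This is a simplified illustration, not the framework's actual implementation; `generate_image`, `score_image`, and `refine_prompt` are stand-ins for the real model calls.

```python
def cyclic_generate(prompt, generate_image, score_image, refine_prompt,
                    max_attempts=3, quality_threshold=0.7):
    """Retry image generation until a quality score clears the threshold."""
    score = 0.0
    for attempt in range(1, max_attempts + 1):
        image = generate_image(prompt)
        score = score_image(image, prompt)  # e.g. a BLIP-2 based assessment
        if score >= quality_threshold:
            return {"image": image, "quality": score, "attempts": attempt}
        # Below threshold: let a text model rewrite the prompt and try again
        prompt = refine_prompt(prompt, score)
    return {"image": None, "quality": score, "attempts": max_attempts}
```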
All AI model behaviors are customizable via environment variables:
- Boundary Testing: BOUNDARY_PROMPT_SYSTEM_INSTRUCTION
- Prompt Alteration: PROMPT_ALTERATION_SYSTEM_INSTRUCTION
- Image Analysis: IMAGE_ANALYSIS_SYSTEM_INSTRUCTION
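For one-off experiments, the same variables can be set in-process before the framework is constructed (a sketch, assuming the instruction is read from the environment at construction time):

```python
import os

# Process-local override; assumed to be picked up when Config is created
os.environ["PROMPT_ALTERATION_SYSTEM_INSTRUCTION"] = (
    "You are an AI researcher focused on rephrasing prompts..."
)

from imagebreak import ImageBreakFramework, Config

framework = ImageBreakFramework(Config())
```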
- Modular Commands: Separate commands for each pipeline stage
- Cyclic Generation Support: Full CLI integration for quality-based retries
- Real-time Progress: Visual feedback and progress tracking
- Flexible Configuration: Override settings via CLI arguments
The framework provides comprehensive metrics:
- Success Rates: Generation success vs. filter blocking
- Quality Scores: AI-assessed image quality (0.0-1.0)
- Attempt Tracking: Detailed logs of retry attempts and prompt refinements
- Filter Bypass Analysis: Effectiveness of prompt alteration techniques
- Export Options: JSON, CSV reports for further analysis
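The exported JSON can be post-processed with standard tooling. In the sketch below, the `success` and `final_quality_score` fields follow the Python API shown earlier; the exact file layout is an assumption.

```python
import json

with open("./results/image_test_results_1704067200.json") as f:
    results = json.load(f)  # assumed layout: a list of per-prompt result records

blocked = [r for r in results if not r.get("success")]
scores = [r["final_quality_score"] for r in results
          if r.get("final_quality_score") is not None]

print(f"Blocked by filters: {len(blocked)}/{len(results)}")
if scores:
    print(f"Mean quality: {sum(scores) / len(scores):.2f}")
```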
Testing image generation with 5 prompts
Cyclic generation: True
Max attempts: 3
Quality threshold: 0.7
Analysis model: Salesforce/blip2-opt-2.7b
Initialized HuggingFace analyzer: Salesforce/blip2-opt-2.7b

Cyclic Generation Results:
Successful: 4/5 (80.0%)
Total attempts: 12
Average quality: 0.78
Detailed results saved to ./results/image_test_results_1704067200.json

OPENAI_API_KEY=sk-...
ENABLE_CYCLIC_REGENERATION=true
QUALITY_THRESHOLD=0.7

# API Keys
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AI...
HUGGINGFACE_TOKEN=hf_...
# Cyclic Generation
ENABLE_CYCLIC_REGENERATION=true
MAX_RETRY_ATTEMPTS=5
QUALITY_THRESHOLD=0.8
# Model Selection
DEFAULT_TEXT_MODEL=openai
DEFAULT_IMAGE_MODEL=openai
# Optional AWS
USE_AWS_MODERATION=false
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
# Custom System Instructions
BOUNDARY_PROMPT_SYSTEM_INSTRUCTION=Custom instruction for boundary testing...
PROMPT_ALTERATION_SYSTEM_INSTRUCTION=Custom instruction for prompt alteration...
IMAGE_ANALYSIS_SYSTEM_INSTRUCTION=Custom instruction for image analysis...

Install this version of NumPy using the command below. Create a virtual environment if your base Python install needs a different NumPy version.
pip install numpy==1.26.4
When testing boundary prompts, you may encounter content policy blocks:
Prompt 1: Blocked by content policy after 3 attempts
How the Framework Handles This:
- Automatic Sanitization: The system automatically creates sanitized versions of boundary-testing prompts
- Progressive Attempts: First tries sanitized prompt, then original, then refined versions
- Transparent Feedback: Shows whether "sanitized" or "original" prompt was used for successful generations
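In rough pseudocode, the ordering looks like this (a simplified sketch with hypothetical `sanitize` and `refine` helpers, not the framework's actual code):

```python
def attempt_order(original_prompt, sanitize, refine, max_attempts=3):
    """Yield (label, prompt) pairs in the order described above."""
    yield "sanitized", sanitize(original_prompt)  # 1. sanitized version first
    if max_attempts >= 2:
        yield "original", original_prompt         # 2. then the original
    for _ in range(max_attempts - 2):
        yield "refined", refine(original_prompt)  # 3. then refined variants
```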
Example Output:
Prompt 1: Success (attempts: 1, quality: 0.85) (used sanitized prompt)
Prompt 2: Success (attempts: 2, quality: 0.72) (used original prompt)
Prompt 3: Blocked by content policy after 3 attempts
Configuration Options:
# Reduce retry attempts if many prompts are blocked
MAX_RETRY_ATTEMPTS=2
# Disable boundary testing if too restrictive
BOUNDARY_PROMPT_SYSTEM_INSTRUCTION=Generate mild creative prompts suitable for general audiences...

If you see warnings about HuggingFace image analyzer failures:
HuggingFace image analyzer not available. Quality scores will be basic estimates.
Common Solutions:
- Install missing dependencies:
  pip install torch torchvision transformers accelerate
- Set your HuggingFace token:
  HUGGINGFACE_TOKEN=hf_your_token_here
- Use CPU-compatible models:
  # For systems without GPU
  pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
- Alternative: disable advanced analysis:
  ENABLE_CYCLIC_REGENERATION=false
The framework will work without HuggingFace, but quality assessment will be basic rather than AI-powered.
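To check availability up front before a long run, something like the following works (a sketch; the framework applies this fallback automatically):

```python
try:
    from imagebreak.models import HuggingFaceImageAnalyzer
    analyzer = HuggingFaceImageAnalyzer(model_name="Salesforce/blip2-opt-2.7b")
except Exception:
    analyzer = None  # fall back to the framework's basic quality estimates
```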
For large models or limited memory:
# Use smaller HuggingFace model
DEFAULT_HF_MODEL=Salesforce/blip2-opt-2.7b
# Reduce retry attempts
MAX_RETRY_ATTEMPTS=2

ImageBreak provides comprehensive analysis tools:
- Content Moderation Analysis: Integrates with AWS Rekognition and other moderation APIs
- Statistical Reports: Success rates, filter bypass rates, content categorization
- Export Formats: JSON, CSV, HTML reports
- Visualization: Charts and graphs for result analysis (via Streamlit UI)
This framework is built with safety in mind:
- Research Focus: Designed specifically for improving AI safety
- Ethical Guidelines: Built-in safeguards and ethical considerations
- Responsible Disclosure: Tools for reporting vulnerabilities responsibly
- Audit Trail: Comprehensive logging for accountability
We welcome contributions! Please see our Contributing Guidelines for details.
git clone https://github.com/yourusername/imagebreak.git
cd imagebreak
pip install -e .[dev]
pre-commit install

pytest tests/
pytest --cov=imagebreak tests/  # With coverage

Remember: This tool is for research purposes only. Please use responsibly and in accordance with all applicable laws and ethical guidelines.