Extended from: fastapi-vs-litestar-vs-django-bolt-vs-django-ninja-benchmarks
A comprehensive benchmark comparing 5 Python web frameworks with realistic workloads, Docker resource constraints, and automatic resource monitoring.
Hardware: MacBook M2 Pro, 32GB RAM
Paginated Articles List (20 items with authors + tags):

Single Article Detail (with author, tags, and comments):

✨ Production-ready benchmark setup:
- 🐳 Docker containerization with resource limits (500MB RAM, 1 CPU per framework)
- ⏱️ Sequential execution - One framework at a time for fair, isolated testing
- 🐘 PostgreSQL database instead of SQLite
- 📊 Complex nested data models (Articles with Authors, Tags, Comments)
- 📈 Automatic resource monitoring (CPU & Memory usage tracked)
- 🔄 Real-world query optimization (tests select_related/prefetch_related patterns)
- 🎯 Realistic data volume (500 articles, 2000 comments, relationships)
- FastAPI - ASGI framework with Pydantic + SQLAlchemy
- Litestar - High-performance ASGI framework
- Django Ninja - Django + Pydantic API framework
- Django Bolt - Rust-powered Django API framework
- Django REST Framework - Traditional Django REST API framework
| Endpoint | Description | Purpose |
|---|---|---|
| `/json-1k` | ~1KB JSON response | Tests JSON serialization (small) |
| `/json-10k` | ~10KB JSON response | Tests JSON serialization (large) |
| `/db` | 10 user reads | Tests simple database queries |
| `/articles?page=1&page_size=20` | Paginated articles with author + tags | Tests complex queries with relationships |
| `/articles/{id}` | Single article with author, tags, comments | Tests nested eager loading |
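The offset/limit arithmetic behind the paginated endpoint can be sketched as follows (the helper name and clamping values are illustrative, not the benchmark's actual code):

```python
# Offset/limit math behind /articles?page=1&page_size=20 (hypothetical helper;
# the real apps use each framework's own pagination utilities).

def paginate(page: int, page_size: int, max_page_size: int = 100) -> tuple[int, int]:
    """Clamp inputs and return (offset, limit) for the SQL query."""
    page = max(page, 1)
    page_size = min(max(page_size, 1), max_page_size)
    return (page - 1) * page_size, page_size

# A SQLAlchemy query would then apply .offset(offset).limit(limit);
# Django queryset slicing (qs[offset:offset + limit]) compiles to the same SQL.
offset, limit = paginate(page=3, page_size=20)
print(offset, limit)  # 40 20
```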
```
Author (50 records)
├── name, email, bio
└── articles (1-to-many)

Tag (100 records)
├── name, slug
└── articles (many-to-many)

Article (500 records)
├── title, content, published
├── author (ForeignKey)
├── tags (ManyToMany)
└── comments (1-to-many)

Comment (2000 records)
├── author_name, content
└── article (ForeignKey)

BenchmarkUser (10 records)
└── username, email, first_name, last_name
```
- Python 3.12+
- uv - Package manager
- Docker - For containerized benchmarks (recommended)
- bombardier - HTTP benchmarking tool
Install bombardier:
```bash
go install github.com/codesenberg/bombardier@latest
```

```bash
# 1. Setup (run once)
./setup.sh

# 2. Run benchmarks with Docker
./run_all.sh --docker
```

This will:
- Start PostgreSQL container
- Run frameworks sequentially (one at a time):
  - Start framework container
  - Run all benchmarks for that framework
  - Stop framework container
  - Move to next framework
- Each framework limited to 500MB RAM, 1 CPU
- Run benchmarks with resource monitoring
- Generate results + graphs
- Cleanup containers when done
Why sequential? Running one framework at a time eliminates resource contention and gives the most accurate, fair comparison.
```bash
# 1. Setup (run once)
./setup.sh

# 2. Run benchmarks locally
./run_all.sh
```

This runs frameworks directly on your machine (useful for development/debugging).

```bash
./setup.sh
```

This script will:
- Install Python dependencies with `uv`
- Start PostgreSQL container
- Run Django migrations (creates all tables)
- Seed database with realistic test data using Faker
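The shape of the seeding step can be sketched roughly like this (stdlib `random` stands in for Faker so the snippet runs anywhere; the field names and counts mirror the data model above, but this is not the real `seed_db.py`):

```python
# Sketch of seeding 50 authors, 100 tags, 500 articles, 2000 comments.
# The real script uses Faker for realistic names/text and writes to PostgreSQL.
import random

random.seed(42)

WORDS = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]

def fake_sentence(n: int = 6) -> str:
    return " ".join(random.choices(WORDS, k=n)).capitalize() + "."

def seed(n_authors=50, n_tags=100, n_articles=500, n_comments=2000):
    authors = [{"name": f"Author {i}", "email": f"author{i}@example.com"}
               for i in range(n_authors)]
    tags = [{"name": f"tag-{i}", "slug": f"tag-{i}"} for i in range(n_tags)]
    articles = [
        {
            "title": fake_sentence(4),
            "author": random.randrange(n_authors),       # ForeignKey
            "tags": random.sample(range(n_tags), k=3),   # ManyToMany
        }
        for _ in range(n_articles)
    ]
    comments = [
        {"article": random.randrange(n_articles), "content": fake_sentence()}
        for _ in range(n_comments)
    ]
    return authors, tags, articles, comments

a, t, ar, c = seed()
print(len(a), len(t), len(ar), len(c))  # 50 100 500 2000
```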
```bash
# Start all services and run benchmarks
./run_all.sh --docker

# Or manually:
docker compose up -d               # Start all containers
uv run python bench.py --docker    # Run benchmarks with resource monitoring
docker compose down                # Stop containers
```

```bash
# All-in-one script
./run_all.sh

# Or start servers manually in separate terminals:
./run_fastapi.sh     # Port 8001
./run_litestar.sh    # Port 8002
./run_ninja.sh       # Port 8003
./run_bolt.sh        # Port 8004
./run_drf.sh         # Port 8005

# Then run benchmarks:
uv run python bench.py
```

```bash
# Custom concurrency and duration
./run_all.sh --docker -c 200 -d 15 -r 5

# Benchmark specific frameworks only
uv run python bench.py --docker --frameworks fastapi litestar

# More options
uv run python bench.py --help
```

```
-c, --connections   Concurrent connections (default: 100)
-d, --duration      Duration per endpoint in seconds (default: 10)
-w, --warmup        Warmup requests (default: 1000)
-r, --runs          Runs per endpoint, best result taken (default: 3)
-o, --output        Output markdown file (default: BENCHMARK_RESULTS.md)
--frameworks        Frameworks to benchmark (default: all)
--docker            Monitor Docker containers instead of local processes
--sequential        Run frameworks one at a time (automatically enabled for Docker mode)
```
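How `-r/--runs` reduces several runs to one reported figure can be sketched like this (illustrative; `bench.py`'s actual parsing of bombardier's output may differ):

```python
# "Best of N runs": each bombardier invocation yields a requests/sec figure;
# the highest one is kept to minimize noise from background processes.
# (Hypothetical data shape, not bench.py's real structures.)

def best_run(runs: list[dict]) -> dict:
    """Pick the run with the highest requests/sec."""
    return max(runs, key=lambda r: r["reqs_per_sec"])

runs = [
    {"reqs_per_sec": 18210.4, "p99_ms": 9.1},
    {"reqs_per_sec": 19044.7, "p99_ms": 8.6},
    {"reqs_per_sec": 18877.2, "p99_ms": 8.8},
]
print(best_run(runs)["reqs_per_sec"])  # 19044.7
```

Taking the best rather than the mean is a common benchmarking choice: transient interference can only slow a run down, so the fastest run is closest to the framework's true ceiling.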
Sequential Mode:
- Automatically enabled when using `./run_all.sh --docker`
- Starts one framework, benchmarks it, stops it, then moves to the next
- Eliminates resource contention between frameworks
- Provides the fairest, most accurate comparison
- Recommended for production benchmarks
The benchmark automatically tracks for each run:
- Peak Memory (MB) - Maximum memory usage during the test
- Average Memory (MB) - Mean memory consumption
- Peak CPU (%) - Maximum CPU percentage
- Average CPU (%) - Mean CPU usage
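Reducing a stream of samples to these four numbers is straightforward; a minimal sketch (the benchmark itself polls Docker's stats endpoint, or the local process, while bombardier runs):

```python
# Peak/average reduction over periodic resource samples (illustrative helper;
# not bench.py's actual code). The same function works for MB or CPU %.

def summarize(samples: list[float]) -> dict[str, float]:
    """Reduce a series of memory (or CPU) samples to peak and average."""
    return {
        "peak": max(samples),
        "avg": round(sum(samples) / len(samples), 2),
    }

mem_mb = [100.0, 120.0, 160.0, 140.0]
print(summarize(mem_mb))  # {'peak': 160.0, 'avg': 130.0}
```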
Results are displayed in:
- Console output (real-time)
- Markdown tables (`BENCHMARK_RESULTS.md`)
- Resource usage graphs (`graphs/benchmark_resources.png`)
After running benchmarks, you'll get:
```
BENCHMARK_RESULTS.md               # Detailed results table
graphs/
├── benchmark_combined.png         # All endpoints comparison
├── benchmark_json_1k.png          # Per-endpoint graphs
├── benchmark_json_10k.png
├── benchmark_db.png
├── benchmark_articles_*.png
└── benchmark_resources.png        # CPU & Memory usage by framework
```
```bash
# Drop all tables, re-run migrations, and reseed
./scripts/reset_db.sh

# Seed database with fresh data
uv run python scripts/seed_db.py

# Connect to the database via Docker
docker compose exec postgres psql -U benchmark -d benchmark
```

```sql
-- Example queries
SELECT COUNT(*) FROM articles_article;
SELECT COUNT(*) FROM articles_comment;
```

Each framework container is limited to:
- Memory: 500MB
- CPU: 1.0 (100% of 1 core)
These limits ensure:
- Fair comparison across frameworks
- Realistic production-like constraints
- Prevention of resource monopolization
- Ability to identify memory leaks
| Service | Port | URL Prefix |
|---|---|---|
| PostgreSQL | 5445 | - |
| FastAPI | 8001 | - |
| Litestar | 8002 | - |
| Django Ninja | 8003 | /ninja |
| Django Bolt | 8004 | - |
| Django DRF | 8005 | /drf |
Note: PostgreSQL runs on port 5445 on the host (mapped from container's internal port 5432) to avoid conflicts with local PostgreSQL installations.
```
.
├── bench.py               # Benchmark runner with resource monitoring
├── docker-compose.yml     # Container orchestration
├── Dockerfile             # Single image for all frameworks
├── setup.sh               # One-time setup script
├── run_all.sh             # Run all benchmarks
├── config.env             # Database configuration
├── fastapi_app.py         # FastAPI implementation
├── litestar_app.py        # Litestar implementation
├── django_project/
│   ├── api.py             # Django Bolt implementation
│   ├── ninja_api.py       # Django Ninja implementation
│   ├── drf_api.py         # Django REST Framework implementation
│   ├── articles/          # Article models (Author, Tag, Article, Comment)
│   └── users/             # User models
├── shared/
│   ├── models.py          # SQLAlchemy models (for FastAPI/Litestar)
│   └── schemas.py         # Pydantic response schemas
└── scripts/
    ├── seed_db.py         # Generate realistic test data
    ├── reset_db.sh        # Reset and reseed database
    └── wait_for_db.sh     # PostgreSQL health check
```
Each framework demonstrates proper query optimization:
- Django (Bolt, Ninja, DRF):
  - `select_related()` for ForeignKey relationships
  - `prefetch_related()` for reverse FKs and ManyToMany
- FastAPI/Litestar (SQLAlchemy):
  - `selectinload()` for eager loading relationships
  - Async database queries with `asyncpg`
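Why this matters: without eager loading, serializing each article fires a separate query for its author (the classic N+1 problem). A toy query counter makes the difference concrete (plain Python for illustration, not any framework's API):

```python
# Contrast per-row lookups (N+1) with one batched fetch, mirroring what
# select_related/selectinload do under the hood.

class CountingDB:
    def __init__(self, authors: dict[int, str]):
        self.authors = authors
        self.queries = 0

    def get_author(self, author_id: int) -> str:
        self.queries += 1                 # one query per article: N+1
        return self.authors[author_id]

    def get_authors(self, author_ids: list[int]) -> dict[int, str]:
        self.queries += 1                 # single SELECT ... WHERE id IN (...)
        return {i: self.authors[i] for i in set(author_ids)}

authors = {i: f"Author {i}" for i in range(5)}
articles = [{"id": n, "author_id": n % 5} for n in range(20)]

# Lazy loading: 20 articles -> 20 author queries
lazy = CountingDB(authors)
_ = [lazy.get_author(a["author_id"]) for a in articles]
print(lazy.queries)   # 20

# Eager loading: one batched query, then in-memory joins
eager = CountingDB(authors)
batch = eager.get_authors([a["author_id"] for a in articles])
_ = [batch[a["author_id"]] for a in articles]
print(eager.queries)  # 1
```

The `/articles` endpoints exercise exactly this pattern, so a framework implementation that skips eager loading would pay 20+ extra round trips per request.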
```bash
# Check container logs
docker compose logs

# Rebuild containers
docker compose build --no-cache
```

```bash
# Ensure PostgreSQL is running
docker compose ps

# Check if port 5445 is available (mapped from container's 5432)
lsof -i :5445
```

```bash
# Local mode: Ensure processes started correctly
ps aux | grep -E "uvicorn|litestar|runserver"

# Docker mode: Ensure --docker flag is used
uv run python bench.py --docker
```

Feel free to:
- Add more frameworks
- Suggest additional endpoints
- Improve optimization patterns
- Report issues or discrepancies
MIT
Note: Benchmark results are environment-specific. Your results may vary based on hardware, OS, and background processes. Use this as a comparative tool, not absolute performance metrics.
Inspired by python-api-frameworks-benchmark by tanrax.

