huynguyengl99/python-api-frameworks-benchmark

Python API Framework Benchmark

Extended from: fastapi-vs-litestar-vs-django-bolt-vs-django-ninja-benchmarks

A comprehensive benchmark comparing 5 Python web frameworks with realistic workloads, Docker resource constraints, and automatic resource monitoring.

Hardware: MacBook M2 Pro, 32GB RAM

Framework Benchmark - All Endpoints

(graph: all-endpoints comparison, graphs/benchmark_combined.png)

Resource Usage - Memory and CPU

(graph: resource usage, graphs/benchmark_resources.png)

Database Performance (Real-World Workload)

Paginated Articles List (20 items with authors + tags):

(graph: paginated articles benchmark)

Single Article Detail (with author, tags, and comments):

(graph: single article benchmark)

What's New

Production-ready benchmark setup:

  • 🐳 Docker containerization with resource limits (500MB RAM, 1 CPU per framework)
  • ⏱️ Sequential execution - One framework at a time for fair, isolated testing
  • 🐘 PostgreSQL database instead of SQLite
  • 📊 Complex nested data models (Articles with Authors, Tags, Comments)
  • 📈 Automatic resource monitoring (CPU & Memory usage tracked)
  • 🔄 Real-world query optimization (tests select_related/prefetch_related patterns)
  • 🎯 Realistic data volume (500 articles, 2000 comments, relationships)

Frameworks Compared

  • FastAPI (SQLAlchemy + asyncpg)
  • Litestar (SQLAlchemy + asyncpg)
  • Django Ninja
  • Django Bolt
  • Django REST Framework (DRF)

Benchmark Endpoints

| Endpoint | Description | Purpose |
| --- | --- | --- |
| /json-1k | ~1KB JSON response | Tests JSON serialization (small) |
| /json-10k | ~10KB JSON response | Tests JSON serialization (large) |
| /db | 10 user reads | Tests simple database queries |
| /articles?page=1&page_size=20 | Paginated articles with author + tags | Tests complex queries with relationships |
| /articles/{id} | Single article with author, tags, comments | Tests nested eager loading |
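For illustration, here is how page and page_size typically map to an OFFSET/LIMIT pair for the paginated endpoint (the helper name `page_to_slice` is ours, not from the repository):

```python
def page_to_slice(page: int, page_size: int) -> tuple[int, int]:
    """Return (offset, limit) for a 1-indexed page, as used in a
    LIMIT/OFFSET query behind /articles?page=...&page_size=...."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    return (page - 1) * page_size, page_size

# Page 3 with 20 items per page skips the first 40 articles.
offset, limit = page_to_slice(page=3, page_size=20)
```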

Data Model

Author (50 records)
  ├── name, email, bio
  └── articles (1-to-many)

Tag (100 records)
  ├── name, slug
  └── articles (many-to-many)

Article (500 records)
  ├── title, content, published
  ├── author (ForeignKey)
  ├── tags (ManyToMany)
  └── comments (1-to-many)

Comment (2000 records)
  ├── author_name, content
  └── article (ForeignKey)

BenchmarkUser (10 records)
  └── username, email, first_name, last_name
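The nested payload these records produce can be sketched with plain dataclasses (the real response schemas are the Pydantic models in shared/schemas.py; the field subset here is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class AuthorOut:
    name: str
    email: str

@dataclass
class CommentOut:
    author_name: str
    content: str

@dataclass
class ArticleOut:
    # Single article detail: author, tags, and comments nested inline.
    title: str
    author: AuthorOut
    tags: list[str] = field(default_factory=list)
    comments: list[CommentOut] = field(default_factory=list)

article = ArticleOut(
    title="Benchmarking Python APIs",
    author=AuthorOut(name="Ada", email="ada@example.com"),
    tags=["python", "benchmarks"],
)
```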

Requirements

  • Python 3.12+
  • uv - Package manager
  • Docker - For containerized benchmarks (recommended)
  • bombardier - HTTP benchmarking tool

Install bombardier:

go install github.com/codesenberg/bombardier@latest

Quick Start

Option 1: Docker Mode (Recommended)

# 1. Setup (run once)
./setup.sh

# 2. Run benchmarks with Docker
./run_all.sh --docker

This will:

  • Start PostgreSQL container
  • Run frameworks sequentially (one at a time):
    • Start framework container
    • Run all benchmarks for that framework
    • Stop framework container
    • Move to next framework
  • Limit each framework to 500MB RAM and 1 CPU
  • Run benchmarks with resource monitoring
  • Generate results + graphs
  • Cleanup containers when done

Why sequential? Running one framework at a time eliminates resource contention and gives the most accurate, fair comparison.
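The sequential loop can be sketched in a few lines of Python (container start/stop is stubbed out here; the real orchestration lives in run_all.sh and bench.py):

```python
FRAMEWORKS = ["fastapi", "litestar", "ninja", "bolt", "drf"]

def run_sequentially(benchmark):
    """Benchmark each framework in isolation, one at a time."""
    results = {}
    for fw in FRAMEWORKS:
        # docker compose up -d <fw>   (stubbed)
        results[fw] = benchmark(fw)
        # docker compose stop <fw>    (stubbed)
    return results

order = []
results = run_sequentially(lambda fw: order.append(fw) or f"{fw}-done")
```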

Option 2: Local Development Mode

# 1. Setup (run once)
./setup.sh

# 2. Run benchmarks locally
./run_all.sh

This runs frameworks directly on your machine (useful for development/debugging).

Detailed Setup Instructions

1. Initial Setup

./setup.sh

This script will:

  1. Install Python dependencies with uv
  2. Start PostgreSQL container
  3. Run Django migrations (creates all tables)
  4. Seed database with realistic test data using Faker

2. Running Benchmarks

Docker Mode (Fair Comparison)

# Start all services and run benchmarks
./run_all.sh --docker

# Or manually:
docker compose up -d              # Start all containers
uv run python bench.py --docker   # Run benchmarks with resource monitoring
docker compose down               # Stop containers

Local Mode (Development)

# All-in-one script
./run_all.sh

# Or start servers manually in separate terminals:
./run_fastapi.sh   # Port 8001
./run_litestar.sh  # Port 8002
./run_ninja.sh     # Port 8003
./run_bolt.sh      # Port 8004
./run_drf.sh       # Port 8005

# Then run benchmarks:
uv run python bench.py

3. Customizing Benchmarks

# Custom concurrency and duration
./run_all.sh --docker -c 200 -d 15 -r 5

# Benchmark specific frameworks only
uv run python bench.py --docker --frameworks fastapi litestar

# More options
uv run python bench.py --help

Benchmark Options

-c, --connections  Concurrent connections (default: 100)
-d, --duration     Duration per endpoint in seconds (default: 10)
-w, --warmup       Warmup requests (default: 1000)
-r, --runs         Runs per endpoint, best result taken (default: 3)
-o, --output       Output markdown file (default: BENCHMARK_RESULTS.md)
--frameworks       Frameworks to benchmark (default: all)
--docker           Monitor Docker containers instead of local processes
--sequential       Run frameworks one at a time (automatically enabled for Docker mode)

Sequential Mode:

  • Automatically enabled when using ./run_all.sh --docker
  • Starts one framework, benchmarks it, stops it, then moves to the next
  • Eliminates resource contention between frameworks
  • Provides the fairest, most accurate comparison
  • Recommended for production benchmarks
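The documented flags can be mirrored with a minimal argparse setup (the real bench.py may define them differently; this is a sketch of the documented interface):

```python
import argparse

parser = argparse.ArgumentParser(description="Benchmark runner flags (sketch)")
parser.add_argument("-c", "--connections", type=int, default=100)
parser.add_argument("-d", "--duration", type=int, default=10)
parser.add_argument("-w", "--warmup", type=int, default=1000)
parser.add_argument("-r", "--runs", type=int, default=3)
parser.add_argument("-o", "--output", default="BENCHMARK_RESULTS.md")
parser.add_argument("--frameworks", nargs="*", default=None)
parser.add_argument("--docker", action="store_true")
parser.add_argument("--sequential", action="store_true")

# Equivalent of: uv run python bench.py --docker -c 200
args = parser.parse_args(["--docker", "-c", "200"])
```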

Resource Monitoring

For each run, the benchmark automatically tracks:

  • Peak Memory (MB) - Maximum memory usage during the test
  • Average Memory (MB) - Mean memory consumption
  • Peak CPU (%) - Maximum CPU percentage
  • Average CPU (%) - Mean CPU usage

Results are displayed in:

  • Console output (real-time)
  • Markdown tables (BENCHMARK_RESULTS.md)
  • Resource usage graphs (graphs/benchmark_resources.png)
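Peak and average figures fall out of periodic samples; a minimal sketch with illustrative numbers (the real bench.py sampling may differ):

```python
# Memory samples (MB) collected at intervals during one benchmark run.
samples_mb = [112.4, 130.9, 128.1, 145.6, 141.2]

peak_mb = max(samples_mb)                  # Peak Memory (MB)
avg_mb = sum(samples_mb) / len(samples_mb) # Average Memory (MB)
```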

Output Files

After running benchmarks, you'll get:

BENCHMARK_RESULTS.md           # Detailed results table
graphs/
  ├── benchmark_combined.png   # All endpoints comparison
  ├── benchmark_json_1k.png    # Per-endpoint graphs
  ├── benchmark_json_10k.png
  ├── benchmark_db.png
  ├── benchmark_articles_*.png
  └── benchmark_resources.png  # CPU & Memory usage by framework

Database Management

Reset Database

# Drop all tables, re-run migrations, and reseed
./scripts/reset_db.sh

Manual Seeding

# Seed database with fresh data
uv run python scripts/seed_db.py

Access PostgreSQL

# Via Docker
docker compose exec postgres psql -U benchmark -d benchmark

# Example queries
SELECT COUNT(*) FROM articles_article;
SELECT COUNT(*) FROM articles_comment;

Docker Resource Constraints

Each framework container is limited to:

  • Memory: 500MB
  • CPU: 1.0 (100% of 1 core)

These limits ensure:

  • Fair comparison across frameworks
  • Realistic production-like constraints
  • Prevention of resource monopolization
  • Ability to identify memory leaks

Server Ports

| Service | Port | URL Prefix |
| --- | --- | --- |
| PostgreSQL | 5445 | - |
| FastAPI | 8001 | - |
| Litestar | 8002 | - |
| Django Ninja | 8003 | /ninja |
| Django Bolt | 8004 | - |
| Django DRF | 8005 | /drf |

Note: PostgreSQL runs on port 5445 on the host (mapped from container's internal port 5432) to avoid conflicts with local PostgreSQL installations.
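For reference, a host-side connection string would look like this (credentials mirror the psql example below; the actual values live in config.env):

```python
# Host connects via the mapped port 5445, not the container's 5432.
container_port = 5432
host_port = 5445
dsn = f"postgresql://benchmark@localhost:{host_port}/benchmark"
```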

Project Structure

.
├── bench.py                    # Benchmark runner with resource monitoring
├── docker-compose.yml          # Container orchestration
├── Dockerfile                  # Single image for all frameworks
├── setup.sh                    # One-time setup script
├── run_all.sh                  # Run all benchmarks
├── config.env                  # Database configuration
├── fastapi_app.py              # FastAPI implementation
├── litestar_app.py             # Litestar implementation
├── django_project/
│   ├── api.py                  # Django Bolt implementation
│   ├── ninja_api.py            # Django Ninja implementation
│   ├── drf_api.py              # Django REST Framework implementation
│   ├── articles/               # Article models (Author, Tag, Article, Comment)
│   └── users/                  # User models
├── shared/
│   ├── models.py               # SQLAlchemy models (for FastAPI/Litestar)
│   └── schemas.py              # Pydantic response schemas
└── scripts/
    ├── seed_db.py              # Generate realistic test data
    ├── reset_db.sh             # Reset and reseed database
    └── wait_for_db.sh          # PostgreSQL health check

Optimization Techniques Tested

Each framework demonstrates proper query optimization:

  • Django (Bolt, Ninja, DRF):

    • select_related() for ForeignKey relationships
    • prefetch_related() for reverse FKs and ManyToMany
  • FastAPI/Litestar (SQLAlchemy):

    • selectinload() for eager loading relationships
    • Async database queries with asyncpg
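The payoff of these patterns is easy to see in query counts; a back-of-envelope sketch (helper names are ours, not the repository's):

```python
def naive_query_count(n_articles: int) -> int:
    """N+1 pattern: 1 query for the article page,
    plus 1 per article to fetch its author lazily."""
    return 1 + n_articles

def eager_query_count() -> int:
    """select_related joins authors into the page query (1 query);
    prefetch_related fetches all tags for the page in one batch (1 query)."""
    return 2

# For the benchmark's 20-article page: 21 queries naively vs. 2 eagerly.
naive = naive_query_count(20)
eager = eager_query_count()
```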

Troubleshooting

Docker containers won't start

# Check container logs
docker compose logs

# Rebuild containers
docker compose build --no-cache

PostgreSQL connection errors

# Ensure PostgreSQL is running
docker compose ps

# Check if port 5445 is available (mapped from container's 5432)
lsof -i :5445

Resource monitoring shows 0 values

# Local mode: Ensure processes started correctly
ps aux | grep -E "uvicorn|litestar|runserver"

# Docker mode: Ensure --docker flag is used
uv run python bench.py --docker

Contributing

Feel free to:

  • Add more frameworks
  • Suggest additional endpoints
  • Improve optimization patterns
  • Report issues or discrepancies

License

MIT


Note: Benchmark results are environment-specific. Your numbers will vary with hardware, OS, and background processes. Treat this as a comparative tool, not as a source of absolute performance figures.

Acknowledgments

Inspired by python-api-frameworks-benchmark by tanrax.
