A fast, Pythonic HTTP client built with PyO3 and Rust reqwest
RequestX is a high-performance Python HTTP client library designed to be API-compatible with httpx. Built with PyO3 and the Rust reqwest library, it provides:
- 2.5x faster than the popular `requests` library
- Connection pooling for high-throughput scenarios
- Full httpx API compatibility for easy migration
- Minimal memory footprint (~0.1 MB)
```python
import requestx as rx

# Simple GET request
response = rx.get("https://api.github.com/users/octocat")
print(response.status_code)
print(response.json())

# With Client (recommended for multiple requests)
with rx.Client() as client:
    response = client.get("https://httpbin.org/get")
    data = response.json()
    print(data["url"])
```

Install from PyPI:

```shell
pip install requestx
```

Or from source:
```shell
git clone https://github.com/yourusername/requestx.git
cd requestx
pip install maturin
maturin develop
```

RequestX supports the following environment variables for configuration. These are automatically detected when `trust_env=True` (the default).
| Variable | Description | Example |
|---|---|---|
| `HTTP_PROXY` | Proxy URL for HTTP requests | `http://proxy.example.com:8080` |
| `http_proxy` | Lowercase variant (also supported) | `http://proxy.example.com:8080` |
| `HTTPS_PROXY` | Proxy URL for HTTPS requests | `https://proxy.example.com:8080` |
| `https_proxy` | Lowercase variant (also supported) | `https://proxy.example.com:8080` |
| `ALL_PROXY` | Proxy URL for all requests (fallback) | `http://proxy.example.com:8080` |
| `all_proxy` | Lowercase variant (also supported) | `http://proxy.example.com:8080` |
| `NO_PROXY` | Comma-separated hosts to bypass the proxy | `localhost,127.0.0.1,.local` |
| `no_proxy` | Lowercase variant (also supported) | `localhost,127.0.0.1,.local` |
Example:

```shell
export HTTP_PROXY=http://proxy.corporate.com:8080
export HTTPS_PROXY=https://proxy.corporate.com:8080
export NO_PROXY=localhost,127.0.0.1,.internal
python my_script.py  # Uses the proxy for external requests
```

| Variable | Description | Example |
|---|---|---|
| `SSL_CERT_FILE` | Path to a custom CA certificate bundle | `/path/to/ca-bundle.crt` |
| `SSL_CERT_DIR` | Directory containing CA certificates | `/path/to/ca-certs/` |
| `SSL_CERT_PASSWORD_FILE` | File containing the SSL certificate password | `/path/to/cert-password.txt` |
Example:

```shell
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
export SSL_CERT_DIR=/etc/ssl/certs/
```

| Variable | Description | Example |
|---|---|---|
| `HTTP_AUTHORIZATION` | Default Authorization header value | `Bearer my-token` |
| Variable | Description | Default | Example |
|---|---|---|---|
| `HTTPBIN_HOST` | Override default httpbin host | `httpbin.org` | `http://localhost:8080` |
| `HTTPBIN_PORT` | Override default httpbin port | `443` (or `80`) | `8080` |
| `HTTPBIN_USE_EXTERNAL` | Use external httpbin.org instead of a local instance | `false` | `true` |
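For local test runs, these variables combine into a base URL roughly as follows. The precedence shown is an assumption for illustration; it is not necessarily the exact logic in RequestX's test suite:

```python
import os

def httpbin_base_url(env=os.environ):
    """Resolve the httpbin endpoint from HTTPBIN_* variables."""
    if env.get("HTTPBIN_USE_EXTERNAL", "false").lower() == "true":
        return "https://httpbin.org"
    host = env.get("HTTPBIN_HOST", "https://httpbin.org")
    port = env.get("HTTPBIN_PORT")
    return f"{host}:{port}" if port else host
```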
Example:

```shell
export HTTPBIN_HOST=http://localhost
export HTTPBIN_PORT=8080
```

```python
import requestx as rx

# Environment variables are automatically used when trust_env=True (default)
client = rx.Client(trust_env=True)

# This will use HTTP_PROXY / HTTPS_PROXY if set
response = client.get("https://api.github.com/users/octocat")
```
```python
import requestx as rx

# Create a client with custom settings
client = rx.Client(
    base_url="https://api.example.com",          # Base URL for relative requests
    auth=("username", "password"),               # Basic-auth tuple or rx.BasicAuth
    params={"key": "value"},                     # Default query parameters
    headers={"X-Custom": "header"},              # Default headers
    timeout=rx.Timeout(connect=5.0, read=30.0), # Timeout configuration
    verify=True,                                 # SSL verification
    proxy="http://proxy:8080",                   # Proxy URL
    limits=rx.Limits(max_connections=100),       # Connection pool limits
    trust_env=True,                              # Use environment variables
)

# Context manager support
with rx.Client() as client:
    response = client.get("/users")
    print(response.json())
```

All HTTP methods are supported:
```python
# GET
response = client.get(url, params={"key": "value"}, headers={"X-Header": "value"})

# POST
response = client.post(url, json={"key": "value"})
response = client.post(url, data={"key": "value"})   # Form data
response = client.post(url, content=b"raw bytes")    # Raw body

# PUT, PATCH, DELETE, OPTIONS, HEAD
response = client.put(url, json={"update": "data"})
response = client.patch(url, data={"partial": "update"})
response = client.delete(url)
response = client.options(url)
response = client.head(url)

# Generic request method
response = client.request("METHOD", url, **kwargs)
```
```python
response = client.get("https://httpbin.org/get")

# Status
print(response.status_code)   # 200
print(response.is_success)    # True
print(response.is_redirect)   # False
print(response.is_error)      # False

# Content
print(response.content)       # b'{"...": "..."}'
print(response.text)          # '{"...": "..."}'
data = response.json()        # Parsed JSON

# Headers
print(response.headers)       # [("Content-Type", "application/json"), ...]
print(dict(response.headers)) # {"content-type": "application/json"}

# Timing
print(response.elapsed)       # 0.123 seconds

# Other
print(response.url)           # "https://httpbin.org/get"
response.raise_for_status()   # Raises on 4xx/5xx
```
```python
timeout = rx.Timeout(
    connect=5.0,  # Connection timeout
    read=30.0,    # Read timeout
    write=15.0,   # Write timeout
    pool=10.0,    # Pool timeout
    total=60.0,   # Total timeout (overrides the others if set)
)

# Or shorthand
timeout = rx.Timeout(timeout=30.0)  # Total timeout
```
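One way to read the `total` override: when set, it caps the whole request regardless of the per-phase values. A sketch of that effective-deadline arithmetic (illustrative, not RequestX internals):

```python
def effective_budget(connect=None, read=None, write=None, total=None):
    """Worst-case time budget for a request under these timeout settings."""
    if total is not None:
        return total  # total overrides the per-phase timeouts
    phases = [t for t in (connect, read, write) if t is not None]
    return sum(phases) if phases else None  # None means "no timeout"
```

With the settings above, `effective_budget(connect=5.0, read=30.0, write=15.0, total=60.0)` is capped at 60 seconds even though the phases sum to 50.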
```python
limits = rx.Limits(
    max_connections=100,           # Max concurrent connections
    max_keepalive_connections=20,  # Max idle connections
    keepalive_expiry=30.0,         # Idle connection timeout
)
```

The Config class provides comprehensive performance-tuning options for high-throughput scenarios. It includes connection-pool settings, HTTP/2 configuration (for future async support), and Tokio runtime settings.
```python
config = rx.Config(
    # Connection pool settings
    pool_idle_timeout_secs=300,   # How long (seconds) to keep idle connections alive (default: 300)
    pool_max_idle_per_host=1024,  # Max idle connections per host (default: 1024)

    # HTTP/2 settings (requires the async client - planned for a future release)
    http2_keep_alive_interval_secs=30,              # Interval between HTTP/2 ping frames (default: 30)
    http2_keep_alive_timeout_secs=10,               # Timeout waiting for a ping response (default: 10)
    http2_initial_stream_window_size=1048576,       # Flow-control window per stream in bytes (default: 1048576)
    http2_initial_connection_window_size=16777216,  # Flow-control window per connection in bytes (default: 16777216)

    # Runtime settings (Tokio) - for future async support
    worker_threads=8,               # Number of worker threads (default: 8)
    max_blocking_threads=512,       # Max threads for blocking operations (default: 512)
    thread_name="requestx-worker",  # Prefix for thread names (default: "requestx-worker")
    thread_stack_size=1048576,      # Stack size per thread in bytes (default: 1048576)
)

# Use with Client
client = rx.Client(config=config)
```

Connection Pool Settings:
- `pool_idle_timeout_secs`: Controls how long idle connections are kept alive before being closed. Higher values reduce connection overhead but may waste resources on unused connections.
- `pool_max_idle_per_host`: Limits the number of idle connections per host. For high-traffic APIs, increase this to avoid connection churn.
HTTP/2 Settings (planned for async client):
- These settings optimize HTTP/2 connection behavior when the async client is released.
- `http2_keep_alive_interval_secs`: How often to send ping frames to keep connections alive.
- `http2_keep_alive_timeout_secs`: How long to wait for a ping response before considering the connection dead.
- Flow-control windows: Adjust for high-latency or high-bandwidth scenarios.
Runtime Settings (for future Tokio integration):
- These settings configure the Tokio runtime when async support is added.
- `worker_threads`: Number of async worker threads (CPU-bound workloads benefit from matching CPU cores).
- `max_blocking_threads`: Maximum threads for blocking operations (file I/O, database calls).
- `thread_name`: Prefix for debugging thread names.
- `thread_stack_size`: Stack size per thread (increase for deep recursion or large stack frames).
Performance Recommendations:
Example configurations for different scenarios:
```python
# High-throughput API client
config = rx.Config(
    pool_idle_timeout_secs=600,   # Keep connections alive longer
    pool_max_idle_per_host=2048,  # More idle connections per host
)

# Low-latency trading system
config = rx.Config(
    pool_idle_timeout_secs=30,    # Quick cleanup of unused connections
    pool_max_idle_per_host=512,   # Focus on recent connections
)

# Memory-constrained environment
config = rx.Config(
    pool_idle_timeout_secs=60,    # Shorter idle timeout
    pool_max_idle_per_host=128,   # Fewer idle connections
)
```
```python
# Basic Auth
auth = rx.BasicAuth("username", "password")

# Bearer Token
auth = rx.BearerAuth("your-jwt-token")

# Tuple (auto-converted to BasicAuth)
client = rx.Client(auth=("username", "password"))
```
```python
# Use the stream() context manager for large responses
with client.stream("GET", url) as response:
    for chunk in response.iter_bytes():
        process(chunk)
    for line in response.iter_lines():
        process_line(line)
    for text_chunk in response.iter_text():
        process_text(text_chunk)
```
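Under the hood, line iteration over a streamed body amounts to buffering byte chunks and splitting on newlines. A standalone sketch of that pattern (not RequestX's actual implementation) with a stubbed chunk source standing in for `response.iter_bytes()`:

```python
def iter_lines_from(chunks):
    """Yield decoded lines from an iterable of byte chunks."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield line.decode("utf-8")
    if buffer:  # trailing partial line without a newline
        yield buffer.decode("utf-8")

# Stub chunk source; lines may span chunk boundaries
chunks = [b"alpha\nbe", b"ta\ngam", b"ma"]
print(list(iter_lines_from(chunks)))  # ['alpha', 'beta', 'gamma']
```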
```python
import requestx as rx

try:
    response = client.get("https://httpbin.org/status/404")
    # This will NOT raise - the response is returned
    print(response.status_code)  # 404
except rx.RequestXException as e:
    # Network error (DNS failure, connection refused, etc.)
    print(f"Network error: {e}")

# Explicit error raising
response = client.get("https://httpbin.org/status/500")
response.raise_for_status()  # Raises HTTPStatusError
```
```
RequestXException (base)
├── ConnectError, ReadTimeout, WriteTimeout, PoolTimeout
├── NetworkError
├── ProtocolError (LocalProtocolError, RemoteProtocolError)
├── DecodingError, TooManyRedirects
├── HTTPStatusError
├── InvalidURL, InvalidHeader, CookieConflict
└── StreamError
```
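Because every error type descends from RequestXException, except-clause ordering matters: catch the specific class before the base. A minimal mirror of the hierarchy in plain Python classes to show the pattern (class names from the tree above; the bodies and the `classify` helper are illustrative, not library code):

```python
class RequestXException(Exception): ...
class ConnectError(RequestXException): ...
class HTTPStatusError(RequestXException): ...

def classify(exc):
    """Demonstrate except-clause ordering: most specific first."""
    try:
        raise exc
    except ConnectError:
        return "retry with backoff"    # transport-level failure
    except HTTPStatusError:
        return "inspect status code"   # server answered with 4xx/5xx
    except RequestXException:
        return "generic client error"  # base class catches the rest

print(classify(ConnectError()))     # retry with backoff
print(classify(HTTPStatusError()))  # inspect status code
```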
```python
# Build a request without sending it
request = client.build_request("POST", "/submit", json={"data": "value"})

# Then send it later
response = client.send(request)
```

By default, redirects are NOT followed (httpx-compatible):

```python
client = rx.Client(follow_redirects=True)  # Enable redirect following
```
```python
# Using the Cookies class
cookies = rx.Cookies()
cookies.set("session_id", "abc123")
cookies.set("user_id", "42", domain="example.com")

client = rx.Client(cookies=cookies)
response = client.get("https://example.com")

# Read cookies from the response
print(dict(response.cookies))
```
```python
# String proxy
client = rx.Client(proxy="http://proxy:8080")

# Or from the environment (automatic with trust_env=True)
client = rx.Client(trust_env=True)
```

RequestX is optimized for high-throughput scenarios:
- Connection pooling: Reuses connections across requests
- Minimal allocations: Pre-allocated vectors, direct byte reading
- Rust core: Zero-copy where possible, efficient memory handling
```
RequestX: 910 RPS (2.59x faster than requests)
httpx:    611 RPS
requests: 352 RPS (baseline)

CPU efficiency: 108 RPS/CPU%
Memory usage:   0.10 MB (minimal footprint)
```
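Throughput figures like these come from timing repeated requests. A minimal RPS-measurement sketch with a stubbed request function standing in for a real client call (the stub workload is made up for illustration; in practice you would pass e.g. `lambda: client.get(url)`):

```python
import time

def measure_rps(do_request, n=1000):
    """Issue n sequential requests and return requests per second."""
    start = time.perf_counter()
    for _ in range(n):
        do_request()
    elapsed = time.perf_counter() - start
    return n / elapsed

# Stub standing in for a real request
rps = measure_rps(lambda: sum(range(100)), n=10_000)
print(f"{rps:.0f} requests/second (stub workload)")
```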
RequestX is designed to be a drop-in replacement for httpx:
```python
# httpx
import httpx
response = httpx.get("https://example.com")

# RequestX
import requestx as rx
response = rx.get("https://example.com")  # Same API!
```

| Feature | httpx | RequestX |
|---|---|---|
| Async support | Yes | No (blocking only) |
| HTTP/2 | Yes | No (planned) |
| Default redirects | Not followed | Not followed |
```shell
# Run all tests
python -m unittest discover -s tests -v

# Run unit tests only (no HTTP calls)
python -m unittest discover -s tests/unit_test -v

# Run integration tests (requires httpbin)
python -m unittest discover -s tests/integration_test -v
```
```shell
# Setup
git clone https://github.com/yourusername/requestx.git
cd requestx
uv sync

# Development build
uv run maturin develop

# Run tests
uv run python -m unittest discover -s tests -v

# Lint and format
uv run black .
uv run ruff check .
uv run cargo fmt
uv run cargo clippy
```

MIT License - see the LICENSE file for details.
Contributions are welcome! Please read the CONTRIBUTING.md file for guidelines.
See CHANGELOG.md for version history and breaking changes.