A local-first AI-powered load testing tool that accepts natural language commands to perform API load testing. The system uses AI models to parse user prompts and convert them into structured load test specifications that can be executed using K6.
StressMaster supports multiple AI providers for natural language parsing:
- Claude - Claude 3 models via direct API or OpenRouter
- OpenRouter - Access to multiple AI models through OpenRouter
- OpenAI - GPT-3.5, GPT-4, and other OpenAI models
- Google Gemini - Gemini Pro and other Google AI models
- Natural Language Interface: Describe load tests in plain English
- Multiple Test Types: Spike, stress, endurance, volume, and baseline testing
- K6 Integration: Generates and executes K6 scripts automatically
- Real-time Monitoring: Live progress tracking and metrics
- Comprehensive Reporting: Detailed analysis with AI-powered recommendations
- Export Formats: JSON, CSV, and HTML export capabilities
- Cloud AI Integration: Supports multiple cloud AI providers (Claude, OpenAI, Gemini, OpenRouter)
- No Local AI Required: Uses cloud-based AI models for natural language parsing
```
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    User Input    │────▶│    AI Parser     │────▶│   K6 Generator   │
│  (Natural Lang.) │     │    (AI Model)    │     │                  │
└──────────────────┘     └──────────────────┘     └──────────────────┘
                                                           │
                                                           ▼
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    Results &     │◀────│  Test Executor   │◀────│    Load Test     │
│ Recommendations  │     │      (K6)        │     │   Orchestrator   │
└──────────────────┘     └──────────────────┘     └──────────────────┘
```
- Node.js: Version 18.0 or higher
- npm: Version 9.0 or higher
- K6: Installed and available in PATH (for load test execution)
- Internet Access: Required for AI provider API calls
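A quick way to confirm these tools are in place before running anything (a sketch for POSIX shells; it only checks PATH visibility, not the version numbers listed above):

```shell
# Report whether each required tool is visible on PATH
# (see the version requirements listed above)
for tool in node npm k6; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

Run `node --version`, `npm --version`, and `k6 version` afterwards to verify the versions themselves.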
```shell
# Install from npm
npm install -g stressmaster

# Verify installation
stressmaster --version
```

```shell
# Clone the repository
git clone https://github.com/mumzworld-tech/StressMaster.git
cd StressMaster

# Install dependencies
npm install

# Build the project
npm run build

# Install globally
npm install -g .
```

To test StressMaster locally in another project before publishing:

```shell
# In StressMaster directory
npm install
npm run build
npm link

# In your test project directory
npm link stressmaster

# Now use StressMaster in your project
stressmaster --version
```

📖 See LOCAL_TESTING_GUIDE.md for detailed instructions on:
- Testing StressMaster locally using `npm link`
- Using `npm pack` for production-like testing
- Verifying file resolution in other projects
- Configuration when testing locally
After installation, you can immediately start using StressMaster:
```shell
# Interactive mode
stressmaster

# Direct command
stressmaster "send 10 GET requests to https://httpbin.org/get"

# Spike test
stressmaster "spike test with 50 requests in 30 seconds to https://api.example.com"

# Export results
stressmaster export html
```

Try your first load test:

```shell
stressmaster "Send 100 GET requests to https://httpbin.org/get over 30 seconds"
```

✅ Yes! StressMaster fully supports testing localhost APIs. You can test your local backend applications directly:
```shell
# Test your local API
stressmaster "send 100 POST requests to http://localhost:3000/api/v1/users"

# Test with different ports
stressmaster "spike test with 50 requests to http://localhost:8080/api/products"

# Test with headers and payload
stressmaster "send 10 POST requests to http://localhost:5000/api/orders with header Authorization Bearer token123 and JSON body @payload.json"
```

Key Points:

- ✅ Works with `http://localhost` or `http://127.0.0.1`
- ✅ Supports any port (e.g., `:3000`, `:8080`, `:5000`)
- ✅ Works with local API development servers
- ✅ No special configuration needed - just use the localhost URL
Example: Testing Your Local Backend
```shell
# Start your local API server (e.g., Express, FastAPI, etc.)
# Then run StressMaster:
stressmaster "send 50 GET requests to http://localhost:3000/api/v1/users"
stressmaster "POST 20 requests to http://localhost:3000/api/v1/orders with JSON body @order-data.json increment orderId"
```

After installation, run the interactive setup wizard to configure everything automatically:

```shell
stressmaster setup
```

This wizard will:
- ✅ Guide you through choosing your AI provider (Claude, OpenAI, Gemini, OpenRouter)
- ✅ Prompt for API keys and configuration
- ✅ Create the `config/ai-config.json` file automatically
- ✅ Optionally create a `.env` file for environment variables
- ✅ Show you next steps
That's it! The setup wizard handles all the configuration for you.
Important: When StressMaster is installed as an npm package, all configuration is stored in your project directory (where you run the command), not in StressMaster's installation directory.
StressMaster loads configuration in this priority order:
1. Environment Variables (highest priority)
2. Config File (`.stressmaster/config/ai-config.json` in your project)
3. package.json (in a `stressmaster` section)
4. Defaults (lowest priority)
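The effect of that order can be sketched with plain shell parameter expansion (illustrative only, not StressMaster's actual lookup code; the `openai` and `claude` values are stand-ins for a config-file entry and the built-in default):

```shell
# Illustrative precedence: environment variable > config file > default
config_file_value="openai"   # stand-in for the value read from ai-config.json
default_value="claude"       # stand-in for the built-in default
provider="${AI_PROVIDER:-${config_file_value:-$default_value}}"
echo "Effective provider: $provider"
```

With `AI_PROVIDER` unset this prints the config-file value; exporting `AI_PROVIDER=gemini` first makes the environment win.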
Create a .env file in your project directory:
```shell
# In your project directory (e.g., /path/to/your/project/.env)
AI_PROVIDER=claude
AI_API_KEY=your-api-key-here
AI_MODEL=claude-3-5-sonnet-20241022

# Or for OpenAI
AI_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
AI_MODEL=gpt-3.5-turbo
```

Then load it (if you're using a tool like dotenv):

```shell
# Your project can load .env automatically, or use:
export $(cat .env | xargs)
# Note: this one-liner breaks on comments and quoted values;
# `set -a; . ./.env; set +a` is a more robust alternative
```

Create `.stressmaster/config/ai-config.json` in your project directory:
```
# Your project structure:
your-project/
├── .stressmaster/
│   └── config/
│       └── ai-config.json   # ← Created by StressMaster or setup/switch scripts
├── .env                     # ← Or use this for env vars
└── package.json
```

File location: `.stressmaster/config/ai-config.json` in your project directory (where you run `stressmaster`)
Add configuration to your project's package.json:
```json
{
  "name": "my-project",
  "stressmaster": {
    "provider": "claude",
    "apiKey": "your-api-key",
    "model": "claude-3-5-sonnet-20241022"
  }
}
```

StressMaster automatically creates a configuration file on first use. You can switch between AI providers using simple commands:
Use the interactive setup wizard to switch providers:
```shell
stressmaster setup
```

Or manually edit the configuration file (see below).
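If you prefer to script the manual edit, one possible sketch (this assumes `jq` is installed and that the config file has the shape shown in the next section; it is not a StressMaster command):

```shell
# Rewrite the provider and model fields of the generated config
cfg=.stressmaster/config/ai-config.json
jq '.provider = "openai" | .model = "gpt-4"' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
```

The temp-file-and-move pattern avoids truncating the config if `jq` fails partway through.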
The AI configuration is stored in .stressmaster/config/ai-config.json (automatically created on first use):
```json
{
  "provider": "claude",
  "model": "claude-3-5-sonnet-20241022",
  "endpoint": "https://api.anthropic.com/v1",
  "maxRetries": 3,
  "timeout": 30000,
  "options": {
    "temperature": 0.1
  }
}
```

- OpenAI: Get API key from OpenAI and configure via `stressmaster setup`
- Claude: Get API key from Anthropic and configure via `stressmaster setup`
- OpenRouter: Get API key from OpenRouter and configure via `stressmaster setup`
- Gemini: Get API key from Google AI and configure via `stressmaster setup`

Note: The `config/ai-config.json` file contains API keys and is automatically excluded from git. Use `config/ai-config.example.json` as a reference.
StressMaster provides a powerful command-line interface with natural language processing:
```shell
# Show help
stressmaster --help
sm --help

# Show version
stressmaster --version
sm --version

# Interactive mode
stressmaster

# Run a test directly
stressmaster "send 10 GET requests to https://httpbin.org/get"

# Export results
stressmaster export html
sm export json --include-raw
```

```shell
# Basic GET test
stressmaster "send 5 GET requests to https://httpbin.org/get"

# POST with JSON payload
stressmaster "POST 20 requests with JSON payload to https://api.example.com/users"

# Spike test
stressmaster "spike test with 100 requests in 60 seconds to https://api.example.com"

# Ramp-up test
stressmaster "ramp up from 10 to 100 requests over 2 minutes to https://api.example.com"

# Stress test
stressmaster "stress test with 500 requests to https://api.example.com"

# Random burst test
stressmaster "random burst test with 50 requests to https://api.example.com"
```

```shell
# Export to different formats
stressmaster export json
stressmaster export csv
stressmaster export html

# Include raw data
stressmaster export json --include-raw

# Include recommendations
stressmaster export html --include-recommendations
```

When you run `stressmaster` without arguments, you enter interactive mode where you can use structured commands:
Configuration Commands:

```
└─ stressmaster ❯ config show              # Show current configuration
└─ stressmaster ❯ config set key value     # Set configuration value
└─ stressmaster ❯ config init              # Initialize configuration
```

File Management:

```
└─ stressmaster ❯ file list                # List all files
└─ stressmaster ❯ file list *.json         # List JSON files
└─ stressmaster ❯ file validate @file.json # Validate file reference
└─ stressmaster ❯ file search pattern      # Search for files
```

Results & Export:

```
└─ stressmaster ❯ results list             # List recent test results
└─ stressmaster ❯ results show <id>        # Show detailed result
└─ stressmaster ❯ export json              # Export last result as JSON
└─ stressmaster ❯ export csv               # Export as CSV
└─ stressmaster ❯ export html              # Export as HTML report
```

OpenAPI Integration:

```
└─ stressmaster ❯ openapi parse @api.yaml    # Parse OpenAPI spec
└─ stressmaster ❯ openapi list @api.yaml     # List endpoints
└─ stressmaster ❯ openapi payloads @api.yaml # Generate payloads
└─ stressmaster ❯ openapi curl @api.yaml     # Generate cURL commands
```

File Autocomplete: Press Tab after typing @ to see file suggestions!
- `stressmaster` - Full command name
- `sm` - Short alias for quick commands
StressMaster supports multiple AI model configurations. Choose the setup that best fits your needs:
Use Anthropic Claude directly or via OpenRouter for reliable, high-quality parsing.
Advantages: Better performance, more reliable, no local setup
1. Get OpenAI API Key:
   - Visit OpenAI Platform
   - Create a new API key
   - Copy the key
2. Configure StressMaster:
   ```shell
   # Edit your .env file
   AI_PROVIDER=openai
   OPENAI_API_KEY=your-api-key-here
   OPENAI_MODEL=gpt-4  # or use gpt-3.5-turbo for cost savings
   ```
3. Test Configuration:
   ```shell
   # Test the API connection
   curl -H "Authorization: Bearer your-api-key" \
     https://api.openai.com/v1/models
   ```
Advantages: Excellent reasoning, good for complex parsing
1. Get Anthropic API Key:
   - Visit Anthropic Console
   - Create a new API key
   - Copy the key
2. Configure StressMaster:
   ```shell
   # Edit your .env file
   AI_PROVIDER=anthropic
   ANTHROPIC_API_KEY=your-api-key-here
   ANTHROPIC_MODEL=claude-3-sonnet-20240229
   ```
Advantages: Good performance, competitive pricing
1. Get Google API Key:
   - Visit Google AI Studio
   - Create a new API key
   - Copy the key
2. Configure StressMaster:
   ```shell
   # Edit your .env file
   AI_PROVIDER=gemini
   GEMINI_API_KEY=your-api-key-here
   GEMINI_MODEL=gemini-pro
   ```
See .stressmaster/config/ai-config.json and config/ai-config.example.json for up-to-date examples of configuring Claude, OpenRouter, OpenAI, or Gemini.
| Provider | Model | Cost | Performance | Setup Complexity |
|---|---|---|---|---|
| OpenAI | GPT-3.5-turbo | $0.0015/1K tokens | Excellent | Easy |
| OpenAI | GPT-4 | $0.03/1K tokens | Best | Easy |
| Anthropic | Claude 3 Sonnet | $0.003/1K tokens | Excellent | Easy |
| Google | Gemini Pro | $0.0005/1K tokens | Good | Easy |
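For a rough sense of scale, per-prompt parsing cost follows directly from the table; the 1,000-token prompt size below is an illustrative assumption, not a measured figure:

```shell
# Cost of parsing one 1,000-token prompt at the table's per-1K-token rates
tokens=1000
awk -v t="$tokens" 'BEGIN {
  printf "GPT-3.5-turbo:   $%.4f\n", t / 1000 * 0.0015
  printf "GPT-4:           $%.4f\n", t / 1000 * 0.03
  printf "Claude 3 Sonnet: $%.4f\n", t / 1000 * 0.003
  printf "Gemini Pro:      $%.4f\n", t / 1000 * 0.0005
}'
```

Even at GPT-4 rates, a single parse costs a few cents; provider choice matters mostly for very high test volumes.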
```shell
# Test OpenAI
curl -H "Authorization: Bearer your-key" \
  https://api.openai.com/v1/models

# Test Anthropic
curl -H "x-api-key: your-key" \
  https://api.anthropic.com/v1/models

# Test Gemini
curl "https://generativelanguage.googleapis.com/v1beta/models?key=your-key"
```

```shell
# Basic localhost test
stressmaster "send 10 GET requests to http://localhost:3000/api/v1/users"

# POST with localhost
stressmaster "POST 20 requests to http://localhost:8080/api/orders with JSON body @payload.json"

# Spike test on local API
stressmaster "spike test with 100 requests in 30 seconds to http://localhost:5000/api/products"

# Test with headers
stressmaster "send 50 POST requests to http://localhost:3000/api/auth/login with header Content-Type application/json and JSON body @login.json"

# Increment variables in localhost tests
stressmaster "send 10 POST requests to http://localhost:3000/api/users with JSON body @user-data.json increment userId"
```

```shell
# Simple GET request
stressmaster "send 50 GET requests to https://api.example.com/users"

# POST with JSON payload
stressmaster "POST 200 requests to https://api.example.com/orders with JSON body @order.json"

# POST with inline JSON
stressmaster "POST 10 requests to https://api.example.com/users with JSON body {\"name\":\"test\",\"email\":\"test@example.com\"}"
```

```shell
# Spike test - sudden load increase
stressmaster "spike test with 1000 requests in 10 seconds to https://api.example.com/products"

# Stress test with ramp-up
stressmaster "stress test starting with 10 users, ramping up to 500 users over 10 minutes to https://api.example.com/search"

# Endurance test - long duration
stressmaster "endurance test with 50 constant users for 2 hours to https://api.example.com/health"

# Volume test - high concurrency
stressmaster "volume test with 500 concurrent users for 5 minutes to https://api.example.com/data"

# Baseline test - establish baseline
stressmaster "baseline test with 10 requests to https://api.example.com/users"
```

Maintain 100 requests per second to https://api.example.com/data for 10 minutes
Start with 10 RPS, increase to 200 RPS over 5 minutes, then maintain for 15 minutes
Load test in steps: 50 users for 2 minutes, then 100 users for 2 minutes, then 200 users for 2 minutes
Key configuration options in .env:
```shell
# Application settings
NODE_ENV=production
APP_PORT=3000

# AI Provider settings
AI_PROVIDER=claude
AI_MODEL=claude-3-5-sonnet-20241022

# API Keys (if using cloud providers)
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GEMINI_API_KEY=your-gemini-key

# Resource limits
APP_MEMORY_LIMIT=1g
K6_MEMORY_LIMIT=2g
```

The AI can generate various payload types:
- Random IDs: `{randomId}`, `{uuid}`
- Timestamps: `{timestamp}`, `{isoDate}`
- Random Data: `{randomString}`, `{randomNumber}`
- Sequential Data: `{sequence}`, `{counter}`
Example:
POST to https://api.example.com/users with payload:
```json
{
  "id": "{uuid}",
  "name": "{randomString}",
  "email": "user{sequence}@example.com",
  "timestamp": "{isoDate}"
}
```
The tool provides comprehensive metrics:
- Response Times: Min, max, average, and percentiles (50th, 90th, 95th, 99th)
- Throughput: Requests per second and bytes per second
- Error Rates: Success/failure ratios and error categorization
- Resource Usage: CPU and memory consumption during tests
After each test, the AI analyzes results and provides:
- Performance bottleneck identification
- Optimization suggestions
- Capacity planning recommendations
- Error pattern analysis
Results can be exported in multiple formats:
- JSON: Raw data for programmatic analysis
- CSV: Spreadsheet-compatible format
- HTML: Rich visual reports with charts
Test API with JWT authentication:
1. POST login to get token
2. Use token for subsequent requests
3. Test 500 authenticated requests per minute
Send POST requests to https://api.example.com/orders with complex JSON:

```json
{
  "orderId": "{sequence}",
  "customer": {
    "name": "{randomString}",
    "email": "customer{sequence}@example.com"
  },
  "items": [
    {
      "productId": "PROD-{randomNumber}",
      "quantity": "{randomNumber:1-10}"
    }
  ]
}
```
For high-volume or long-duration tests, ensure you have sufficient system resources:
- Memory: K6 executor may require additional memory for large tests
- Network: Ensure stable internet connection for AI API calls
- Storage: Test results are stored locally in the `.stressmaster/` directory
Verify your setup:
```shell
# Check StressMaster installation
stressmaster --version

# Check K6 installation
k6 version

# Test AI provider configuration
stressmaster setup
```

Monitor system resources using your OS tools (Activity Monitor on macOS, Task Manager on Windows, htop on Linux).
```shell
# Verify target API accessibility
curl -I https://your-target-api.com

# Check K6 installation
k6 version

# Verify AI provider configuration
stressmaster setup
```

- All API calls use HTTPS
- Input validation on all user inputs
- Secure storage of API keys in configuration files
- API keys stored locally in `.stressmaster/config/ai-config.json` (excluded from git)
- Test results stored locally in the `.stressmaster/` directory
- No data sent to external services except configured AI providers
```shell
npm install -g stressmaster
```

```shell
git clone https://github.com/mumzworld-tech/StressMaster.git
cd StressMaster
npm install
npm run build
npm link
```

```shell
# Clone repository
git clone <repository-url>
cd stressmaster

# Install dependencies
npm install

# Run in development mode
npm run dev

# Run tests
npm test
```

```
src/
├── interfaces/        # User interfaces
│   └── cli/           # Command-line interface
├── core/              # Core functionality
│   ├── parser/        # AI command parsing
│   ├── generator/     # K6 script generation
│   ├── executor/      # Test execution
│   └── analyzer/      # Results analysis
├── types/             # TypeScript definitions
└── utils/             # Utility functions
```
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Review the examples in this README for usage patterns
- Check the CHANGELOG.md for recent updates
- Open an issue on GitHub for bugs or feature requests