A command-line tool that automatically generates conventional commit messages using AI, based on your staged git changes.
Your commit messages will follow the Conventional Commits format, for example: `feat(api): add user authentication system`.

## Features

- 🤖 AI-powered commit message generation with multiple provider options:
  - Local Ollama support - Completely FREE and private!
    - No API key required
    - Works offline
    - Supports various models (codellama, llama2, etc.)
  - Local LMStudio support - Completely FREE and private!
    - Works with any model you have in LMStudio
    - Uses the OpenAI-compatible API
    - Great for privacy and offline use
  - OpenRouter (default) using `google/gemini-flash-1.5-8b` - SUPER CHEAP! Around $0.00001/commit, i.e. about $1 per 100K commit messages!
  - Custom API support - Bring your own provider!
- 🧠 Adaptive prompting strategies based on model size (small/medium/large)
- 📝 Follows Conventional Commits format
- 🔒 Secure local API key storage
- 🚀 Automatic git commit and push
- 🐛 Debug mode for troubleshooting
- 💻 Cross-platform support (Windows, Linux, macOS)
## Prerequisites

- Git installed and configured
- For Windows: Git Bash or WSL installed
- For Linux/macOS: Bash shell environment
- `curl` installed
- One of the following:
  - An OpenRouter API key (default)
  - Ollama installed and running locally
  - LMStudio installed and running locally
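A quick way to confirm the required tools are on your PATH before installing (this loop and its messages are illustrative, not part of cmai):

```shell
# Check that git and curl are available before installing
for cmd in git curl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: ok"
  else
    echo "$cmd: missing"
  fi
done
```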
## Installation

### Linux/macOS

1. Clone this repository:

   ```bash
   git clone https://github.com/mrgoonie/cmai.git
   cd cmai
   ```

2. Run the installation script:

   ```bash
   ./install.sh
   ```

   This will:
   - Install the Python script globally as `cmai`
   - Copy prompt templates
   - Set up proper permissions
   - Create necessary directories
### Windows

1. Clone this repository:

   ```bash
   git clone https://github.com/mrgoonie/cmai.git
   cd cmai
   ```

2. Run the installation script in Git Bash:

   ```bash
   ./install.sh
   ```

   Or manually:
   - Copy `git-commit.py` to `%USERPROFILE%\git-commit-ai\`
   - Copy the `prompts/` directory to `%USERPROFILE%\git-commit-ai\`
   - Add the directory to your PATH environment variable
   - Rename `git-commit.py` to `cmai`
## Configuration

Set up your OpenRouter API key:

```bash
cmai <your_openrouter_api_key>
```

The API key will be securely stored in:

- Linux/macOS: `~/.config/git-commit-ai/providers/openrouter.json`
- Windows: `%USERPROFILE%\.config\git-commit-ai\providers\openrouter.json`
### Ollama Setup

1. Install Ollama from https://ollama.ai/
2. Pull your preferred model (e.g., deepseek-r1):

   ```bash
   ollama pull deepseek-r1:7b
   ```

3. Make sure Ollama is running in the background
### LMStudio Setup

1. Install LMStudio from https://lmstudio.ai/
2. Download and load your preferred model in LMStudio
3. Start the local server in LMStudio by clicking "Start Server" in the Chat tab
4. The server will run on http://localhost:1234/v1 by default
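You can verify the local server is reachable by querying its OpenAI-compatible `/v1/models` endpoint (the fallback message below is just for illustration):

```shell
# List the models the local server exposes; print a hint if it is not running
curl -s http://localhost:1234/v1/models || echo "LMStudio server is not reachable on port 1234"
```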
## Usage

1. Make your code changes
2. Generate a commit message and commit the changes:

   ```bash
   cmai
   ```

To also push the changes to the remote:

```bash
cmai --push
# or
cmai -p
```

### Providers

By default, CMAI uses OpenRouter with the `google/gemini-flash-1.5-8b` model. You can switch between different providers:
```bash
# Use Ollama (local)
cmai --use-ollama

# Use LMStudio (local)
cmai --use-lmstudio

# Switch back to OpenRouter
cmai --use-openrouter

# Use a custom provider
cmai --use-custom http://your-api-endpoint
```

The provider choice is saved for future use, so you only need to specify it once.
### Models

When using OpenRouter, you can choose from their available models:

```bash
cmai --model qwen/qwen-2.5-coder-32b-instruct
```

List of available models: https://openrouter.ai/models

When using Ollama, first pull your desired model:

```bash
# Pull the model
ollama pull deepseek-r1:7b

# Use the model
cmai --model deepseek-r1:7b
```

List of available models: https://ollama.ai/library

Popular models for commit messages:

- `deepseek-r1` - Optimized for code understanding
- `llama2` - Good all-around performance
- `mistral` - Fast and efficient
Running `cmai` will:

- Stage all changes
- Generate a commit message using AI
- Commit the changes
- Push to the remote repository (if the `--push` flag is used)
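For clarity, these steps correspond to the following plain git commands, shown here in a throwaway scratch repository with a hand-written message standing in for the AI-generated one:

```shell
# Manual equivalent of the cmai workflow, run in a temporary repo
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > README.md
git add -A                                  # 1. stage all changes
git commit -qm "docs(readme): add greeting" # 2. cmai generates this message for you
git log --oneline                           # 3. inspect the resulting commit
# (cmai --push would additionally run: git push)
```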
### Debugging

To see detailed information about what's happening:

```bash
cmai --debug
```

You can combine flags:

```bash
cmai --debug --push
```

### Command Reference

```
Usage: cmai [options] [api_key]

Options:
  --debug              Enable debug mode
  --push, -p           Push changes after commit
  --model <model>      Use a specific model (default: google/gemini-flash-1.5-8b)
  --use-ollama         Use Ollama as provider (saves for future use)
  --use-lmstudio       Use LMStudio as provider (saves for future use)
  --use-openrouter     Use OpenRouter as provider (saves for future use)
  --use-custom <url>   Use custom provider with base URL (saves for future use)
  -h, --help           Show this help message
```

### Examples

```bash
# First time setup with API key
cmai <your_openrouter_api_key>

# Normal usage
cmai

# Use a different OpenRouter model
cmai --model "google/gemini-flash-1.5-8b"

# Debug mode with push
cmai --debug --push
```

```bash
# Switch to Ollama provider
cmai --use-ollama

# Use a specific deepseek-r1 model
cmai --model deepseek-r1:7b

# Debug mode with Ollama
cmai --debug --use-ollama
```

```bash
# Switch to LMStudio provider
cmai --use-lmstudio

# Use a specific model in LMStudio
cmai --model "your-model-name"

# Debug mode with LMStudio
cmai --debug --use-lmstudio
```

```bash
# Use a custom API provider
cmai --use-custom http://my-api.com

# Use custom provider with a specific model
cmai --use-custom http://my-api.com --model my-custom-model
```

```bash
# Commit and push
cmai --push
# or
cmai -p

# Debug mode
cmai --debug

# Use a different API endpoint
cmai --base-url https://api.example.com/v1

# Combine multiple flags
cmai --debug --push --model your-model --base-url https://api.example.com/v1
```

Example generated commit messages:
```
feat(api): add user authentication system
fix(data): resolve memory leak in data processing
docs(api): update API documentation
style(ui): improve responsive layout for mobile devices
```
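All of these follow the `type(scope): subject` shape, which you can sanity-check with a simple pattern (this grep is a rough illustration, not cmai's actual validation logic):

```shell
# Minimal Conventional Commits shape check: type(scope): subject
msg="feat(api): add user authentication system"
if echo "$msg" | grep -Eq '^(feat|fix|docs|style|refactor|perf|test|chore)(\([a-z0-9-]+\))?: .+'; then
  echo "valid"   # prints "valid" for this message
else
  echo "invalid"
fi
```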
## File Structure

Linux/macOS:

```
~
├── git-commit-ai/
│   ├── git-commit.py
│   └── prompts/             # AI prompt templates
├── .config/
│   └── git-commit-ai/
│       ├── config.json      # Configuration (API keys, models, providers)
│       └── providers/       # Provider-specific configurations
└── bin/
    └── cmai -> ~/git-commit-ai/git-commit.py
```
Windows:

```
%USERPROFILE%
├── git-commit-ai/
│   ├── git-commit.py
│   └── prompts/
└── .config/
    └── git-commit-ai/
        ├── config.json
        └── providers/
```
## Security

- The API key is stored locally with restricted permissions (600)
- The configuration directory is protected (700)
- No data is stored or logged except the API key
- All communication is done via HTTPS
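The expected permission bits can be reproduced and inspected like this, using GNU `stat` in a scratch directory (on a real install, substitute `~/.config/git-commit-ai` for `$dir`):

```shell
# Recreate the expected layout in a temporary directory and show its modes
dir=$(mktemp -d)
touch "$dir/config.json"
chmod 700 "$dir"                  # directory: owner-only access
chmod 600 "$dir/config.json"      # config file: owner read/write only
stat -c '%a' "$dir"               # prints 700
stat -c '%a' "$dir/config.json"   # prints 600
```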
## Troubleshooting

1. **No API key found**
   - Run `cmai your_openrouter_api_key` to configure

2. **Permission denied**
   - Check file permissions:

     ```bash
     ls -la ~/.config/git-commit-ai
     ```

   - Should show `drwx------` for the directory and `-rw-------` for the config file

3. **Debug mode**
   - Run with the `--debug` flag to see detailed logs
   - Check API responses and git operations

4. **Windows-specific issues**
   - Make sure Git Bash is installed
   - Check if curl is available in Git Bash
   - Verify that the PATH environment variable includes the installation directory
## Uninstallation

Linux/macOS:

```bash
sudo rm /usr/local/bin/cmai
rm -rf ~/git-commit-ai
rm -rf ~/.config/git-commit-ai
```

Windows (Git Bash):

```bash
rm -rf "$USERPROFILE/git-commit-ai"
rm -rf "$USERPROFILE/.config/git-commit-ai"
```

Then remove the directory from your PATH environment variable.
## Testing

This project includes an automated testing framework to evaluate commit message generation quality across different AI models and scenarios.

Test a specific provider and model with multiple scenarios:

```bash
python3 tests/auto_tester.py [--provider PROVIDER] [--model MODEL] [rounds]
```

- `--provider`: Provider to use (ollama, openrouter) - default: ollama
- `--model`: Model name - default: qwen3:4b
- `rounds`: Number of test rounds per scenario - default: 3
Examples:

```bash
# Test the default ollama:qwen3:4b with 5 rounds
python3 tests/auto_tester.py 5

# Test a specific Ollama model
python3 tests/auto_tester.py --provider ollama --model qwen2.5:3b 3

# Test an OpenRouter model
python3 tests/auto_tester.py --provider openrouter --model qwen/qwen-2.5-coder-32b-instruct:free 2
```

Compare multiple models side by side:
```bash
python3 tests/auto_tester.py --multi model1,model2,model3 [rounds]
```

Examples:

```bash
# Compare Ollama models
python3 tests/auto_tester.py --multi qwen3:4b,qwen2.5:3b,qwen3:1.7b 2

# Compare different providers
python3 tests/auto_tester.py --multi qwen3:4b,qwen/qwen-2.5-coder-32b-instruct:free 3
```

For the full list of options:

```bash
python3 tests/auto_tester.py --help
```

The framework includes 9 test scenarios covering different commit types:
- simple_fix: Basic bug fixes
- new_feature: New functionality implementation
- documentation_update: README and docs changes
- refactor_large: Major code restructuring
- style_formatting: Code formatting and style changes
- performance_optimization: Performance improvements
- test_addition: Adding new tests
- chore_dependencies: Dependency updates
- adaptive_prompting_feature: Complex feature implementation (external templates)
Tests generate detailed reports including:
- Overall accuracy statistics
- Per-scenario consistency analysis
- Common issues identification
- Model comparison metrics
- Recommendations for improvement
Reports are saved to `tests/test_report_YYYYMMDD_HHMMSS.txt` and also displayed in the console.
Before running tests:

- For Ollama models: ensure Ollama is running and the models are pulled:

  ```bash
  ollama serve
  ollama pull qwen3:4b
  ollama pull qwen2.5:3b
  ollama pull qwen3:1.7b
  ```

- For OpenRouter models: ensure the API key is configured:

  ```bash
  cmai --api-key your_api_key
  ```
## Contributing

1. Fork the repository
2. Create your feature branch
3. Commit your changes (using `cmai` 😉)
4. Test your changes using the automated testing framework
5. Push to the branch
6. Create a Pull Request
## License

MIT License - see the LICENSE file for details.
## Acknowledgements

- OpenRouter for providing the AI API
- Conventional Commits for the commit message format
## Other Projects

- DigiCord AI - The Most Useful AI Chatbot on Discord
- IndieBacklink.com - Indie Makers Unite: Feature, Support, Succeed
- TopRanking.ai - AI Directory, listing AI products
- ZII.ONE - Personalized Link Shortener
- VidCap.xyz - Extract Youtube caption, download videos, capture screenshot, summarize,…
- ReadTube.me - Write blog articles based on Youtube videos
- BoostTogether.com - The Power of WE in Advertising
- AIVN.Site - Face Swap, Remove BG, Photo Editor,…
- DxUp.dev - Developer-focused platform for app deployment & centralized cloud resource management.

