Open-source, context-aware voice transcription for Linux
An open-source alternative to SuperWhisper (Mac-only), combining OpenAI's Whisper speech-to-text with LLM-powered intelligence for smart, accurate transcriptions that adapt to your workflow.
UltraWhisper goes beyond basic speech-to-text by understanding what you're working on and adapting its transcription accordingly. Whether you're coding in VS Code, browsing GitHub, or working in a terminal, it delivers transcriptions that fit seamlessly into your context.
# Run directly with uvx - no installation needed!
# Set up your config
uvx ultrawhisper setup
# Run it
uvx ultrawhisper
Context-Aware Transcription
- Automatically detects your active application (VS Code, Chrome, terminal, etc.)
- Adapts transcription to preserve code syntax, technical terms, and domain-specific language
LLM-Powered Correction
- Cleans up raw Whisper transcriptions using GPT-4, Claude, or local models
- Applies application-specific prompts for better accuracy
- Gracefully degrades to raw Whisper output if LLM is unavailable
Multi-Provider LLM Support
- OpenAI, Anthropic, and local models (any OpenAI-compatible endpoint)
Flexible Input Methods
- Double-tap: Quickly tap a key twice to toggle recording
- Push-to-talk: Hold to record, release to transcribe
Beautiful Terminal Interface
- Interactive TUI built with prompt-toolkit
- Real-time status display showing LLM connection, context, and system state
- Live logs and configuration visibility
Chat Mode (Conversational AI)
- Voice conversations with your AI assistant
- Maintains conversation history across questions
- Context-aware responses based on your active application
- TTS support for spoken responses
- MCP (Model Context Protocol) integration for extended capabilities
- Web search enabled by default
Privacy-First
- Use local LLMs for complete offline operation
- No data leaves your machine when using local models (see the sketch below)
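For instance, full offline operation could point the LLM step at a locally hosted OpenAI-compatible server such as Ollama. The snippet below is only an illustrative sketch: the key names are guesses, and config.example.yml documents the real schema.
# Illustrative sketch only - key names are guesses, see config.example.yml
llm:
  provider: local
  base_url: http://localhost:11434/v1  # e.g. Ollama's OpenAI-compatible endpoint
  model: llama3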
For regular use, install from PyPI:
# Install with uv
uv pip install ultrawhisper
# Or with pip
pip install ultrawhisper
# Run interactive setup
ultrawhisper setup
# Run it
ultrawhisper
Configuration is stored at ~/.config/ultrawhisper/config.yml. See config.example.yml for a complete example with all options.
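As a rough illustration, a minimal config might look like the sketch below. The key names here are guesses, not the authoritative schema:
# Illustrative sketch of a minimal config - key names are guesses
llm:
  provider: openai   # or anthropic, or a local OpenAI-compatible server
  model: gpt-4
input:
  mode: double-tap   # or push-to-talk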
UltraWhisper dynamically builds LLM prompts by combining:
- Base prompt from your configuration
- Application-specific prompts (VS Code, Chrome, terminals, etc.)
- Pattern matching against window titles (GitHub, Stack Overflow, etc.)
This ensures your transcriptions are corrected appropriately for your current context.
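As a hypothetical sketch of that layering (field names are illustrative, not the actual schema):
# Hypothetical sketch of layered prompts - field names are illustrative
prompts:
  base: "Fix punctuation and obvious mis-hearings without adding content."
  applications:
    vscode: "Preserve identifiers, code syntax, and technical terms."
  window_patterns:
    - match: "GitHub"
      prompt: "Expect repository names, branch names, and Markdown."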
You can switch between Transcription Mode and Question Mode (soon to be called Chat Mode).
- Python: 3.10 or higher
- Operating System: Linux (X11) for full context detection
- Optional Dependencies:
  - xdotool - For advanced context detection
  - x11-utils - For window property detection
  - espeak or festival - For system TTS (question mode)
# Ubuntu/Debian
sudo apt install xdotool x11-utils espeak
# Arch Linux
sudo pacman -S xdotool xorg-xprop espeak
# Fedora
sudo dnf install xdotool xorg-x11-utils espeak
Want to contribute or modify UltraWhisper? Here's how to set up a development environment:
# Clone the repository
git clone https://github.com/casonclagg/ultrawhisper.git
cd ultrawhisper
# Install dependencies
uv sync
# Run from source
uv run ultrawhisper
# Code formatting
uv run black src/
# Type checking
uv run mypy src/
# Linting
uv run flake8 src/
# Build package
uv build
UltraWhisper uses an orchestrator pattern where TranscriptionApp coordinates the following (see the sketch after this list):
- Audio recording via configurable backends
- Whisper transcription (local or API)
- Context detection from active window
- LLM correction with context-aware prompts
- Text output to clipboard or active window
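In simplified, hypothetical Python (TranscriptionApp is the real class name from the project; the collaborator interfaces below are illustrative stand-ins, not UltraWhisper's actual API):
# Simplified, hypothetical sketch of the orchestrator flow.
# Only TranscriptionApp is a name from the project; the helpers are stand-ins.
class TranscriptionApp:
    def __init__(self, recorder, transcriber, context_detector, corrector, output):
        self.recorder = recorder                  # configurable audio backend
        self.transcriber = transcriber            # Whisper, local or API
        self.context_detector = context_detector  # active-window inspection
        self.corrector = corrector                # LLM correction step
        self.output = output                      # clipboard or active window

    def run_once(self) -> None:
        audio = self.recorder.record()                 # 1. capture audio
        raw_text = self.transcriber.transcribe(audio)  # 2. speech-to-text
        context = self.context_detector.detect()       # 3. which app has focus?
        try:
            # 4. context-aware LLM correction
            text = self.corrector.correct(raw_text, context)
        except Exception:
            text = raw_text                            # degrade to raw Whisper output
        self.output.send(text)                         # 5. paste/type the result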
MIT License - See LICENSE for details
Contributions are welcome! Please feel free to submit a Pull Request.
Cason Clagg - GitHub
- Built with OpenAI Whisper
- Uses faster-whisper for optimized inference
- Powered by OpenAI and Anthropic LLMs
- Terminal UI built with prompt-toolkit
