"The keyboard hums, the screen aglow,
AI's wisdom, a steady flow.
Will robots take over, it's quite the fright,
Or just provide insights, day and night?
We ponder and chat, with code as our guide,
Is AI our helper or our human pride?"
Perspt (pronounced "perspect," short for Personal Spectrum Pertaining Thoughts) is a high-performance command-line interface (CLI) application that gives you a peek into the mind of Large Language Models (LLMs). Built with Rust for speed and reliability, it allows you to chat with various AI models from multiple providers directly in your terminal using the modern genai crate's unified API.
- 🚀 Latest Model Support: Built on the modern `genai` crate with support for state-of-the-art models like OpenAI GPT-5.2, Google Gemini 3, and Anthropic Claude Opus 4.5
- ⚡ Real-time Streaming: Ultra-responsive streaming responses with proper reasoning chunk handling
- 🛡️ Rock-solid Reliability: Comprehensive panic recovery and error handling that keeps your terminal safe
- 🎨 Beautiful Interface: Modern terminal UI with markdown rendering and smooth animations
- 🤖 Zero-Config Startup: Automatic provider detection from environment variables - just set your API key and go!
- 🔧 Flexible Configuration: CLI arguments, environment variables, and JSON config files all work seamlessly
- 🤖 SRBN Agent Mode: NEW! Autonomous coding assistant using Stabilized Recursive Barrier Networks - decomposes tasks, generates code, verifies via LSP, and self-corrects errors.
- 🎨 Interactive Chat Interface: A colorful and responsive chat interface powered by Ratatui with smooth scrolling and custom markdown rendering.
- 🖥️ Simple CLI Mode: Minimal command-line mode for direct Q&A without TUI overlay - perfect for scripting, accessibility, or Unix-style workflows.
- ⚡ Advanced Streaming: Real-time streaming of LLM responses with support for reasoning chunks and proper event handling.
- 🔬 LSP Integration: Built-in Language Server Protocol client using `ty` for Python - provides real-time type checking and error detection.
- 🧪 Test Runner: Integrated pytest runner with V_log (Logic Energy) calculation from weighted test failures.
- 🤖 Automatic Provider Detection: Zero-config startup that detects and uses available providers based on environment variables.
- 🔀 Latest Provider Support: Built on the modern `genai` crate with support for cutting-edge models.
- 📊 Token Budget Tracking: Tracks input/output tokens and cost estimation with configurable limits.
- 🔧 Retry Policy: PSP-4 compliant retry limits (3 for compilation, 5 for tools) with automatic escalation.
- 💾 Conversation Export: Save your chat conversations to text files using the `/save` command.
- 📜 Custom Markdown Parser: Built-in markdown parser optimized for terminal rendering.
- 🛡️ Graceful Error Handling: Robust handling of network issues, API errors, and edge cases.
Perspt features intelligent automatic provider detection. Simply set an environment variable for any supported provider, and Perspt will automatically detect and use it!
Priority Detection Order:
- OpenAI (`OPENAI_API_KEY`)
- Anthropic (`ANTHROPIC_API_KEY`)
- Google Gemini (`GEMINI_API_KEY`)
- Groq (`GROQ_API_KEY`)
- Cohere (`COHERE_API_KEY`)
- XAI (`XAI_API_KEY`)
- DeepSeek (`DEEPSEEK_API_KEY`)
- Ollama (no API key needed - auto-detected if running)
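The priority order above amounts to a first-match scan over the known API-key variables. A minimal sketch of that logic (the `detect_provider` helper and its signature are illustrative, not Perspt's actual internals):

```rust
use std::env;

// Priority order mirrors the list above: the first variable that is set wins.
const PROVIDERS: &[(&str, &str)] = &[
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("gemini", "GEMINI_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("cohere", "COHERE_API_KEY"),
    ("xai", "XAI_API_KEY"),
    ("deepseek", "DEEPSEEK_API_KEY"),
];

/// Return the first provider whose API-key variable is set; `None` means
/// fall back to Ollama if a local server is running.
fn detect_provider(get: impl Fn(&str) -> Option<String>) -> Option<&'static str> {
    for &(name, var) in PROVIDERS {
        if get(var).is_some() {
            return Some(name);
        }
    }
    None
}

fn main() {
    match detect_provider(|var| env::var(var).ok()) {
        Some(p) => println!("auto-detected provider: {p}"),
        None => println!("no cloud API key found; trying Ollama"),
    }
}
```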
Quick Start:

```bash
# Set your API key
export OPENAI_API_KEY="sk-your-openai-key"

# That's it! Start chatting
./target/release/perspt
```

Read the perspt book - This illustrated guide walks through the project and explains key concepts.
- Rust: Ensure you have the Rust toolchain installed. Get it from rustup.rs.
- 🔑 LLM API Key: For cloud providers, you'll need an API key:
- OpenAI: platform.openai.com (supports GPT-5.2, o3-mini, o1-preview)
- Anthropic: console.anthropic.com (supports Claude Opus 4.5)
- Google Gemini: aistudio.google.com (supports Gemini 3 Flash/Pro)
- Groq: console.groq.com
- Cohere: dashboard.cohere.com
- XAI: console.x.ai
- DeepSeek: platform.deepseek.com
- Ollama: ollama.ai (no API key needed - local models)
```bash
# Clone the repository
git clone https://github.com/eonseed/perspt.git
cd perspt

# Build the project
cargo build --release

# Run Perspt
./target/release/perspt
```

Perspt can be configured using environment variables, a `config.json` file, or command-line arguments.
Environment Variables (Recommended):

```bash
export OPENAI_API_KEY="sk-your-key"
./target/release/perspt
```

Config File (config.json):

```json
{
  "provider_type": "openai",
  "default_model": "gpt-5.2",
  "api_key": "sk-your-api-key"
}
```

CLI Arguments:

```bash
perspt --provider-type anthropic --model claude-opus-4.5
perspt --list-models  # List available models
```

| Option | Description |
|---|---|
| `-c, --config <FILE>` | Path to configuration file |
| `-p, --provider-type <TYPE>` | Provider: openai, anthropic, gemini, groq, cohere, xai, deepseek, ollama |
| `-k, --api-key <KEY>` | API key for the provider |
| `-m, --model <MODEL>` | Model name (e.g., gpt-5.2, claude-opus-4.5) |
| `--provider <PROFILE>` | Provider profile from config |
| `-l, --list-models` | List available models |
| `simple-chat` | Use simple CLI mode (no TUI) |
| `--log-file <FILE>` | Log session to file (simple-chat only) |
Agent Mode uses the Stabilized Recursive Barrier Network (SRBN) to autonomously decompose coding tasks, generate code, and verify correctness via LSP diagnostics.
```bash
# Basic agent mode - create a Python project
perspt agent "Create a Python calculator with add, subtract, multiply, divide"

# With explicit workspace directory
perspt agent -w /path/to/project "Add unit tests for the existing API"

# Auto-approve all actions (no prompts)
perspt agent -y "Refactor the parser for better error handling"
```

The SRBN control loop executes these steps for each task:
- Sheafification - Architect decomposes the task into a JSON TaskPlan
- Speculation - Actuator generates code for each sub-task
- Verification - LSP diagnostics compute the Lyapunov Energy $V(x)$
- Convergence - If $V(x) > \epsilon$, retry with error feedback
- Commit - When stable, record changes in the Merkle Ledger
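The loop above can be sketched as a bounded retry around a verification step (the function names and retry bound here are illustrative, not Perspt's actual internals):

```rust
/// One SRBN pass over a sub-task: speculate, verify, and retry until the
/// Lyapunov energy V(x) falls below epsilon or the retry budget runs out.
/// `energy_of(attempt)` stands in for code generation plus LSP/test verification.
fn run_task(
    max_retries: u32,
    epsilon: f64,
    mut energy_of: impl FnMut(u32) -> f64,
) -> Result<u32, &'static str> {
    for attempt in 0..=max_retries {
        let v = energy_of(attempt); // Verification: diagnostics -> V(x)
        if v <= epsilon {
            return Ok(attempt); // Convergence: stable, commit to the Merkle Ledger
        }
        // Not converged: error feedback is folded into the next speculation.
    }
    Err("escalate to user") // retry budget exhausted
}

fn main() {
    // Simulated run where the energy drops below epsilon on the second attempt.
    let energies = [3.0, 0.5, 0.1, 0.1];
    match run_task(3, 1.0, |a| energies[a as usize]) {
        Ok(n) => println!("converged after {n} retries"),
        Err(e) => println!("{e}"),
    }
}
```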
Lyapunov Energy:
| Component | Source | Default Weight |
|---|---|---|
| | LSP diagnostics (errors, warnings) | |
| | Structural analysis | |
| V_log | Test failures (weighted by criticality) | |
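The energy itself is a weighted sum of the component terms in the table. A minimal sketch (the weight values below are placeholders for illustration, not Perspt's defaults):

```rust
/// Per-component weights for the Lyapunov energy (placeholder values).
struct EnergyWeights {
    lsp: f64,        // LSP diagnostics (errors, warnings)
    structural: f64, // structural analysis
    tests: f64,      // V_log: test failures weighted by criticality
}

/// V(x) as a weighted sum of the component energies.
fn lyapunov_energy(w: &EnergyWeights, lsp_errors: u32, struct_issues: u32, v_log: f64) -> f64 {
    w.lsp * lsp_errors as f64 + w.structural * struct_issues as f64 + w.tests * v_log
}

fn main() {
    let w = EnergyWeights { lsp: 1.0, structural: 0.5, tests: 2.0 };
    // 1.0*2 + 0.5*1 + 2.0*1.5 = 5.5
    println!("V(x) = {}", lyapunov_energy(&w, 2, 1, 1.5));
}
```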
```text
perspt agent [OPTIONS] <TASK>

Options:
  -w, --workspace <DIR>    Working directory (default: current)
  -y, --yes                Auto-approve all actions
  -k, --complexity <K>     Max tasks before approval (default: 5)
  --architect-model <M>    Model for planning
  --actuator-model <M>     Model for code generation
  --max-tokens <N>         Token budget limit (default: 100000)
  --max-cost <USD>         Maximum cost in dollars
```

| Error Type | Max Retries | Action on Exhaustion |
|---|---|---|
| Compilation errors | 3 | Escalate to user |
| Tool failures | 5 | Escalate to user |
| Review rejections | 3 | Escalate to user |
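The retry table maps directly onto a small escalation check; a sketch (the enum and function names are illustrative, not Perspt's actual internals):

```rust
#[derive(Clone, Copy)]
enum ErrorKind {
    Compilation, // 3 retries (PSP-4)
    Tool,        // 5 retries
    Review,      // 3 retries
}

/// PSP-4 retry ceilings from the table above.
fn max_retries(kind: ErrorKind) -> u32 {
    match kind {
        ErrorKind::Compilation => 3,
        ErrorKind::Tool => 5,
        ErrorKind::Review => 3,
    }
}

/// True once the retry budget for this error class is exhausted,
/// at which point the agent escalates to the user.
fn should_escalate(kind: ErrorKind, attempts: u32) -> bool {
    attempts >= max_retries(kind)
}

fn main() {
    println!(
        "escalate tool failure after 5 attempts: {}",
        should_escalate(ErrorKind::Tool, 5)
    );
}
```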
A minimal, Unix-like command prompt interface for direct Q&A:
```bash
# Basic simple CLI mode
perspt simple-chat

# With session logging
perspt simple-chat --log-file session.txt

# Perfect for scripting
echo "What is quantum computing?" | perspt simple-chat
```

| Command | Description |
|---|---|
| `/save` | Save conversation with timestamp |
| `/save <file>` | Save to specific file |
| Key | Action |
|---|---|
| Enter | Send message |
| Esc | Exit application |
| Ctrl+C / Ctrl+D | Exit with cleanup |
| ↑/↓ Arrow Keys | Scroll chat history |
| Page Up/Down | Fast scroll |
Ollama provides local AI models without API keys or internet connectivity.
```bash
# Install Ollama
brew install ollama  # macOS
# or: curl -fsSL https://ollama.ai/install.sh | sh  # Linux

# Start and pull a model
ollama serve
ollama pull llama3.2

# Use with Perspt
perspt --provider-type ollama --model llama3.2
```

- 🔒 Privacy: All processing happens locally
- 💰 Cost-effective: No API fees or usage limits
- ⚡ Offline capable: Works without internet
Perspt is organized as a Cargo workspace:
```text
perspt/crates/
├── perspt-cli      # CLI entry point
├── perspt-core     # Config, LLM provider (genai)
├── perspt-tui      # Terminal UI (Ratatui)
├── perspt-agent    # SRBN orchestrator, tools, LSP
├── perspt-policy   # Security sandbox
└── perspt-sandbox  # Process isolation (future)
```
- genai crate (v0.3.5): Unified access to all LLM providers with streaming support
- Custom Markdown Parser: Built-in parser optimized for terminal rendering
- Ratatui TUI: Modern terminal UI framework with responsive design
- Tokio Async Runtime: Efficient concurrent operations and streaming
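The streaming pipeline boils down to a producer pushing typed chunks over a channel while the UI drains them as they arrive. A minimal sketch with std channels (the `Chunk` variants are illustrative; Perspt's real events come from the genai crate's async stream under Tokio):

```rust
use std::sync::mpsc;
use std::thread;

// Event types distinguishing reasoning chunks from answer text (illustrative).
enum Chunk {
    Reasoning(String),
    Text(String),
    Done,
}

/// Drain the stream, appending answer text as it arrives; reasoning chunks
/// would be rendered separately (e.g. dimmed) in the real TUI.
fn render(rx: mpsc::Receiver<Chunk>) -> String {
    let mut out = String::new();
    for ev in rx {
        match ev {
            Chunk::Reasoning(_) => {} // handled, not shown in this sketch
            Chunk::Text(t) => out.push_str(&t),
            Chunk::Done => break,
        }
    }
    out
}

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for ev in [
            Chunk::Reasoning("planning...".into()),
            Chunk::Text("Hello".into()),
            Chunk::Text(", world".into()),
            Chunk::Done,
        ] {
            tx.send(ev).unwrap();
        }
    });
    println!("{}", render(rx));
}
```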
"API key not found" error:
```bash
# Use environment variable
export OPENAI_API_KEY="your-key-here"

# Or use CLI argument
perspt --provider-type openai --api-key YOUR_KEY
```

Connection timeout:
- Check internet connection
- Verify API key is valid
- Try a different model
Ollama not connecting:
```bash
# Ensure Ollama is running
ollama serve

# Check connection
curl http://localhost:11434/api/tags
```

Contributions are welcome! See CONTRIBUTING.md for guidelines.
```bash
# Run tests
cargo test --workspace

# Check formatting
cargo fmt --check
```

This project is licensed under the LGPL-3.0 License - see the LICENSE file for details.
- genai - Unified LLM provider access
- Ratatui - Terminal UI framework
- Tokio - Async runtime
- All the LLM providers for their amazing APIs
Made with ❤️ by the Perspt Team
Your Terminal's Window to the AI World 🤖