Natural language to shell command translator with a local LLM model

Niko


AI-powered CLI: explain code, generate shell commands, use any LLM provider.

Built in Rust. Works on macOS, Linux, and Windows.

$ niko cmd "find all files larger than 100MB"
find . -type f -size +100M
Copied to clipboard

$ cat main.rs | niko explain
📖 42 lines analyzed — completed in 2.1s
## Overview
...

Features

  • Three Modes β€” cmd, explain, settings
  • Dynamic LLM Providers β€” Any OpenAI-compatible API, Claude, or local Ollama
  • Dynamic Model Selection β€” Fetches available models from the API, no hardcoded lists
  • RAM-Based Restrictions β€” Prevents selecting models too large for your hardware
  • Auto-Install Ollama β€” Installs Ollama automatically if not present
  • Smart Code Chunking β€” Splits large files at function boundaries with context memory between chunks
  • Automatic Retry β€” Exponential backoff for transient failures (timeouts, rate limits, 5xx errors)
  • Connection Pooling β€” Keep-alive HTTP connections for fast sequential LLM calls
  • Command Generation β€” Natural language β†’ shell commands, auto-copied to clipboard
  • Safety Warnings β€” Flags dangerous commands before execution
  • Cross-Platform β€” macOS, Linux (Ubuntu/Debian/etc.), Windows

Install

macOS / Linux

curl -fsSL https://raw.githubusercontent.com/rgcsekaraa/niko-cli/main/install.sh | sh

Windows (PowerShell)

iwr -useb https://raw.githubusercontent.com/rgcsekaraa/niko-cli/main/install.ps1 | iex

From Source (Rust required)

# Install latest version from git
cargo install --git https://github.com/rgcsekaraa/niko-cli

# Or install from local source
cargo install --path .

Quick Start

# First run — interactive setup wizard
niko settings configure

This will:

  1. Show available providers (Ollama, OpenAI, Claude, DeepSeek, Grok, Groq, Mistral, Together, OpenRouter, or custom)
  2. For Ollama: auto-install if needed → list local models → show downloadable models filtered by your RAM → let you pick
  3. For API providers: ask for API key → fetch available models from the API → let you pick
  4. Save everything to ~/.niko/config.yaml

Usage

cmd — Generate Shell Commands

$ niko cmd "find python files modified today"
find . -name "*.py" -mtime 0
Copied to clipboard

$ niko cmd "kill process on port 3000"
$ niko cmd "compress logs folder to tar.gz"
$ niko cmd "git commits from last week"
$ niko cmd "show disk usage by directory"

explain — Explain Code

# From a file
niko explain -f src/main.rs

# Pipe code in
cat complex_module.py | niko explain

# Paste interactively (live line counter, Ctrl-D or two empty lines to finish)
niko explain

For large files, Niko:

  1. Chunks code at function/block boundaries (max 200 lines/chunk)
  2. Carries context — each chunk includes overlapping lines and a running summary from previous chunks
  3. Retries failed LLM calls with exponential backoff (3 attempts, 500ms → 4s delay)
  4. Synthesises chunk analyses into an overall summary with follow-up questions
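
The chunking and context-carrying steps above can be sketched roughly as follows. This is an illustrative Python sketch, not Niko's Rust implementation: it splits on plain line counts rather than detecting real function boundaries, and it shows only the line overlap, not the running summary.

```python
def chunk_lines(lines, max_lines=200, overlap=10):
    """Split code into chunks of at most max_lines (plus overlap),
    repeating the last `overlap` lines of the previous chunk so each
    chunk carries boundary context."""
    chunks = []
    start = 0
    while start < len(lines):
        end = min(start + max_lines, len(lines))
        # Chunks after the first are prefixed with an overlap window
        # from the previous chunk for boundary continuity.
        ctx_start = max(start - overlap, 0) if chunks else start
        chunks.append(lines[ctx_start:end])
        start = end
    return chunks

code = [f"line {i}" for i in range(450)]
parts = chunk_lines(code)
print(len(parts))     # 3 chunks for a 450-line file
print(len(parts[1]))  # 210 lines: 200 new + 10 of overlap
```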

settings — Configuration

# Interactive setup wizard
niko settings configure

# Show current config
niko settings show

# Set a value directly
niko settings set openai.api_key sk-xxx
niko settings set openai.model gpt-4o
niko settings set active_provider openai

# Reset to defaults
niko settings init

# Print config path
niko settings path
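
A dotted key such as openai.model has to be resolved into the nested config structure. A generic dotted-path setter looks like the sketch below; it is illustrative only, and where Niko actually nests these keys inside config.yaml (e.g. under providers:) is an implementation detail not shown here.

```python
def set_dotted(config, dotted_key, value):
    """Assign `value` at a dotted path, creating intermediate dicts."""
    node = config
    *parents, leaf = dotted_key.split(".")
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return config

cfg = {"active_provider": "ollama"}
set_dotted(cfg, "openai.model", "gpt-4o")
set_dotted(cfg, "active_provider", "openai")
print(cfg)  # {'active_provider': 'openai', 'openai': {'model': 'gpt-4o'}}
```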

Override Provider Per-Command

niko cmd "list files" --provider openai
niko explain -f main.rs --provider claude

Reliability & Performance

Niko is designed for production use with reliability and speed:

| Feature | Details |
| --- | --- |
| Streaming | Tokens appear immediately as the LLM generates them (all providers) |
| Retry | 3 attempts with exponential backoff (500ms → 2s + jitter) |
| Retryable errors | Timeouts, connection resets, 429/5xx, rate limits, model loading |
| Connection pooling | HTTP keep-alive, 4 idle connections/host, TCP keepalive 30s |
| Model keep-alive | Ollama keeps the model in VRAM for 30 min (no reload between calls) |
| Flash attention | Enabled by default for Ollama (faster on Apple Silicon / GPU) |
| Adaptive tokens | cmd mode uses 512 max tokens, explain uses 4096 — less KV cache for short tasks |
| Adaptive context | Ollama context window scales with prompt size (4K → 16K) |
| Empty response guard | Detects and retries empty/null LLM responses |
| Truncation detection | Warns when a response hits max_tokens (Claude, OpenAI) |
| Context memory | Multi-chunk explanations carry a 10-line code overlap for boundary continuity |
| Structured errors | Parses API error responses into clear, actionable messages |
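
The retry behavior described above (3 attempts, 500 ms base delay, exponential growth with jitter) corresponds to a standard backoff loop. A minimal sketch, where the retried exception types and the 100 ms jitter range are assumptions rather than details from Niko's source:

```python
import random
import time

def with_retry(call, attempts=3, base=0.5, cap=2.0):
    """Call `call`, retrying transient failures with exponential
    backoff plus jitter: 0.5s, 1.0s, ..., capped at `cap` seconds."""
    for attempt in range(attempts):
        try:
            return call()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # attempts exhausted: surface the error
            delay = min(base * 2 ** attempt, cap) + random.uniform(0, 0.1)
            time.sleep(delay)

# A stand-in for a flaky LLM call: fails twice, then succeeds.
responses = iter([TimeoutError("timeout"), TimeoutError("timeout"), "ok"])
def flaky():
    r = next(responses)
    if isinstance(r, Exception):
        raise r
    return r

print(with_retry(flaky, base=0.01))  # prints: ok  (small base keeps the demo fast)
```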

Supported Providers

| Provider | Type | How to set up |
| --- | --- | --- |
| Ollama | Local (free) | Auto-installed, models downloaded on demand |
| OpenAI | API | niko settings configure → select OpenAI → enter key |
| Claude | API | niko settings configure → select Claude → enter key |
| DeepSeek | API | niko settings configure → select DeepSeek → enter key |
| Grok | API | niko settings configure → select Grok → enter key |
| Groq | API | niko settings configure → select Groq → enter key |
| Mistral | API | niko settings configure → select Mistral → enter key |
| Together | API | niko settings configure → select Together → enter key |
| OpenRouter | API | niko settings configure → select OpenRouter → enter key |
| Custom | API | niko settings configure → choose "Custom" → enter URL + key |

All API providers fetch models dynamically from their /models endpoint — nothing is hardcoded.
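
Listing models dynamically from an OpenAI-compatible provider is a single authenticated GET. The sketch below assumes the standard {"data": [{"id": ...}]} response shape of such endpoints; it is not taken from Niko's source.

```python
import json
import urllib.request

def list_models(base_url, api_key):
    """Fetch model IDs from an OpenAI-compatible /models endpoint."""
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_models(json.load(resp))

def parse_models(payload):
    # OpenAI-compatible APIs return {"data": [{"id": "...", ...}, ...]}
    return sorted(model["id"] for model in payload.get("data", []))

sample = {"data": [{"id": "gpt-4o"}, {"id": "gpt-4o-mini"}]}
print(parse_models(sample))  # ['gpt-4o', 'gpt-4o-mini']
```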

Environment Variables

API keys can also be set via environment variables:

export OPENAI_API_KEY=sk-xxx
export ANTHROPIC_API_KEY=sk-ant-xxx
export DEEPSEEK_API_KEY=xxx
export GROK_API_KEY=xxx
export GROQ_API_KEY=xxx
export TOGETHER_API_KEY=xxx
export MISTRAL_API_KEY=xxx
export OPENROUTER_API_KEY=xxx

RAM-Based Model Restrictions

For local models (Ollama), Niko estimates the maximum model size your system can handle:

| System RAM | Max Model Size |
| --- | --- |
| 8 GB | ~4B parameters |
| 16 GB | ~12B parameters |
| 32 GB | ~28B parameters |
| 64 GB | ~60B parameters |

Models exceeding your RAM limit are hidden from the selection list. You can still force-select them with a confirmation prompt.
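
The cutoffs in the table follow a simple pattern (max parameters ≈ RAM minus roughly 4 GB of headroom for the OS and inference overhead). The heuristic below reproduces the table, but the exact formula is an inference from those numbers, not Niko's documented rule.

```python
def max_model_params_b(ram_gb, reserved_gb=4):
    """Estimate the largest model size (billions of parameters) that
    fits in RAM. Matches the table: 8 GB -> 4B, 16 GB -> 12B,
    32 GB -> 28B, 64 GB -> 60B."""
    return max(ram_gb - reserved_gb, 0)

def selectable(models, ram_gb):
    """Hide models whose parameter count exceeds the RAM-based limit."""
    limit = max_model_params_b(ram_gb)
    return [name for name, params_b in models if params_b <= limit]

catalog = [("qwen2.5-coder:7b", 7), ("llama3.1:70b", 70), ("phi3:3.8b", 3.8)]
print(selectable(catalog, ram_gb=16))  # ['qwen2.5-coder:7b', 'phi3:3.8b']
```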


Config File

All settings are stored in ~/.niko/config.yaml. The file uses a dynamic structure — providers are a map, so you can add as many as you want:

active_provider: openai
providers:
  ollama:
    kind: ollama
    base_url: http://127.0.0.1:11434
    model: qwen2.5-coder:7b
  openai:
    kind: openai_compat
    api_key: sk-xxx
    base_url: https://api.openai.com/v1
    model: gpt-4o
  claude:
    kind: anthropic
    api_key: sk-ant-xxx
    model: claude-sonnet-4-20250514

Uninstall

rm $(which niko)
rm -rf ~/.niko

License

MIT
