Intelligent log monitoring with AI-powered analysis and Discord notifications
Robolog automatically monitors your Linux system logs, detects critical issues, and sends intelligent summaries to Discord (or another webhook platform) using AI analysis powered by Ollama and Gemma 3n (the default model).
An optional Next.js web dashboard is available (requires Nginx):
# Install with the optional Next.js web dashboard (requires Nginx)
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --with-dashboard

Installation Options:
# Standard installation (prompts for AI model and language selection)
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash
# Skip AI model download (faster, download later)
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --skip-model
# Auto-download specific model with language preference
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --yes --model gemma3n:e2b --language English --platform slack
# Available options:
# --model gemma3n:e2b (5.6GB) - Google Gemma 3n [default]
# --model qwen3:8b (5.2GB) - Alibaba Qwen 3 with thinking mode
# --model llama3.2:1b (1.3GB) - Meta LLaMA (fastest)
# --model phi3:mini (2.3GB) - Microsoft Phi-3 (balanced)
# --language English - Default language for AI responses
# --language Spanish - Responses in Spanish (Español)
# --language French - Responses in French (Français)
# --language German - Responses in German (Deutsch)
# --language Chinese - Responses in Chinese (中文)
# --language Japanese - Responses in Japanese (日本語)
# --language Portuguese - Responses in Portuguese (Português)
# --language Russian - Responses in Russian (Русский)
# --language Italian - Responses in Italian (Italiano)
# --platform discord - Discord webhooks [default]
# --platform slack - Slack incoming webhooks
# --platform teams - Microsoft Teams connectors
# --platform telegram - Telegram bot API
# --platform mattermost - Mattermost incoming webhooks
# --platform rocketchat - Rocket.Chat integrations
# --platform generic - Generic JSON webhook endpoint
# ... and many more languages supported

Benefits:
- ✅ No Docker dependency (lighter footprint)
- ✅ Better performance (no container overhead)
- ✅ Direct system integration with systemd
- ✅ Lower resource usage (~500MB vs ~2GB with Docker)
- ✅ Multiple AI model options (Gemma 3n [default], Qwen 3, LLaMA 3.2, Phi-3)
- ✅ Multilingual support (English, Spanish, French, German, Chinese, Japanese, and more)
- ✅ Optional AI model download (5.6GB Gemma 3n or smaller alternatives)
For a demo using pre-installed Docker [Desktop], see the "Docker Quickstart" section at the bottom, which uses a docker-compose.yml.
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install.sh | sudo bash

Benefits:
- ✅ Consistent environment across systems
- ✅ Easy to containerize and scale
- ✅ Isolated from host system
- ✅ Manual configuration for model and language preferences
# Clone the repository
git clone https://github.com/Hilo-Inc/robolog.git
cd robolog
# Choose your installation method:
# Native (recommended):
chmod +x install-native.sh
sudo ./install-native.sh
# OR Docker:
chmod +x install.sh
sudo ./install.sh
# Configure your Discord webhook
robolog config
# Start the service
robolog start

# Clone and setup
git clone https://github.com/Hilo-Inc/robolog.git
cd robolog
# Setup development environment
make dev-setup
# Start services
make start
# Test the system
make test-errors

| Feature | Native Installation | Docker Installation |
|---|---|---|
| Dependencies | Node.js, Fluent Bit, Ollama | Docker, Docker Compose |
| Resource Usage | ~500MB RAM | ~2GB RAM |
| Performance | Direct execution | Container overhead |
| System Integration | Full systemd integration | Limited integration |
| Isolation | Shared with host | Containerized |
| Updates | Component-based | Image-based |
| AI Model Options | Interactive selection | Manual configuration |
| Language Support | Interactive selection | Manual configuration |
| Installation Time | 2-15 min (depends on model) | 5-20 min |
| Best For | Production servers, VPS | Development, K8s |
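Because the native install runs each component as a regular systemd service (per the table above), routine checks can use standard systemd tooling. A minimal sketch follows, with the caveat that the exact unit names depend on what the installer created on your host:

```bash
# List the units the installer set up (names here are not guaranteed).
systemctl list-units --type=service | grep -Ei 'robolog|ollama|fluent'

# Inspect one of them; Ollama's own installer registers a unit named "ollama".
systemctl status ollama
journalctl -u ollama --since "10 minutes ago"
```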
After installation, use these commands to manage Robolog:
# Service control
robolog start # Start all services
robolog stop # Stop all services
robolog restart # Restart all services
robolog status # Show service status
# Monitoring and testing
robolog logs # View logs from all services
robolog test-errors # Generate realistic test errors
robolog health # Check system health
# Configuration
robolog config # Edit configuration file
robolog update # Update to latest version
robolog uninstall # Completely remove Robolog
# Model management
robolog model list # List available AI models
robolog model pull gemma3n:e2b # Download a specific model
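For example, switching the analyzer to the lighter LLaMA model could look like the sketch below (commands as listed above; `MODEL_NAME` is the setting described in the configuration section):

```bash
robolog model pull llama3.2:1b   # download the ~1.3GB model
robolog config                   # set MODEL_NAME=llama3.2:1b and save
robolog restart                  # restart services so the analyzer picks it up
```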
# 📝 Configuration includes:
# - Webhook URL and platform selection (Discord, Slack, Teams, Telegram, Mattermost, Rocket.Chat, Generic)
# - AI model selection (gemma3n:e2b [default], qwen3:8b, llama3.2:1b, phi3:mini)
# - Language preference (English, Spanish, French, German, Chinese, Japanese, etc.)
# - Polling interval and other settings

Edit the configuration file:
robolog config

Add your webhook URL and configure the platform:
# Webhook Platform Configuration
WEBHOOK_PLATFORM=discord # Options: discord, slack, teams, telegram, mattermost, rocketchat, generic
WEBHOOK_URL=https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN
# Platform-specific examples:
# Discord: https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN
# Slack: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
# Teams: https://outlook.office.com/webhook/YOUR_TEAMS_WEBHOOK_URL
# Telegram: https://api.telegram.org/bot<TOKEN>/sendMessage?chat_id=<CHAT_ID>
# Mattermost: https://your-mattermost.com/hooks/YOUR_WEBHOOK_ID
# Rocket.Chat: https://your-rocketchat.com/hooks/YOUR_WEBHOOK_ID
# Generic: Any HTTP endpoint that accepts JSON POST requests
# Set your preferred language for AI responses
LANGUAGE=English # Options: English, Spanish, French, German, Chinese, Japanese, Portuguese, Russian, Italian, etc.
# AI model selection (Gemma 3n is recommended for best quality)
MODEL_NAME=gemma3n:e2b # Options: gemma3n:e2b [default], qwen3:8b, llama3.2:1b, phi3:mini
# Note: Gemma models are subject to Google's Terms of Use: https://ai.google.dev/gemma/terms

Generate realistic test errors to verify the system:
robolog test-errors

This creates:
- Nginx errors (502 Bad Gateway)
- System errors (disk space critical)
- Database errors (connection failures)
- Memory warnings (high usage alerts)
Check your webhook platform within 60 seconds for the AI-powered analysis in your configured language!
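If no notification shows up, it can help to rule out the webhook itself before debugging Robolog. A minimal sketch for Discord (which accepts a JSON body with a `content` field; other platforms expect their own payload shapes):

```bash
# Use the same URL you configured via `robolog config`.
WEBHOOK_URL="https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN"

curl -sS -X POST "$WEBHOOK_URL" \
     -H "Content-Type: application/json" \
     -d '{"content": "Robolog webhook connectivity test"}'
```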
- 🤖 AI-Powered Analysis: Uses Ollama with multiple model options (Gemma 3n [default], Qwen 3, LLaMA 3.2, Phi-3)
- 🌍 Multilingual Support: Receive notifications in your preferred language (English, Spanish, French, German, Chinese, Japanese, and more)
- 📱 Multi-Platform Webhooks: Supports Discord, Slack, Microsoft Teams, Telegram, Mattermost, Rocket.Chat, and generic webhooks
- 🔍 Multi-Level Filtering: Automatically categorizes by severity (CRITICAL, ERROR, WARNING)
- 🏗️ Multi-Application Support: Monitors nginx, system, database, and application logs
- ⚡ Real-time Processing: Processes logs as they're generated
- 🔄 Auto-restart: Resilient service management with systemd
- 🛡️ Resource Protection: Built-in safeguards against log file overflow
System Logs → Fluent Bit → Analyzer (Node.js) → Ollama (AI) → Webhook Platform
     ↓
/var/log/* → systemd → /opt/robolog/logs/all.log → AI Analysis → Notifications

Container Logs → Docker Logging → Fluent Bit → Analyzer → Ollama (AI) → Webhook Platform
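To get a feel for the final stages of this pipeline, you can replay them by hand: take a slice of the centralized log and send it to the local Ollama API yourself. This is only a sketch (the prompt wording is illustrative, not Robolog's actual prompt, and it assumes `jq` is installed):

```bash
# Grab recent lines from the centralized log used by the native install.
LOG_SNIPPET=$(tail -n 50 /opt/robolog/logs/all.log)

# Ask the local Ollama server (default port 11434) for a one-off analysis.
jq -n --arg model "gemma3n:e2b" \
      --arg prompt "Summarize the critical issues in these logs: $LOG_SNIPPET" \
      '{model: $model, prompt: $prompt, stream: false}' |
  curl -s -H "Content-Type: application/json" \
       -d @- http://localhost:11434/api/generate |
  jq -r '.response'
```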
The fastest way to try Robolog is by running all components with Docker Compose:
- Clone the repo:
  git clone https://github.com/Hilo-Inc/robolog.git
  cd robolog
- Configure environment variables: Copy `.env.example` to `.env` and edit your webhook details:
  cp .env.example .env
  # Edit .env with your WEBHOOK_URL and desired settings
  nano .env
  - Set `WEBHOOK_URL` to your Discord/Slack/Teams/etc. webhook.
  - (Optional) Adjust `MODEL_NAME`, `LANGUAGE`, and `WEBHOOK_PLATFORM` as needed.
- Build all containers:
  docker compose build
- Start all services:
  docker compose up -d
- Test your setup: Trigger a test alert by going to https://localhost/ in your browser or:
  docker compose exec analyzer node /app/test-errors.js
  Or check your webhook platform for real notifications within a minute.
- Stop the system:
  docker compose down
Default exposed ports:
- Robolog (Nginx/web): 80
- Ollama (AI backend): 11434
- Fluent Bit (logs): 24224
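With those defaults, a quick reachability check for each exposed service looks roughly like this (assumes `curl`, `jq`, and netcat are available):

```bash
curl -sI http://localhost/ | head -n 1                          # Nginx / web front end
curl -s http://localhost:11434/api/tags | jq '.models[].name'   # models Ollama has pulled
nc -z localhost 24224 && echo "Fluent Bit forward port is open" # log forwarding input
```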
Log files are stored in a shared Docker volume. You can inspect them with:
docker compose exec fluent-bit tail -f /logs/all.log

Tips:
- To view logs for a specific container: docker compose logs -f analyzer
- To force re-pull latest base images: docker compose pull && docker compose build --no-cache
Example `.env`:
WEBHOOK_URL=https://discord.com/api/webhooks/XXX/YYY
MODEL_NAME=gemma3n:e2b
LANGUAGE=English
WEBHOOK_PLATFORM=discord
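If you change `.env` while the stack is already running, the containers keep their old environment until they are recreated. One way to apply the new values to the analyzer (service name as used in the examples above):

```bash
docker compose up -d --force-recreate analyzer
```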
- No manual installs; everything is containerized!
- Easily run, update, or stop all Robolog components.
- Suitable for dev, demo, or cloud deployments.
Components:
- Fluent Bit: Collects and centralizes logs (system logs for native, container logs for Docker)
- Analyzer: Node.js service that filters, structures, and analyzes logs
- Ollama: Local AI model serving (Gemma 3n [default], Qwen 3, LLaMA 3.2, or Phi-3) for intelligent analysis
- Webhook Platform: Multi-platform notification delivery (Discord, Slack, Teams, Telegram, etc.) with structured summaries and recommendations
Note: Gemma models are open models from Google DeepMind. Usage is subject to the Gemma Terms of Use. View all Gemma models on Hugging Face.
- Ubuntu 20.04+ / Debian 11+
- CentOS 7+ / RHEL 7+
- Fedora 35+
- Arch Linux
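If you are not sure which distribution and version a host runs, it is worth checking before installing:

```bash
# Most modern distributions ship /etc/os-release.
grep -E '^(NAME|VERSION_ID)=' /etc/os-release
```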