🚀 Robolog - AI-Powered Log Monitoring

Intelligent log monitoring with AI-powered analysis and Discord notifications

Robolog automatically monitors your Linux system logs, detects critical issues, and sends intelligent summaries to Discord (or another webhook service) using AI analysis powered by Ollama and Gemma 3n (the default model).

📦 Quick Installation (Linux)

🚀 Native Installation (Recommended - No Docker Required)

To include the optional Next.js web dashboard (requires Nginx):

# Install with the optional Next.js web dashboard (requires Nginx)
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --with-dashboard

Installation Options:

# Standard installation (prompts for AI model and language selection)
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash

# Skip AI model download (faster, download later)  
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --skip-model

# Auto-download specific model with language preference
curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install-native.sh | sudo bash -s -- --yes --model gemma3n:e2b --language English --platform slack

# Available options:
# --model gemma3n:e2b  (5.6GB) - Google Gemma 3n [default]
# --model qwen3:8b     (5.2GB) - Alibaba Qwen 3 with thinking mode
# --model llama3.2:1b  (1.3GB) - Meta LLaMA (fastest)
# --model phi3:mini    (2.3GB) - Microsoft Phi-3 (balanced)
# --language English   - Default language for AI responses
# --language Spanish   - Responses in Spanish (Español)
# --language French    - Responses in French (Français)
# --language German    - Responses in German (Deutsch)
# --language Chinese   - Responses in Chinese (中文)
# --language Japanese  - Responses in Japanese (日本語)
# --language Portuguese - Responses in Portuguese (Português)
# --language Russian   - Responses in Russian (Русский)
# --language Italian   - Responses in Italian (Italiano)
# --platform discord   - Discord webhooks [default]
# --platform slack     - Slack incoming webhooks
# --platform teams     - Microsoft Teams connectors
# --platform telegram  - Telegram bot API
# --platform mattermost - Mattermost incoming webhooks
# --platform rocketchat - Rocket.Chat integrations
# --platform generic   - Generic JSON webhook endpoint
# ... and many more languages supported

Benefits:

  • ✅ No Docker dependency (lighter footprint)
  • ✅ Better performance (no container overhead)
  • ✅ Direct system integration with systemd
  • ✅ Lower resource usage (~500MB vs ~2GB with Docker)
  • ✅ Multiple AI model options (Gemma 3n [default], Qwen 3, LLaMA 3.2, Phi-3)
  • ✅ Multilingual support (English, Spanish, French, German, Chinese, Japanese, and more)
  • ✅ Optional AI model download (5.6GB Gemma 3n or smaller alternatives)

๐Ÿณ Docker Installation

For a demo using pre-installed Docker Desktop, see the "Docker Compose Quick Start" section at the bottom, which uses docker-compose.yml.

curl -fsSL https://raw.githubusercontent.com/Hilo-Inc/robolog/main/install.sh | sudo bash

Benefits:

  • ✅ Consistent environment across systems
  • ✅ Easy to containerize and scale
  • ✅ Isolated from host system
  • ✅ Manual configuration for model and language preferences

Manual Installation

# Clone the repository
git clone https://github.com/Hilo-Inc/robolog.git
cd robolog

# Choose your installation method:
# Native (recommended):
chmod +x install-native.sh
sudo ./install-native.sh

# OR Docker:
chmod +x install.sh
sudo ./install.sh

# Configure your Discord webhook
robolog config

# Start the service
robolog start

Using Make (Development)

# Clone and setup
git clone https://github.com/Hilo-Inc/robolog.git
cd robolog

# Setup development environment
make dev-setup

# Start services
make start

# Test the system
make test-errors

🆚 Installation Comparison

| Feature            | Native Installation          | Docker Installation    |
|--------------------|------------------------------|------------------------|
| Dependencies       | Node.js, Fluent Bit, Ollama  | Docker, Docker Compose |
| Resource Usage     | ~500MB RAM                   | ~2GB RAM               |
| Performance        | Direct execution             | Container overhead     |
| System Integration | Full systemd integration     | Limited integration    |
| Isolation          | Shared with host             | Containerized          |
| Updates            | Component-based              | Image-based            |
| AI Model Options   | Interactive selection        | Manual configuration   |
| Language Support   | Interactive selection        | Manual configuration   |
| Installation Time  | 2-15 min (depends on model)  | 5-20 min               |
| Best For           | Production servers, VPS      | Development, K8s       |

๐Ÿ› ๏ธ Management Commands

After installation, use these commands to manage Robolog:

# Service control
robolog start          # Start all services
robolog stop           # Stop all services
robolog restart        # Restart all services
robolog status         # Show service status

# Monitoring and testing
robolog logs           # View logs from all services
robolog test-errors    # Generate realistic test errors
robolog health         # Check system health

# Configuration
robolog config         # Edit configuration file
robolog update         # Update to latest version
robolog uninstall      # Completely remove Robolog

# Model management
robolog model list         # List available AI models
robolog model pull gemma3n:e2b   # Download a specific model

# ๐Ÿ“ Configuration includes:
# - Webhook URL and platform selection (Discord, Slack, Teams, Telegram, Mattermost, Rocket.Chat, Generic)
# - AI model selection (gemma3n:e2b [default], qwen3:8b, llama3.2:1b, phi3:mini)
# - Language preference (English, Spanish, French, German, Chinese, Japanese, etc.)
# - Polling interval and other settings

🔧 Configuration

Edit the configuration file:

robolog config

Add your webhook URL and configure platform:

# Webhook Platform Configuration
WEBHOOK_PLATFORM=discord  # Options: discord, slack, teams, telegram, mattermost, rocketchat, generic
WEBHOOK_URL=https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN

# Platform-specific examples:
# Discord: https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN
# Slack: https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
# Teams: https://outlook.office.com/webhook/YOUR_TEAMS_WEBHOOK_URL
# Telegram: https://api.telegram.org/bot<TOKEN>/sendMessage?chat_id=<CHAT_ID>
# Mattermost: https://your-mattermost.com/hooks/YOUR_WEBHOOK_ID
# Rocket.Chat: https://your-rocketchat.com/hooks/YOUR_WEBHOOK_ID
# Generic: Any HTTP endpoint that accepts JSON POST requests

# Set your preferred language for AI responses
LANGUAGE=English  # Options: English, Spanish, French, German, Chinese, Japanese, Portuguese, Russian, Italian, etc.

# AI model selection (Gemma 3n is recommended for best quality)
MODEL_NAME=gemma3n:e2b  # Options: gemma3n:e2b [default], qwen3:8b, llama3.2:1b, phi3:mini
# Note: Gemma models are subject to Google's Terms of Use: https://ai.google.dev/gemma/terms
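
If you choose the generic platform, any HTTP endpoint that accepts JSON POST requests can serve as WEBHOOK_URL. The snippet below is a minimal, hypothetical local receiver (not part of Robolog) that simply prints whatever JSON arrives, which is handy for inspecting notifications before wiring up a real platform; the exact payload fields depend on your configuration.

```typescript
// webhook-debug.ts - hypothetical local sink for testing WEBHOOK_PLATFORM=generic.
// It only logs incoming JSON bodies; it is not part of Robolog itself.
import { createServer } from "node:http";

const PORT = 8081; // then set WEBHOOK_URL=http://localhost:8081/robolog

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    try {
      // Pretty-print the notification payload so its structure is visible.
      console.log(`[${new Date().toISOString()}] ${req.method} ${req.url}`);
      console.log(JSON.stringify(JSON.parse(body), null, 2));
      res.writeHead(204).end();
    } catch {
      res.writeHead(400).end("expected a JSON body");
    }
  });
}).listen(PORT, () => console.log(`Listening on http://localhost:${PORT}`));
```

Run it (for example with npx tsx webhook-debug.ts), point WEBHOOK_URL at it, and trigger robolog test-errors to see the raw payload.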

🧪 Testing

Generate realistic test errors to verify the system:

robolog test-errors

This creates:

  • Nginx errors (502 Bad Gateway)
  • System errors (disk space critical)
  • Database errors (connection failures)
  • Memory warnings (high usage alerts)

Check your webhook platform within 60 seconds for the AI-powered analysis in your configured language!

📊 Features

  • 🤖 AI-Powered Analysis: Uses Ollama with multiple model options (Gemma 3n [default], Qwen 3, LLaMA 3.2, Phi-3)
  • 🌍 Multilingual Support: Receive notifications in your preferred language (English, Spanish, French, German, Chinese, Japanese, and more)
  • 📱 Multi-Platform Webhooks: Supports Discord, Slack, Microsoft Teams, Telegram, Mattermost, Rocket.Chat, and generic webhooks
  • 🔍 Multi-Level Filtering: Automatically categorizes by severity (CRITICAL, ERROR, WARNING); see the sketch after this list
  • 🏗️ Multi-Application Support: Monitors nginx, system, database, and application logs
  • ⚡ Real-time Processing: Processes logs as they're generated
  • 🔄 Auto-restart: Resilient service management with systemd
  • 🛡️ Resource Protection: Built-in safeguards against log file overflow

๐Ÿ—๏ธ Architecture

Native Installation

System Logs → Fluent Bit → Analyzer (Node.js) → Ollama (AI) → Webhook Platform
     ↓
/var/log/* → systemd → /opt/robolog/logs/all.log → AI Analysis → Notifications

Docker Installation

Container Logs → Docker Logging → Fluent Bit → Analyzer → Ollama (AI) → Webhook Platform
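
The Analyzer → Ollama step in both flows can be pictured roughly as below. This is a hedged sketch, not the project's analyzer code: it assumes Ollama's standard local REST API on port 11434 and uses an illustrative prompt; the real prompt, batching, and error handling will differ.

```typescript
// analyze-logs.ts - illustrative sketch of asking a local Ollama instance to
// summarize filtered log lines. Not Robolog's actual analyzer code.
const OLLAMA_URL = process.env.OLLAMA_URL ?? "http://localhost:11434";
const MODEL_NAME = process.env.MODEL_NAME ?? "gemma3n:e2b";
const LANGUAGE = process.env.LANGUAGE ?? "English";

export async function summarizeLogs(logLines: string[]): Promise<string> {
  const prompt =
    `You are a sysadmin assistant. Summarize the following log excerpts, ` +
    `list probable root causes, and suggest fixes. Respond in ${LANGUAGE}.\n\n` +
    logLines.join("\n");

  // Ollama's /api/generate endpoint returns {"response": "..."} when stream is false.
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: MODEL_NAME, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

The returned summary string is what ultimately gets posted to the configured webhook platform.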

๐Ÿณ Docker Compose Quick Start

The fastest way to try Robolog is by running all components with Docker Compose:

  1. Clone the repo:

    git clone https://github.com/Hilo-Inc/robolog.git
    cd robolog
  2. Configure environment variables: Copy .env.example to .env and edit your webhook details:

    cp .env.example .env
    # Edit .env with your WEBHOOK_URL and desired settings
    nano .env
    • Set WEBHOOK_URL to your Discord/Slack/Teams/etc. webhook.
    • (Optional) Adjust MODEL_NAME, LANGUAGE, and WEBHOOK_PLATFORM as needed.
  3. Build all containers:

    docker compose build
  4. Start all services:

    docker compose up -d
  5. Test your setup: Trigger a test alert by going to https://localhost/ in your browser or:

    docker compose exec analyzer node /app/test-errors.js

    Or check your webhook platform for real notifications within a minute.

  6. Stop the system:

    docker compose down

Default exposed ports:

  • Robolog (Nginx/web): 80
  • Ollama (AI backend): 11434
  • Fluent Bit (logs): 24224

Log files are stored in a shared Docker volume. You can inspect them with:

docker compose exec fluent-bit tail -f /logs/all.log

Tips:

  • To view logs for a specific container: docker compose logs -f analyzer
  • To force re-pull latest base images: docker compose pull && docker compose build --no-cache

Example .env:

WEBHOOK_URL=https://discord.com/api/webhooks/XXX/YYY
MODEL_NAME=gemma3n:e2b
LANGUAGE=English
WEBHOOK_PLATFORM=discord

Why use Docker Compose?

  • No manual installs; everything is containerized!
  • Easily run, update, or stop all Robolog components.
  • Suitable for dev, demo, or cloud deployments.

Components:

  • Fluent Bit: Collects and centralizes logs (system logs for native, container logs for Docker)
  • Analyzer: Node.js service that filters, structures, and analyzes logs
  • Ollama: Local AI model serving (Gemma 3n [default], Qwen 3, LLaMA 3.2, or Phi-3) for intelligent analysis
  • Webhook Platform: Multi-platform notification delivery (Discord, Slack, Teams, Telegram, etc.) with structured summaries and recommendations; a payload sketch follows below

Note: Gemma models are open models from Google DeepMind. Usage is subject to the Gemma Terms of Use. View all Gemma models on Hugging Face.
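
Different platforms expect different JSON shapes for incoming webhooks. As an illustration based on the platforms' publicly documented webhook formats (not on Robolog's internal dispatcher), the AI summary might be wrapped like this before delivery:

```typescript
// post-notification.ts - illustrative payload shapes for a few webhook platforms,
// based on their public incoming-webhook formats; Robolog's own dispatcher may differ.
type Platform = "discord" | "slack" | "generic";

function buildPayload(platform: Platform, summary: string): object {
  switch (platform) {
    case "discord":
      return { content: summary };           // Discord webhooks expect "content"
    case "slack":
      return { text: summary };              // Slack incoming webhooks expect "text"
    case "generic":
    default:
      return { summary, source: "robolog" }; // arbitrary JSON for generic endpoints
  }
}

export async function notify(webhookUrl: string, platform: Platform, summary: string) {
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildPayload(platform, summary)),
  });
  if (!res.ok) throw new Error(`Webhook delivery failed: ${res.status}`);
}
```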

🔄 Supported Linux Distributions

  • Ubuntu 20.04+ / Debian 11+
  • CentOS 7+ / RHEL 7+
  • Fedora 35+
  • Arch Linux
