
# LightRAG-rs Documentation

## Documentation Overview

| Document | Description |
|---|---|
| CORE_WORKFLOW.md | Core workflow and cost-saving mechanisms |
| IMPLEMENTATION.md | Implementation plan and architecture |
| TESTING.md | Testing, benchmarking, and CI guide |

## Quick Reference

### Installation

```toml
# Cargo.toml
[dependencies]
lightrag-rs = { git = "https://github.com/ruoliu2/lightrag-rs.git", features = ["openrouter"] }
```

### LLM Configuration

LightRAG requires an LLM for keyword extraction. Configure OpenRouter:

```bash
# Required
export OPENROUTER_API_KEY="your-api-key"

# Optional (default: google/gemma-2-9b-it:free)
export OPENROUTER_MODEL="google/gemma-2-9b-it:free"
```

Available free models:

- `google/gemma-2-9b-it:free` (default)
- `meta-llama/llama-3.2-3b-instruct:free`
- `mistralai/mistral-7b-instruct:free`
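A minimal sketch of how the fallback for `OPENROUTER_MODEL` can be resolved from the environment. The helper name `resolve_model` is ours for illustration, not part of the lightrag-rs API:

```rust
use std::env;

// Illustrative helper (not the crate's API): resolve the model name from
// OPENROUTER_MODEL, falling back to the documented default.
fn resolve_model(env_value: Option<String>) -> String {
    env_value.unwrap_or_else(|| "google/gemma-2-9b-it:free".to_string())
}

fn main() {
    // OPENROUTER_API_KEY is required; here we only check that it is present.
    if env::var("OPENROUTER_API_KEY").is_err() {
        eprintln!("OPENROUTER_API_KEY is not set");
    }
    let model = resolve_model(env::var("OPENROUTER_MODEL").ok());
    println!("using model: {model}");
}
```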

### Basic Usage

```rust
use lightrag_rs::{LightRAG, QueryMode, Result};

#[tokio::main]
async fn main() -> Result<()> {
    // Create instance
    let mut rag = LightRAG::new("./my_storage").await?;

    // Configure LLM (required for most query modes)
    rag.with_openrouter()?;

    // Insert documents
    rag.insert("Alice and Bob worked on quantum computing at MIT.").await?;

    // Query (uses LLM for keyword extraction)
    let result = rag.query("Who worked on quantum computing?").await?;
    println!("{}", result.response);

    // Use specific mode
    let local = rag.query_with_mode("Who is Alice?", QueryMode::Local).await?;
    let global = rag.query_with_mode("What research is happening?", QueryMode::Global).await?;

    Ok(())
}
```

### Query Modes

| Mode | Description | Requires LLM |
|---|---|---|
| Hybrid | Combined dual-level (default) | Yes |
| Local | Entity-focused retrieval | Yes |
| Global | Theme-focused retrieval | Yes |
| Mix | Knowledge graph + vector fusion | Yes |
| Naive | Simple vector similarity | No |
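The table can be summarized in code. `QueryMode` is a real lightrag-rs type, but the standalone copy below is only an illustration of which modes need an LLM; the `requires_llm` helper is ours:

```rust
// Illustrative standalone copy of the mode table above; only Naive works
// without an LLM, since it skips keyword extraction entirely.
#[derive(Debug, Clone, Copy)]
enum QueryMode {
    Hybrid, // combined dual-level (default)
    Local,  // entity-focused
    Global, // theme-focused
    Mix,    // knowledge graph + vector fusion
    Naive,  // simple vector similarity
}

impl QueryMode {
    fn requires_llm(self) -> bool {
        !matches!(self, QueryMode::Naive)
    }
}

fn main() {
    for mode in [QueryMode::Hybrid, QueryMode::Naive] {
        println!("{mode:?} requires LLM: {}", mode.requires_llm());
    }
}
```

This is why `Naive` mode is the only one usable without an `OPENROUTER_API_KEY`.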

### Running Examples

```bash
# Basic example
cargo run --example insert_and_query

# With a custom document
cargo run --example insert_and_query -- path/to/document.txt

# With a different model
OPENROUTER_MODEL="mistralai/mistral-7b-instruct:free" cargo run --example insert_and_query
```

### Running Tests

```bash
# Unit tests
cargo test

# Integration tests
cargo test --test integration

# All tests with output
cargo test -- --nocapture
```

## Architecture

LightRAG uses graphrag-core's `KeywordExtractor` for LLM-based dual-level keyword extraction, then searches local indices:

1. **Keyword Extraction (LLM)**: Query → high-level + low-level keywords
2. **Dual-Level Search**: Keywords → search high-level (themes) + low-level (entities) indices
3. **Result Merging**: Combine and deduplicate results
4. **Answer Generation (LLM)**: Context → generated response
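Steps 2 and 3 can be sketched in self-contained Rust. This is not the crate's actual internals, just an illustration of dual-level lookup followed by order-preserving deduplication:

```rust
use std::collections::{BTreeSet, HashMap};

// Sketch of dual-level search + result merging: look up keywords in a
// high-level (themes) index and a low-level (entities) index, then merge
// the hits, keeping only the first occurrence of each document.
fn dual_level_search(
    high: &HashMap<&str, Vec<&'static str>>,
    low: &HashMap<&str, Vec<&'static str>>,
    high_keywords: &[&str],
    low_keywords: &[&str],
) -> Vec<&'static str> {
    let mut seen = BTreeSet::new();
    let mut results = Vec::new();
    let hits = high_keywords.iter().filter_map(|k| high.get(k))
        .chain(low_keywords.iter().filter_map(|k| low.get(k)));
    for docs in hits {
        for &doc in docs {
            if seen.insert(doc) {
                results.push(doc); // deduplicate across both levels
            }
        }
    }
    results
}

fn main() {
    let high = HashMap::from([("quantum computing", vec!["doc1", "doc2"])]);
    let low = HashMap::from([("Alice", vec!["doc1"]), ("MIT", vec!["doc3"])]);
    let merged = dual_level_search(&high, &low, &["quantum computing"], &["Alice", "MIT"]);
    println!("{merged:?}"); // doc1 is returned once despite matching both levels
}
```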

## Cost Savings

LightRAG achieves significant token reduction through:

1. **Keyword Extraction**: ~20 keywords instead of full document embedding
2. **Dual-Level Retrieval**: Targeted high- and low-level search
3. **Efficient Prompts**: Optimized LLM prompts for extraction
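The arithmetic behind point 1 is straightforward. The 4,000-token document below is a hypothetical size chosen for illustration; only the ~20-keyword figure comes from the list above:

```rust
// Illustrative arithmetic only: compares ~20 extracted keywords against a
// hypothetical 4,000-token document sent to the LLM.
fn token_reduction(document_tokens: f64, keyword_tokens: f64) -> f64 {
    1.0 - keyword_tokens / document_tokens
}

fn main() {
    let r = token_reduction(4_000.0, 20.0);
    println!("token reduction: {:.1}%", r * 100.0); // prints "token reduction: 99.5%"
}
```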

See CORE_WORKFLOW.md for detailed analysis.

## About

Simple and fast Retrieval-Augmented Generation with dual-level retrieval: a Rust implementation of LightRAG.
