| Document | Description |
|---|---|
| CORE_WORKFLOW.md | Core workflow and cost-saving mechanisms |
| IMPLEMENTATION.md | Implementation plan and architecture |
| TESTING.md | Testing, benchmarking, and CI guide |
```toml
# Cargo.toml
[dependencies]
lightrag-rs = { git = "https://github.com/ruoliu2/lightrag-rs.git", features = ["openrouter"] }
```

LightRAG requires an LLM for keyword extraction. Configure OpenRouter:
```sh
# Required
export OPENROUTER_API_KEY="your-api-key"

# Optional (default: google/gemma-2-9b-it:free)
export OPENROUTER_MODEL="google/gemma-2-9b-it:free"
```

Available free models:

- `google/gemma-2-9b-it:free` (default)
- `meta-llama/llama-3.2-3b-instruct:free`
- `mistralai/mistral-7b-instruct:free`
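lightrag-rs reads these variables itself; as a standalone illustration of the documented fallback behavior, the snippet below shows the same pattern in plain Rust. The `resolve_model` function is hypothetical, not part of the lightrag-rs API:

```rust
use std::env;

// Hypothetical helper mirroring the documented behavior: use
// OPENROUTER_MODEL if set, otherwise fall back to the default model.
fn resolve_model() -> String {
    env::var("OPENROUTER_MODEL")
        .unwrap_or_else(|_| "google/gemma-2-9b-it:free".to_string())
}

fn main() {
    println!("model: {}", resolve_model());
}
```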
```rust
use lightrag_rs::{LightRAG, QueryMode, Result};

#[tokio::main]
async fn main() -> Result<()> {
    // Create instance
    let mut rag = LightRAG::new("./my_storage").await?;

    // Configure LLM (required for most query modes)
    rag.with_openrouter()?;

    // Insert documents
    rag.insert("Alice and Bob worked on quantum computing at MIT.").await?;

    // Query (uses LLM for keyword extraction)
    let result = rag.query("Who worked on quantum computing?").await?;
    println!("{}", result.response);

    // Use a specific mode
    let local = rag.query_with_mode("Who is Alice?", QueryMode::Local).await?;
    let global = rag.query_with_mode("What research is happening?", QueryMode::Global).await?;
    println!("{}", local.response);
    println!("{}", global.response);

    Ok(())
}
```

| Mode | Description | Requires LLM |
|---|---|---|
| Hybrid | Combined dual-level (default) | Yes |
| Local | Entity-focused retrieval | Yes |
| Global | Theme-focused retrieval | Yes |
| Mix | Knowledge graph + vector fusion | Yes |
| Naive | Simple vector similarity | No |
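The mode/LLM relationship in the table above can be encoded in a few lines. This is a self-contained sketch with a stand-in enum, not the real `lightrag_rs::QueryMode`:

```rust
// Stand-in enum mirroring the query modes in the table above.
#[derive(Clone, Copy, Debug)]
enum QueryMode {
    Hybrid,
    Local,
    Global,
    Mix,
    Naive,
}

// Only Naive (plain vector similarity) works without an LLM.
fn requires_llm(mode: QueryMode) -> bool {
    !matches!(mode, QueryMode::Naive)
}

fn main() {
    for mode in [
        QueryMode::Hybrid,
        QueryMode::Local,
        QueryMode::Global,
        QueryMode::Mix,
        QueryMode::Naive,
    ] {
        println!("{:?}: requires LLM = {}", mode, requires_llm(mode));
    }
}
```

In practice this means `QueryMode::Naive` is the only mode usable without `OPENROUTER_API_KEY` configured.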
```sh
# Basic example
cargo run --example insert_and_query

# With a custom document
cargo run --example insert_and_query -- path/to/document.txt

# With a different model
OPENROUTER_MODEL="mistralai/mistral-7b-instruct:free" cargo run --example insert_and_query
```

```sh
# Unit tests
cargo test

# Integration tests
cargo test --test integration

# All tests with output
cargo test -- --nocapture
```

LightRAG uses graphrag-core's KeywordExtractor for LLM-based dual-level keyword extraction, then searches local indices:
1. Keyword Extraction (LLM): Query → high-level + low-level keywords
2. Dual-Level Search: Keywords → search high-level (themes) + low-level (entities) indices
3. Result Merging: Combine and deduplicate results
4. Answer Generation (LLM): Context → generated response
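The search-and-merge steps above can be sketched in plain Rust. The chunk IDs and slice-based "indices" here are invented for illustration; the real lightrag-rs indices have different types:

```rust
use std::collections::HashSet;

// Illustrative sketch of dual-level search merging: combine hits from the
// high-level (theme) and low-level (entity) indices, dropping duplicates
// while preserving first-seen order.
fn merge_dedup(high_level_hits: &[&str], low_level_hits: &[&str]) -> Vec<String> {
    let mut seen = HashSet::new();
    let mut merged = Vec::new();
    for hit in high_level_hits.iter().chain(low_level_hits.iter()) {
        if seen.insert(*hit) {
            merged.push((*hit).to_string());
        }
    }
    merged
}

fn main() {
    // Hypothetical chunk IDs; both levels matched "chunk-mit-research".
    let high = ["chunk-quantum-themes", "chunk-mit-research"];
    let low = ["chunk-alice-entity", "chunk-mit-research"];
    let merged = merge_dedup(&high, &low);
    println!("{:?}", merged); // the duplicate chunk appears only once
}
```

The merged, deduplicated context is then what gets handed to the LLM for answer generation in step 4.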
LightRAG achieves significant token reduction through:
- Keyword Extraction: ~20 keywords instead of full document embedding
- Dual-Level Retrieval: Targeted high/low level search
- Efficient Prompts: Optimized LLM prompts for extraction
See CORE_WORKFLOW.md for detailed analysis.
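As a back-of-the-envelope illustration of the keyword-extraction saving, consider sending ~20 keywords instead of a whole document. All numbers below are invented for illustration; real counts depend on the tokenizer and documents:

```rust
// Percentage of tokens saved by sending keywords instead of the full text.
fn keyword_reduction_pct(doc_tokens: f64, keyword_tokens: f64) -> f64 {
    100.0 * (1.0 - keyword_tokens / doc_tokens)
}

fn main() {
    // Hypothetical: a 5,000-token document vs ~20 keywords at ~3 tokens each.
    let pct = keyword_reduction_pct(5_000.0, 20.0 * 3.0);
    println!("token reduction: {:.1}%", pct);
}
```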