A portable, single-file Lua library for creating AI agents with structured inputs/outputs and dynamic prompts. Inspired by Pydantic AI, designed to be a lightweight, functional drop-in dependency for building agents in Lua.
- Structured Inputs/Outputs: JSON schema validation for reliable, type-safe agent outputs using a tool-based approach
- Streaming Support: Receive incremental responses with real-time callbacks for content and tool calls
- Streaming + Structured Output: Get validated structured responses while streaming (unique provider-independent approach)
- Dynamic Prompts: System prompts can be static strings or functions that adapt based on runtime context
- Tool/Function Calling: Define tools that agents can call to interact with external systems
- OpenAI-Compatible API: Works with OpenAI, Ollama, Together AI, and other compatible providers
- Dependency Injection: Type-safe pattern for passing runtime dependencies to agents and tools
- Portable: Entire library in a single Lua file, easy to vendor or distribute
- Low Complexity: Clean, readable code with comprehensive tests
Simply copy luagent.lua into your project. The library is self-contained in a single file.
For full functionality, install these optional dependencies:
# JSON library (pick one, dkjson recommended)
luarocks install dkjson
# HTTP library (pick one)
luarocks install lua-requests
# OR
luarocks install luasocket luasec
The library will work with any of these JSON/HTTP libraries, or fall back to basic implementations if none are available.
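The optional-dependency fallback can be pictured with the standard pcall-and-require pattern; this is an illustrative sketch of the idea, not the library's literal code (the helper name try_require is an assumption):

```lua
-- Sketch of the optional-dependency pattern: try each candidate library
-- in order and return the first one that loads, or nil if none do.
local function try_require(...)
  for _, name in ipairs({ ... }) do
    local ok, mod = pcall(require, name)
    if ok then
      return mod, name
    end
  end
  return nil
end

-- e.g. prefer dkjson, fall back to cjson if it is installed instead
local json = try_require("dkjson", "cjson")
```

Because pcall catches the error require raises for a missing module, the program keeps running and can select a fallback instead of crashing at load time.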
local luagent = require('luagent')
-- Create a simple agent
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = "You are a helpful assistant."
})
-- Run it
local result = agent:run("What is the capital of France?")
print(result.data) -- "The capital of France is Paris."
In the examples directory, see examples.lua for basic examples, and weather_agent.lua for a weather agent demo.
eval "$(luarocks path)" && lua examples/examples.lua
A complete weather agent that demonstrates tool chaining, dependency injection, and structured outputs. This example mirrors the Pydantic AI weather agent and shows how to build a multi-tool agent that works with local LLMs.
# Run the weather agent with your local llama.cpp server
eval "$(luarocks path)" && lua examples/weather_agent.lua
See examples/README.md for detailed documentation.
Get type-safe, validated responses using JSON schemas:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = "You analyze sentiment of text.",
output_schema = {
type = "object",
properties = {
sentiment = { type = "string", enum = {"positive", "negative", "neutral"} },
confidence = { type = "number" },
reasoning = { type = "string" }
},
required = {"sentiment", "confidence", "reasoning"}
}
})
local result = agent:run("I love this product!")
-- Access structured data
print(result.data.sentiment) -- "positive"
print(result.data.confidence) -- 0.95
print(result.data.reasoning) -- "The phrase 'I love' indicates strong positive sentiment"
How it works: luagent uses a tool-based approach for structured outputs, inspired by Pydantic AI. When you provide an output_schema, the library automatically registers a special final_answer tool with your schema as its parameters. The model calls this tool when ready to return structured data.
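Conceptually, the auto-registered tool looks like an ordinary tool definition whose parameters are your schema. This is a hedged sketch of that internal mechanism, using the tool table format documented below (the description string and func body are assumptions; only the final_answer name and the schema-as-parameters wiring come from the text above):

```lua
local output_schema = {
  type = "object",
  properties = {
    sentiment = { type = "string", enum = { "positive", "negative", "neutral" } },
    confidence = { type = "number" },
  },
  required = { "sentiment", "confidence" },
}

-- Roughly what luagent registers on your behalf when output_schema is set:
local final_answer_tool = {
  description = "Return the final structured answer",
  parameters = output_schema, -- your schema becomes the tool's parameters
  func = function(ctx, args)
    return args -- the validated arguments become result.data
  end,
}
```

Because the schema travels as tool parameters, any provider that supports function calling can produce structured output, with no provider-specific "JSON mode" required.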
Benefits:
- ✅ Streaming compatible: Tool calls can be streamed, so structured outputs work with stream = true
- ✅ Provider-independent: Works with any model that supports tool calling (OpenAI, Ollama, Together AI, etc.)
- ✅ Mix with regular tools: Use other tools alongside structured output in the same agent
Adapt agent behavior based on runtime context:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = function(ctx)
return string.format(
"You are a %s assistant with expertise in %s.",
ctx.deps.personality,
ctx.deps.expertise
)
end
})
-- Different behavior based on dependencies
local result = agent:run("Explain quantum computing", {
deps = { personality = "enthusiastic", expertise = "physics" }
})
Give your agent abilities by defining tools:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = "You are a weather assistant.",
tools = {
get_weather = {
description = "Get the current weather for a city",
parameters = {
type = "object",
properties = {
city = { type = "string", description = "The city name" }
},
required = {"city"}
},
func = function(ctx, args)
-- Your weather API logic here
return {
temperature = 72,
condition = "sunny",
city = args.city
}
end
}
}
})
local result = agent:run("What's the weather in San Francisco?")
-- Agent automatically calls the get_weather tool and uses the result
Receive incremental responses as they're generated:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = "You are a helpful assistant."
})
-- Stream the response
local result = agent:run("Write a haiku about Lua", {
stream = true,
on_chunk = function(chunk_type, data)
if chunk_type == "content" then
-- Print each piece of text as it arrives
io.write(data.content)
io.flush()
elseif chunk_type == "tool_call_start" then
print("\n[Tool call: " .. data.id .. "]")
elseif chunk_type == "tool_call_delta" then
-- Show incremental tool arguments
io.write(data.arguments)
io.flush()
elseif chunk_type == "tool_call_end" then
print("\n[Tool completed: " .. data.tool_call["function"].name .. "]")
end
end
})
-- result.data contains the complete accumulated response
print("\n\nComplete response:", result.data)
Streaming works with tool calling, structured outputs, and the entire agent loop. See examples/streaming_example.lua for more examples, including streaming with structured outputs.
Pass runtime dependencies to tools safely:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
tools = {
query_database = {
description = "Query the database",
parameters = { type = "object", properties = {} },
func = function(ctx, args)
-- Access dependencies through context
local db = ctx.deps.database
local user = ctx.deps.current_user
-- Use them in your logic
return db:query("SELECT * FROM orders WHERE user_id = ?", user.id)
end
}
}
})
-- Inject dependencies at runtime
local result = agent:run("Show my recent orders", {
deps = {
database = my_db_connection,
current_user = { id = 123, name = "Alice" }
}
})
Maintain context across multiple turns:
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
system_prompt = "You are a helpful tutor."
})
-- First message
local result1 = agent:run("What is a prime number?")
-- Continue conversation with history
local result2 = agent:run("Can you give me an example?", {
message_history = {
{ role = "user", content = "What is a prime number?" },
{ role = "assistant", content = result1.data }
}
})
Create a new agent.
Parameters:
- config.model (string, required): Model identifier (e.g., "gpt-4", "gpt-4o-mini")
- config.system_prompt (string|function, optional): Static string or function returning a prompt
- config.output_schema (table, optional): JSON schema for structured output validation
- config.tools (table, optional): Map of tool name to tool configuration
- config.base_url (string, optional): API base URL (default: "https://api.openai.com/v1")
- config.api_key (string, optional): API key (default: OPENAI_API_KEY env var)
- config.temperature (number, optional): Sampling temperature
- config.max_tokens (number, optional): Maximum tokens in the response
- config.http_client (table, optional): Custom HTTP client (for testing)
Returns: Agent instance
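The documented options, gathered into one config table for reference (values are illustrative assumptions, not recommendations; passing it to luagent.Agent.new works exactly as in the earlier examples):

```lua
-- Every commonly used Agent.new option in one place (illustrative values).
local config = {
  model = "gpt-4o-mini",                  -- required
  system_prompt = "You are a concise assistant.",
  base_url = "https://api.openai.com/v1", -- the documented default
  api_key = os.getenv("OPENAI_API_KEY"),
  temperature = 0.2,                      -- illustrative value
  max_tokens = 512,                       -- illustrative value
  -- output_schema, tools, and http_client are also accepted (see above)
}
-- local agent = luagent.Agent.new(config)
```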
Run the agent with a prompt.
Parameters:
- prompt (string, required): The user's input message
- options.deps (table, optional): Dependencies to inject into the context
- options.message_history (table, optional): Previous conversation messages
- options.max_iterations (number, optional): Max tool-calling iterations (default: 10)
Returns: Result table with:
- data: The response (a string, or structured data if output_schema is set)
- messages: Full conversation history including tool calls
- raw_response: Raw API response
Passed to dynamic prompts and tool functions.
Properties:
- deps: Dependencies injected via run() options
- messages: Conversation message history
Tools are defined in the tools table passed to Agent.new():
tools = {
tool_name = {
description = "What the tool does",
parameters = {
-- JSON schema for tool parameters
type = "object",
properties = { ... }
},
func = function(ctx, args)
-- ctx: RunContext
-- args: Validated parameters
return result -- Will be JSON-encoded
end
}
}
luagent works with any OpenAI-compatible API:
-- OpenAI
local agent = luagent.Agent.new({
model = "gpt-4o-mini",
api_key = os.getenv("OPENAI_API_KEY")
})
-- Ollama (local)
local agent = luagent.Agent.new({
model = "llama2",
base_url = "http://localhost:11434/v1",
api_key = "not-needed" -- Ollama doesn't require auth
})
-- Together AI
local agent = luagent.Agent.new({
model = "meta-llama/Llama-3-70b-chat-hf",
base_url = "https://api.together.xyz/v1",
api_key = os.getenv("TOGETHER_API_KEY")
})
Any service that implements the OpenAI Chat Completions API should work. Just set the appropriate base_url and api_key.
Run the test suite:
# Install test dependencies
luarocks install dkjson luasec luasocket
# Run tests
eval "$(luarocks path)" && lua test_luagent.lua
All tests should pass:
==================================================
Test Results:
Passed: 32
Failed: 0
Total: 32
==================================================
luagent is designed to be simple and hackable:
- JSON Schema Validator: Validates structured outputs against schemas
- RunContext: Carries dependencies and state through the execution
- Agent: Orchestrates the conversation loop with the LLM
- Tool Execution: Handles function calling with error handling
- Tool-Based Structured Output: Uses function calling for provider-independent structured outputs
- HTTP/JSON Abstraction: Works with multiple library implementations
The entire implementation is ~900 lines of Lua code in a single file.
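The orchestration described above can be condensed into a short control-flow sketch. This is an assumed, simplified shape of the agent loop, not the library's actual code (the call_model and execute_tool callbacks and run_loop name are placeholders for illustration):

```lua
-- Condensed sketch of the agent loop: ask the model, execute any tool
-- calls it makes, feed results back, and stop at a plain text answer.
local function run_loop(call_model, execute_tool, max_iterations)
  local messages = {}
  for _ = 1, max_iterations do
    local reply = call_model(messages)
    if not reply.tool_calls then
      return reply.content -- plain answer: we're done
    end
    for _, call in ipairs(reply.tool_calls) do
      -- pcall gives the error handling mentioned above: a failing tool
      -- becomes an error message the model can react to, not a crash.
      local ok, result = pcall(execute_tool, call)
      messages[#messages + 1] = {
        role = "tool",
        content = ok and tostring(result) or ("error: " .. tostring(result)),
      }
    end
  end
  error("max iterations reached")
end
```

The max_iterations bound mirrors the run() option documented earlier: it caps how many tool-calling round trips can happen before the loop gives up.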
- Portable: One file, minimal dependencies, works anywhere Lua runs
- Simple: Clear code over clever tricks, easy to understand and modify
- Functional: Covers the 80% use case without feature bloat
- Compatible: Works with OpenAI and compatible APIs out of the box
- Tested: Comprehensive test coverage for reliability
Current limitations (may be addressed in future versions):
- No async/concurrent execution (Lua limitation)
- Basic JSON schema validation (subset of full spec)
- No built-in retry/rate limiting
- No conversation state management beyond manual history passing
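Since retries are not built in, a thin wrapper around agent:run is easy to add yourself. A minimal sketch (the with_retry name is an assumption; os.execute("sleep ...") is a portability shortcut for POSIX shells, adjust for your platform):

```lua
-- Retry a function up to `attempts` times, sleeping between failures.
local function with_retry(fn, attempts, delay_seconds)
  local last_err
  for i = 1, attempts do
    local ok, result = pcall(fn)
    if ok then
      return result
    end
    last_err = result
    if i < attempts and delay_seconds > 0 then
      os.execute("sleep " .. delay_seconds)
    end
  end
  error(last_err)
end

-- Usage with an agent from the earlier examples:
-- local result = with_retry(function()
--   return agent:run("What is the capital of France?")
-- end, 3, 2)
```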
This is a single-file library by design. If you want to add features:
- Keep everything in luagent.lua
- Add tests to test_luagent.lua
- Update examples in examples/
- Maintain backwards compatibility
- Keep it simple and readable
MIT License - see LICENSE file for details
- Pydantic AI - The Python library that inspired this project
- OpenAI API Reference
- JSON Schema