Migration guide: 1.x → 2.x
VoltAgent 2.x aligns the framework with AI SDK v6 and adds new features. There are no breaking changes in VoltAgent APIs. If you only use VoltAgent APIs, follow the steps below. If your app calls AI SDK functions directly, also review the upstream AI SDK v6 migration guide.
If you are still on 0.1.x, scroll down to the Migration guide: 0.1.x → 1.x section first, then come back here for the 1.x → 2.x upgrade.
Step 1. Update packages
1.1 Use the Volt CLI to update VoltAgent packages (recommended)
If you already have the Volt CLI installed, use:
npm run volt update
This command updates only @voltagent/* dependencies. You still need to align ai and @ai-sdk/* packages in the next step.
If you do not have the CLI yet, install it and add a script:
Automatic (CLI):
npx @voltagent/cli init
This command installs @voltagent/cli, adds the volt script, and creates the .voltagent folder in your project.
Manual:
npm install --save-dev @voltagent/cli
"scripts": {
"volt": "volt"
}
Then run:
npm run volt update
1.2 Align AI SDK packages
If you ran npm run volt update, you can skip the @voltagent/* line below. Otherwise, update both VoltAgent and AI SDK packages:
pnpm add @voltagent/core@latest @voltagent/server-hono@latest @voltagent/libsql@latest @voltagent/logger@latest
pnpm add ai@^6 @ai-sdk/openai@^3 @ai-sdk/provider@^3 @ai-sdk/provider-utils@^4
Notes:
- If you use other providers, upgrade them to @ai-sdk/*@^3 (e.g., @ai-sdk/anthropic, @ai-sdk/google, @ai-sdk/azure).
- If you use useChat or other UI helpers, upgrade @ai-sdk/react to ^3.
- If you are in a monorepo, update all @voltagent/* packages to the same major version.
Step 2. Update custom tools (only if you use advanced tool hooks)
2.1 Tool output mapping signature change
If you use toModelOutput, it now receives { output }:
toModelOutput: ({ output }) => ({ type: "text", value: output }),
2.2 Tool execution options type rename (if you type it)
If you type the second execute parameter, use:
import type { ToolExecutionOptions } from "@ai-sdk/provider-utils";
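Taken together, the 2.x mapping shape looks like the sketch below. The types here are local stand-ins defined inline for illustration (the real tool types come from the ai / @ai-sdk packages); only the { output } destructuring mirrors the 2.x signature shown above.

```typescript
// Local stand-in for the model-output shape (illustrative only;
// real types come from the ai / @ai-sdk packages).
type ModelOutput = { type: "text"; value: string };

// In 2.x, toModelOutput receives an object with an `output` field,
// not the bare output value.
const toModelOutput = ({ output }: { output: string }): ModelOutput => ({
  type: "text",
  value: output,
});

const mapped = toModelOutput({ output: "42 items found" });
console.log(mapped); // { type: 'text', value: '42 items found' }
```

If your 1.x code destructured the bare value, the only change needed is unwrapping the `output` property.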
Step 3. Structured output (if you use generateObject/streamObject)
VoltAgent 2.x deprecates generateObject and streamObject. Migrate to generateText/streamText with Output.object.
Before (1.x):
import { z } from "zod";
const schema = z.object({
name: z.string(),
age: z.number(),
});
const result = await agent.generateObject("Create a profile", schema);
console.log(result.object);
const stream = await agent.streamObject("Create a profile", schema);
for await (const partial of stream.partialObjectStream) {
console.log(partial);
}
After (2.x):
import { Output } from "ai";
import { z } from "zod";
const schema = z.object({
name: z.string(),
age: z.number(),
});
const result = await agent.generateText("Create a profile", {
output: Output.object({ schema }),
});
console.log(result.output);
const stream = await agent.streamText("Create a profile", {
output: Output.object({ schema }),
});
for await (const partial of stream.partialOutputStream ?? []) {
console.log(partial);
}
Step 4. Tests (if you use AI SDK mocks directly)
Update V2 mocks to V3 mocks:
import { MockLanguageModelV3 } from "ai/test";
Migration guide: 0.1.x → 1.x
Welcome to VoltAgent 1.x! This release brings the architectural improvements you've been asking for - native ai-sdk integration, truly modular components, and production-ready observability. Your agents are about to get a serious upgrade.
This guide is built for real-world migrations. Copy-paste the commands, follow the checklists, ship your update. No fluff, just the changes you need to know.
Need help? Hit a snag during migration? We've got you covered:
- Open an issue on GitHub - we're tracking migration experiences closely
- Join our Discord for real-time help from the community and core team
Here's what we'll cover:
- What changed and why (high-level rationale)
- Quick migration steps (copy-paste friendly)
- Detailed changes (API-by-API, with examples)
Overview: What changed and why
VoltAgent 1.x is a complete architectural refinement. We stripped away unnecessary abstractions, embraced native ai-sdk integration, and made everything pluggable:
- Native ai-sdk integration: The custom LLM provider layer and @voltagent/vercel-ai are removed. Apps pass ai-sdk models directly (works with any ai-sdk provider).
- Modular server: The built-in HTTP server is removed from core. Use pluggable providers (recommended: @voltagent/server-hono).
- Memory V2: A clean adapter-based architecture for storage/embeddings/vector search and structured working memory.
- Observability (OpenTelemetry): Legacy telemetry exporter is removed. Observability now uses OpenTelemetry with optional span/log processors and storage.
- Developer ergonomics: Clear peer dependency on ai, improved logger support in SSR/Edge (via globalThis), and convenience exports.
Benefits:
- Smaller surface area in core, better portability (Node/Edge/Workers).
- First-class ai-sdk support and predictable results/streams.
- Composable memory: scale from in-memory to LibSQL/PostgreSQL/Supabase, plus semantic search.
- Standardized observability (OTel) with optional web socket streaming/logging.
Step 1. Update Packages
Uninstall legacy provider/UI packages and install the new modular server + memory packages. Also add the base ai library and a provider.
Uninstall (legacy):
npm uninstall @voltagent/vercel-ai @voltagent/vercel-ui
# yarn remove @voltagent/vercel-ai @voltagent/vercel-ui
# pnpm remove @voltagent/vercel-ai @voltagent/vercel-ui
Upgrade/install (required):
npm install @voltagent/core@latest @voltagent/server-hono@latest @voltagent/libsql@latest @voltagent/logger@latest ai
# yarn add @voltagent/core@latest @voltagent/server-hono@latest @voltagent/libsql@latest @voltagent/logger@latest ai@latest
# pnpm add @voltagent/core@latest @voltagent/server-hono@latest @voltagent/libsql@latest @voltagent/logger@latest ai@latest
- ai: Base Vercel AI SDK library used by VoltAgent 1.x (peer of @voltagent/core)
- @ai-sdk/openai: Example provider; choose any compatible provider (@ai-sdk/anthropic, @ai-sdk/google, etc.)
- @voltagent/server-hono: New pluggable HTTP server provider (replaces built-in server)
- @voltagent/libsql: LibSQL/Turso memory adapter (replaces built-in LibSQL in core)
Optional adapters:
- @voltagent/postgres: PostgreSQL storage adapter
- @voltagent/supabase: Supabase storage adapter
Note: @voltagent/core@1.x declares ai@^5 as a peer dependency. Your application must install ai. If you want to import ai-sdk providers directly, install those packages too. If ai is missing, you will get a module resolution error at runtime when calling generation methods.
Node runtime requirement:
- The repo targets Node >= 20. Please ensure your deployment matches.
Step 2. Update Code
Update your code as follows (highlighted lines are new in 1.x). Note: logger usage isn't new; keep your existing logger setup or use the example below.
// REMOVE (0.1.x):
// import { VercelAIProvider } from "@voltagent/vercel-ai";
import { VoltAgent, Agent, Memory } from "@voltagent/core";
import { LibSQLMemoryAdapter } from "@voltagent/libsql";
import { honoServer } from "@voltagent/server-hono";
import { createPinoLogger } from "@voltagent/logger";
const logger = createPinoLogger({ name: "my-app", level: "info" });
const memory = new Memory({
storage: new LibSQLMemoryAdapter({ url: "file:./.voltagent/memory.db" }),
});
const agent = new Agent({
name: "my-app",
instructions: "Helpful assistant",
// REMOVE (0.1.x): llm: new VercelAIProvider(),
model: "openai/gpt-4o-mini",
memory,
});
new VoltAgent({
agents: { agent },
server: honoServer(),
logger,
});
Remove in your existing code (0.1.x):
- import { VercelAIProvider } from "@voltagent/vercel-ai";
- llm: new VercelAIProvider(),
- Built-in server options on VoltAgent (e.g., port, enableSwaggerUI, autoStart)
Add to your app (1.x):
- import { Memory } from "@voltagent/core";
- import { LibSQLMemoryAdapter } from "@voltagent/libsql";
- import { honoServer } from "@voltagent/server-hono";
- Configure memory: new Memory({ storage: new LibSQLMemoryAdapter({ url }) })
- Pass server: honoServer() to new VoltAgent({...})
Summary of changes:
- Delete: VercelAIProvider import and llm: new VercelAIProvider()
- Delete: Built-in server options (port, enableSwaggerUI, autoStart, custom endpoints on core)
- Add: Memory + LibSQLMemoryAdapter for persistent LibSQL/Turso-backed memory
- Add: honoServer() as the server provider
- Keep: the model prop; pass a "provider/model" string such as "openai/gpt-4o-mini", or any ai-sdk model
Custom routes and auth (server):
new VoltAgent({
agents: { agent },
server: honoServer({
port: 3141, // default
enableSwaggerUI: true, // optional
configureApp: (app) => {
app.get("/api/health", (c) => c.json({ status: "ok" }));
},
// Auth (optional)
// authNext: {
// provider: jwtAuth({ secret: process.env.JWT_SECRET! }),
// publicRoutes: ["GET /health", "GET /metrics"],
// },
}),
});
Detailed Changes
Observability (OpenTelemetry)
What changed:
- Legacy telemetry/* and the telemetry exporter were removed from core.
- Observability now uses OpenTelemetry and can be enabled for production with only environment variables. No code changes are required.
New APIs (from @voltagent/core):
- VoltAgentObservability (created automatically unless you pass your own)
- Optional processors: LocalStorageSpanProcessor, WebSocketSpanProcessor, WebSocketLogProcessor
- In-memory adapter and OTel helpers (Span, SpanKind, SpanStatusCode, etc.)
Minimal usage (recommended):
- Add keys to your .env:
# .env
VOLTAGENT_PUBLIC_KEY=pk_...
VOLTAGENT_SECRET_KEY=sk_...
- Run your app normally. Remote export auto-enables when valid keys are present. Local, real-time debugging via the VoltOps Console stays available either way.
Notes:
- If you previously used the deprecated telemetryExporter or wired observability via VoltOpsClient, remove that code. The .env keys are sufficient.
- When keys are missing/invalid, VoltAgent continues with local debugging only (no remote export).
Advanced (optional):
- Provide a custom VoltAgentObservability to tune sampling/batching or override defaults. This is not required for typical setups.
Remove llm provider and @voltagent/vercel-ai
VoltAgent no longer uses a custom provider wrapper. The @voltagent/vercel-ai package has been removed, and the llm prop on Agent is no longer supported. VoltAgent now integrates directly with the Vercel AI SDK (ai) and is fully compatible with all ai-sdk providers.
What changed
- Removed: @voltagent/vercel-ai package and VercelAIProvider usage
- Removed: llm prop on Agent
- Kept: model prop on Agent (now pass an ai-sdk LanguageModel directly)
- Call settings: pass ai-sdk call settings (e.g., temperature, maxOutputTokens) in method options as before
Before (0.1.x)
import { Agent } from "@voltagent/core";
import { VercelAIProvider } from "@voltagent/vercel-ai";
const agent = new Agent({
name: "my-app",
instructions: "Helpful assistant",
llm: new VercelAIProvider(),
model: "openai/gpt-4o-mini",
});
After (1.x)
import { Agent } from "@voltagent/core";
const agent = new Agent({
name: "my-app",
instructions: "Helpful assistant",
// VoltAgent uses ai-sdk directly - just provide a model
model: "openai/gpt-4o-mini",
});
You can swap openai/... for any provider string, e.g. "anthropic/claude-3-5-sonnet", "google/gemini-1.5-pro", etc.
Package changes
- Uninstall legacy provider:
  - npm: npm uninstall @voltagent/vercel-ai
  - yarn: yarn remove @voltagent/vercel-ai
  - pnpm: pnpm remove @voltagent/vercel-ai
- Install the ai base library:
  - npm: npm install ai
  - yarn: yarn add ai
  - pnpm: pnpm add ai
- Install provider packages only if you plan to import them:
  - npm: npm install @ai-sdk/openai
  - yarn: yarn add @ai-sdk/openai
  - pnpm: pnpm add @ai-sdk/openai
Note: @voltagent/core@1.x declares ai@^5 as a peer dependency. Your application must install ai. If you want to import ai-sdk providers directly, install those packages too. If ai is missing, you will get a module resolution error at runtime when calling generation methods.
Code changes checklist
- Remove import { VercelAIProvider } from "@voltagent/vercel-ai" from all files
- Remove llm: new VercelAIProvider() from Agent configuration
- Keep model: ... and either import the appropriate ai-sdk provider or use a provider/model string
- Move provider: { ... } call settings to top-level options (e.g., temperature, maxOutputTokens, topP, stopSequences)
- Put provider-specific knobs under providerOptions if needed
- Remove deprecated memoryOptions from the Agent constructor; configure limits on your Memory instance (e.g., storageLimit) or adapter
Example call settings (unchanged style):
const res = await agent.generateText("Hello", {
temperature: 0.3,
maxOutputTokens: 256,
providerOptions: {
someProviderSpecificOption: {
foo: "bar",
},
},
});
Common errors after upgrade
- Type error: "Object literal may only specify known properties, and 'llm' does not exist..." → Remove the llm prop
- Module not found: @voltagent/vercel-ai → Uninstall the package and remove imports
Environment variables
Your existing provider keys still apply (e.g., OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.). Configure them as required by ai-sdk providers.
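For example, a minimal .env for the two providers named above might look like this (the key names are the standard ones read by @ai-sdk/openai and @ai-sdk/anthropic; the values are placeholders):

```bash
# .env — provider credentials read by the ai-sdk providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```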
Change: Default memory is now InMemory; new Memory class
VoltAgent 1.x introduces a new Memory class that unifies conversation history, optional vector search, and working-memory features. By default, if you do not configure memory, the agent uses in-memory storage.
What changed
- Default memory: In-memory storage by default (no persistence)
- New API: memory: new Memory({ storage: <Adapter> })
- Legacy LibSQLStorage usage is replaced with LibSQLMemoryAdapter as a storage adapter
- Optional adapters: InMemoryStorageAdapter (core), PostgreSQLMemoryAdapter (@voltagent/postgres), SupabaseMemoryAdapter (@voltagent/supabase), LibSQLMemoryAdapter (@voltagent/libsql)
- New capabilities: Embedding-powered vector search and working-memory support (optional)
Before (0.1.x)
import { Agent } from "@voltagent/core";
import { VercelAIProvider } from "@voltagent/vercel-ai";
import { LibSQLStorage } from "@voltagent/libsql";
const agent = new Agent({
name: "my-agent",
instructions: "A helpful assistant that answers questions without using tools",
llm: new VercelAIProvider(),
model: "openai/gpt-4o-mini",
// Persistent memory
memory: new LibSQLStorage({
url: "file:./.voltagent/memory.db",
}),
});
After (1.x)
import { Agent, Memory } from "@voltagent/core";
import { LibSQLMemoryAdapter } from "@voltagent/libsql";
const agent = new Agent({
name: "my-agent",
instructions: "A helpful assistant that answers questions without using tools",
model: "openai/gpt-4o-mini",
// Optional: persistent memory (remove to use default in-memory)
memory: new Memory({
storage: new LibSQLMemoryAdapter({
url: "file:./.voltagent/memory.db",
}),
}),
});
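To make the adapter idea concrete, here is a deliberately stripped-down sketch of the pattern. StorageAdapterSketch and InMemoryAdapterSketch are hypothetical stand-ins invented for illustration; the real storage adapter contract in @voltagent/core is richer and has different method names.

```typescript
// Hypothetical, simplified stand-ins for the adapter-based design
// (illustration only; not the real @voltagent/core interfaces).
interface StorageAdapterSketch {
  append(conversationId: string, message: string): void;
  history(conversationId: string): string[];
}

// An in-memory adapter: fast and zero-setup, but gone on restart —
// which is why in-memory storage is the safe 1.x default.
class InMemoryAdapterSketch implements StorageAdapterSketch {
  private store = new Map<string, string[]>();

  append(id: string, msg: string): void {
    const list = this.store.get(id) ?? [];
    list.push(msg);
    this.store.set(id, list);
  }

  history(id: string): string[] {
    return this.store.get(id) ?? [];
  }
}

const adapter: StorageAdapterSketch = new InMemoryAdapterSketch();
adapter.append("conv-1", "Hello");
adapter.append("conv-1", "Hi there!");
console.log(adapter.history("conv-1")); // → [ 'Hello', 'Hi there!' ]
```

Swapping LibSQL, PostgreSQL, or Supabase persistence in or out is then just a matter of passing a different adapter to Memory, without touching agent code.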