Generating content with AI models
Genkit provides a unified interface for working with generative AI models from any supported provider. Configure a model plugin once, then call any model through the same API—making it easy to combine multiple models or swap one out as your app evolves.
Before you begin
If you want to run the code examples on this page, first complete the steps in the Get started guide. All of the examples assume that you have already installed Genkit as a dependency in your project.
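As a reminder, the examples assume your project's pubspec.yaml lists the Genkit packages imported below as dependencies. A minimal sketch might look like this (the version constraints are placeholders, not published version numbers — in practice, pin to the latest release):

```yaml
dependencies:
  genkit: any                # placeholder constraint; use the latest published version
  genkit_google_genai: any   # Google AI model plugin used in the examples below
```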
Loading and configuring model plugins
Before you can use Genkit to start generating content, you need to load and configure a model plugin. If you’re coming from the Get started guide, you’ve already done this. Otherwise, see the Get started guide or the individual plugin’s documentation and follow the steps there before continuing.
The ai.generate() method
In Genkit, the primary interface through which you interact with generative AI models is the ai.generate() method.
The simplest ai.generate() call specifies the model you want to use and a
text prompt:
```dart
import 'package:genkit/genkit.dart';
import 'package:genkit_google_genai/genkit_google_genai.dart';

void main() async {
  final ai = Genkit(plugins: [googleAI()]);

  final response = await ai.generate(
    model: googleAI.gemini('gemini-2.5-flash'),
    prompt: 'Invent a menu item for a restaurant with a pirate theme.',
  );
  print(response.text);
}
```

When you run this brief example, it prints the output of the ai.generate() call, which is usually Markdown text.
System prompts
Some models support providing a system prompt, which gives the model instructions as to how you want it to respond to messages from the user. You can use the system prompt to specify characteristics such as a persona you want the model to adopt, the tone of its responses, and the format of its responses.
If the model you’re using supports system prompts, you can provide one through the model configuration:
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Invent a menu item for a pirate themed restaurant.',
  config: GeminiOptions(
    systemInstruction: 'You are a food industry marketing consultant.',
  ),
);
```

Model parameters
The ai.generate() method takes a config parameter, through which you can specify optional settings that control how the model generates content:
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Invent a menu item for a pirate themed restaurant.',
  config: GeminiOptions(
    maxOutputTokens: 500,
    stopSequences: ['<end>', '<fin>'],
    temperature: 0.5,
    topP: 0.4,
    topK: 50,
  ),
);
```

Structured output
When using generative AI as a component in your application, you often want output in a format other than plain text.
In Genkit, you can request structured output from a model by specifying an
outputSchema when you call ai.generate():
```dart
@Schema()
abstract class $MenuItem {
  String get name;
  String get description;
  int get calories;
  List<String> get allergens;
}
```

```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Invent a menu item for a pirate themed restaurant.',
  outputSchema: MenuItem.$schema,
);
```

Genkit will:
- Augment the prompt with schema guidance.
- Validate the output against your schema.
- Provide a typed object in response.output.
```dart
final menuItem = response.output;
if (menuItem != null) {
  print('${menuItem.name} (${menuItem.calories} kcals): ${menuItem.description}');
}
```

Streaming
When generating large amounts of text, you can improve the experience for your users by presenting the output as it’s generated—streaming the output.
In Genkit, you can stream output using the ai.generateStream() method:
```dart
final stream = ai.generateStream(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Write a long story about a pirate.',
);

// Print each chunk of text as it arrives.
await for (final chunk in stream) {
  print(chunk.text);
}

// After the stream completes, the full response is available.
final response = await stream.onResult;
print('Full text: ${response.text}');
```

Multimodal input
To provide a media prompt to a model that supports it, pass a list of parts to prompt:
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: [
    Part.media(url: 'https://example.com/photo.jpg'),
    Part.text('Compose a poem about this image.'),
  ],
);
```

Generating media
You can also use Genkit to generate media (like images) using supported models:
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash-image'),
  prompt: 'An illustration of a dog wearing a space suit, photorealistic',
);

if (response.media != null) {
  // The URL is typically a data URL that you can decode or display directly.
  print('Generated image URL: ${response.media!.url}');
}
```

Middleware
Genkit supports middleware for intercepting and modifying requests. A common use case is retrying failed requests.
Retry middleware
```dart
final response = await ai.generate(
  model: googleAI.gemini('gemini-2.5-flash'),
  prompt: 'Reliable request',
  use: [
    RetryMiddleware(
      maxRetries: 3,
      retryModel: true,
      statuses: [StatusName.UNAVAILABLE],
    ),
  ],
);
```

Consuming remote models
When you serve a model as an HTTP endpoint (for example, using Shelf or Express), you can consume it from another Genkit application using defineRemoteModel:
```dart
final ai = Genkit();

final remoteModel = ai.defineRemoteModel(
  name: 'myRemoteModel',
  url: 'http://localhost:8080/googleai/gemini-2.5-flash',
);

final response = await ai.generate(
  model: remoteModel,
  prompt: 'Hello!',
);

print(response.text);
```