ToolLoopAgent
An agent that runs tools in a loop. In each step it calls the LLM; if the response contains tool calls, it executes the tools and calls the LLM again in a new step with the tool results. The loop continues until:
  • A finish reason other than tool-calls is returned, or
  • A tool that is invoked does not have an execute function, or
  • A tool call needs approval, or
  • A stop condition is met (default stop condition is stepCountIs(20))
import { ToolLoopAgent, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // Call weather API
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  instructions: 'You are a helpful assistant.',
});

const result = await agent.generate({
  prompt: 'What is the weather in San Francisco?',
});
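The stop condition can also be a custom predicate over the steps completed so far. A minimal sketch, where the helper name is hypothetical and the `toolResults` shape on each step is assumed to match generateText step results:

```typescript
// Hypothetical custom stop condition: stop once any step has produced
// a result from the 'weather' tool, or after 5 steps, whichever is first.
// The `toolResults` field shape is an assumption, not a confirmed API.
type StepLike = { toolResults?: Array<{ toolName: string }> };

function stopOnWeatherResult({ steps }: { steps: StepLike[] }): boolean {
  if (steps.length >= 5) return true;
  return steps.some((step) =>
    (step.toolResults ?? []).some((result) => result.toolName === 'weather'),
  );
}

// Pass it alongside the built-ins:
//   stopWhen: [stepCountIs(20), stopOnWeatherResult]
```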

Constructor

Parameters

model
LanguageModel
required
The language model to use.
tools
ToolSet
The tools that the agent can use.
instructions
string
System instructions for the agent. This is the system prompt that will be used for all calls.
id
string
Optional ID for the agent.
toolChoice
ToolChoice
The tool choice strategy. Default: 'auto'.
maxOutputTokens
number
Maximum number of tokens to generate.
temperature
number
Temperature setting.
topP
number
Nucleus sampling.
topK
number
Only sample from the top K options for each subsequent token.
presencePenalty
number
Presence penalty setting.
frequencyPenalty
number
Frequency penalty setting.
stopSequences
Array<string>
Stop sequences.
seed
number
The seed (integer) to use for random sampling.
stopWhen
StopCondition | Array<StopCondition>
Condition for stopping the agent when there are tool results in the last step. Default: stepCountIs(20).
output
Output
Optional specification for parsing structured outputs from the LLM response.
activeTools
Array<keyof TOOLS>
Limits the tools that are available for the model to call.
prepareStep
PrepareStepFunction
Optional function that you can use to provide different settings for a step.
prepareCall
PrepareCallFunction
Optional function that is called before each agent invocation to prepare the call arguments.
experimental_repairToolCall
ToolCallRepairFunction
A function that attempts to repair a tool call that failed to parse.
experimental_download
DownloadFunction
Custom download function to use for URLs.
experimental_context
unknown
Context that is passed into tool execution.
experimental_telemetry
TelemetrySettings
Optional telemetry configuration (experimental).
providerOptions
ProviderOptions
Additional provider-specific options.
experimental_onStart
(event: OnStartEvent) => void
Callback invoked when generation begins, before any LLM calls.
experimental_onStepStart
(event: OnStepStartEvent) => void
Callback invoked when each step begins, before the provider is called.
experimental_onToolCallStart
(event: OnToolCallStartEvent) => void
Callback invoked before each tool execution begins.
experimental_onToolCallFinish
(event: OnToolCallFinishEvent) => void
Callback invoked after each tool execution completes.
onStepFinish
(event: OnStepFinishEvent) => void
Callback that is called when each step (LLM call) is finished.
onFinish
(event: OnFinishEvent) => void
Callback that is called when all steps are finished.
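Of these, prepareStep is the hook for varying settings as the loop progresses. A minimal sketch, assuming the callback receives a `stepNumber` as in the generateText prepareStep hook (treat the argument shape as an assumption):

```typescript
// Hypothetical prepareStep: force the 'weather' tool on the first step,
// then fall back to the agent-level settings. The { stepNumber } argument
// shape is assumed from the generateText prepareStep callback.
function prepareStep({ stepNumber }: { stepNumber: number }) {
  if (stepNumber === 0) {
    return { toolChoice: { type: 'tool' as const, toolName: 'weather' } };
  }
  return undefined; // undefined keeps the agent's defaults
}
```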

Properties

id
string | undefined
The ID of the agent.
tools
TOOLS
The tools that the agent can use.
version
'agent-v1'
The version of the agent API.

Methods

generate()

Generates an output from the agent (non-streaming).

Parameters

prompt
string | Array<ModelMessage>
The prompt for the agent.
messages
Array<ModelMessage>
Alternative to prompt: provide a list of messages directly.
options
CALL_OPTIONS
Call-specific options (if configured in prepareCall).
abortSignal
AbortSignal
An optional abort signal that can be used to cancel the call.
timeout
number
An optional timeout in milliseconds.
experimental_onStart
(event: OnStartEvent) => void
Callback invoked when generation begins. Merges with the agent’s callback.
experimental_onStepStart
(event: OnStepStartEvent) => void
Callback invoked when each step begins. Merges with the agent’s callback.
experimental_onToolCallStart
(event: OnToolCallStartEvent) => void
Callback invoked before each tool execution. Merges with the agent’s callback.
experimental_onToolCallFinish
(event: OnToolCallFinishEvent) => void
Callback invoked after each tool execution. Merges with the agent’s callback.
onStepFinish
(event: OnStepFinishEvent) => void
Callback when each step finishes. Merges with the agent’s callback.
onFinish
(event: OnFinishEvent) => void
Callback when all steps finish. Merges with the agent’s callback.

Returns

Returns a Promise<GenerateTextResult> with the same properties as generateText.
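Because the result mirrors generateText, its steps array records each LLM call in the loop. A small hypothetical helper that walks it; the `{ text, steps, toolCalls }` field names are assumed from GenerateTextResult:

```typescript
// Summarize a loop result: step count plus the tools that were called.
// The result shape is assumed from GenerateTextResult, not verified here.
type LoopResult = {
  text: string;
  steps: Array<{ toolCalls?: Array<{ toolName: string }> }>;
};

function summarize(result: LoopResult): string {
  const toolNames = result.steps.flatMap((step) =>
    (step.toolCalls ?? []).map((call) => call.toolName),
  );
  return `${result.steps.length} step(s), tools used: ${toolNames.join(', ') || 'none'}`;
}
```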

stream()

Streams an output from the agent.

Parameters

Same parameters as generate(), plus:
experimental_transform
StreamTextTransform | Array<StreamTextTransform>
Optional stream transformations.

Returns

Returns a Promise<StreamTextResult> with the same properties as streamText.

Examples

Basic agent

import { ToolLoopAgent, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // Call weather API
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
  instructions: 'You are a helpful assistant.',
});

const result = await agent.generate({
  prompt: 'What is the weather in San Francisco?',
});

console.log(result.text);

With streaming

const agent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  tools: { /* ... */ },
  instructions: 'You are a helpful assistant.',
});

const result = await agent.stream({
  prompt: 'Tell me about the weather',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

With callbacks

const agent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  tools: { /* ... */ },
  instructions: 'You are a helpful assistant.',
  onStepFinish: (event) => {
    console.log('Step', event.stepNumber, 'finished');
    console.log('Tool calls:', event.toolCalls);
  },
  onFinish: (event) => {
    console.log('Agent finished');
    console.log('Total usage:', event.totalUsage);
  },
});

const result = await agent.generate({
  prompt: 'What is the weather in San Francisco?',
});
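
With a limited tool set

The activeTools and toolChoice options from the constructor can be combined to restrict what the model may call. A configuration sketch, not verified against a live provider:

```typescript
import { ToolLoopAgent } from 'ai';
import { openai } from '@ai-sdk/openai';

// Sketch: register several tools but only expose 'weather' to the model
// on this agent. `activeTools` limits which tools the model may call;
// `toolChoice: 'auto'` keeps the default selection strategy.
const agent = new ToolLoopAgent({
  model: openai('gpt-4-turbo'),
  tools: { /* weather, forecast, ... */ },
  activeTools: ['weather'],
  toolChoice: 'auto',
  instructions: 'You are a helpful assistant.',
});

const result = await agent.generate({
  prompt: 'What is the weather in San Francisco?',
});
```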