Tools and Tool Calling
Tools allow AI models to perform actions and retrieve information beyond their training data. The AI SDK Core provides a flexible, type-safe system for defining and executing tools.
Tools are functions that models can call to:
Fetch real-time data (weather, stock prices, news)
Query databases or APIs
Perform calculations
Execute commands
Interact with external systems
Use the tool helper to define tools with type-safe inputs and outputs:
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-5'),
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => {
        // Call weather API
        const response = await fetch(
          `https://api.weather.com/v1/current?location=${location}`
        );
        const data = await response.json();
        return {
          location,
          temperature: data.temperature,
          condition: data.condition,
        };
      },
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});

console.log(result.text);
Every tool contains these properties:
description: A description of what the tool does. Helps the model decide when to use it.
inputSchema: A Zod or JSON schema defining the tool's input parameters. Used for validation and LLM guidance.
execute: An optional async function that runs the tool with validated inputs and returns the tool result.
strict: Enable strict schema validation (when supported by the provider).
Use stopWhen to enable multi-step execution where the model can call tools and then use their results:
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text, steps } = await generateText({
  model: openai('gpt-5'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5), // Allow up to 5 steps
  prompt: 'What is the weather in San Francisco?',
});

console.log(text);
console.log('Total steps:', steps.length);
How It Works
Step 1: Model receives the prompt and decides to call the weather tool
Tool Execution: The execute function runs and returns weather data
Step 2: Model receives the tool result and generates a text response
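The flow above can be sketched as a plain loop. This is a hypothetical stand-in with scripted model replies, not the SDK's actual implementation:

```typescript
// Scripted stand-in for the multi-step loop generateText runs internally.
type WeatherInput = { location: string };
type ModelReply =
  | { kind: 'tool-call'; toolName: 'weather'; input: WeatherInput }
  | { kind: 'text'; text: string };

// Stand-in for the weather tool's execute function.
const executeWeather = ({ location }: WeatherInput) => ({ location, temperature: 72 });

// Scripted model: step 1 requests the tool, step 2 answers in text.
const replies: ModelReply[] = [
  { kind: 'tool-call', toolName: 'weather', input: { location: 'San Francisco' } },
  { kind: 'text', text: 'It is 72°F in San Francisco.' },
];

const maxSteps = 5; // stopWhen: stepCountIs(5)
let steps = 0;
let finalText = '';
for (const reply of replies) {
  steps += 1;
  if (reply.kind === 'tool-call') {
    // Tool execution: run execute, then (conceptually) feed the result back to the model.
    const result = executeWeather(reply.input);
    console.log('tool result:', result);
  } else {
    // The model produced plain text: the loop stops here.
    finalText = reply.text;
    break;
  }
  if (steps >= maxSteps) break;
}
console.log(finalText); // "It is 72°F in San Francisco."
console.log('Total steps:', steps); // 2
```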
Use toolChoice to control when and which tools the model uses:
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-5'),
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ temperature: 72 }),
    }),
  },
  toolChoice: 'required', // Force the model to call a tool
  prompt: 'What is the weather in San Francisco?',
});
'auto' (default): Model decides whether and which tools to call
'required': Model must call a tool (can choose which one)
'none': Model must not call any tools
{ type: 'tool', toolName: string }: Model must call the specified tool
// Force a specific tool
const result = await generateText({
  model: openai('gpt-5'),
  tools: { weather, calculator },
  toolChoice: { type: 'tool', toolName: 'weather' },
  prompt: 'What is the weather?',
});
Tool Call ID
Tools receive additional context in the second parameter of execute:
import { tool } from 'ai';
import { z } from 'zod';

const myTool = tool({
  description: 'Process data',
  inputSchema: z.object({ data: z.string() }),
  execute: async ({ data }, { toolCallId }) => {
    console.log('Tool call ID:', toolCallId);
    return { processed: data };
  },
});
Messages
Access the conversation history:
import { tool } from 'ai';
import { z } from 'zod';

const myTool = tool({
  description: 'Analyze conversation',
  inputSchema: z.object({ topic: z.string() }),
  execute: async ({ topic }, { messages }) => {
    console.log('Previous messages:', messages);
    return { analysis: 'Based on conversation...' };
  },
});
Abort Signals
Forward abort signals to long-running operations:
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-5'),
  abortSignal: myAbortSignal,
  tools: {
    weather: tool({
      description: 'Get weather',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }, { abortSignal }) => {
        const response = await fetch(
          `https://api.weather.com/v1/current?location=${location}`,
          { signal: abortSignal } // Forward abort signal
        );
        return response.json();
      },
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});
Callbacks
onStepFinish
Called after each step completes:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-5'),
  tools: { /* tools */ },
  prompt: 'Use tools to answer',
  onStepFinish({ stepNumber, text, toolCalls, toolResults, finishReason, usage }) {
    console.log(`Step ${stepNumber} finished (${finishReason})`);
    console.log('Tool calls:', toolCalls);
    console.log('Tool results:', toolResults);
  },
});
Monitor tool execution with the experimental tool call callbacks:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-5'),
  tools: { /* tools */ },
  prompt: 'Use tools',
  experimental_onToolCallStart({ toolName, toolCallId, input }) {
    console.log(`Starting ${toolName}:`, input);
  },
  experimental_onToolCallFinish({ toolName, toolCallId, output, error, durationMs }) {
    if (error) {
      console.error(`Tool ${toolName} failed after ${durationMs}ms:`, error);
    } else {
      console.log(`Tool ${toolName} completed in ${durationMs}ms:`, output);
    }
  },
});
Tool Call Approval
Require user approval before executing sensitive tools:
import { tool } from 'ai';
import { z } from 'zod';
import fs from 'node:fs/promises';

const deleteFile = tool({
  description: 'Delete a file',
  inputSchema: z.object({ path: z.string() }),
  needsApproval: true, // Require approval
  execute: async ({ path }) => {
    // Only runs if approved
    await fs.unlink(path);
    return { deleted: path };
  },
});
Handling Approval Requests
import { generateText, type ModelMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

const messages: ModelMessage[] = [
  { role: 'user', content: 'Delete the old logs' },
];

const result = await generateText({
  model: openai('gpt-5'),
  tools: { deleteFile },
  messages,
});

messages.push(...result.response.messages);

// Check for approval requests
for (const part of result.content) {
  if (part.type === 'tool-approval-request') {
    console.log('Tool:', part.toolCall.toolName);
    console.log('Input:', part.toolCall.input);

    // Get user approval
    const approved = await getUserApproval(part.toolCall);

    // Add approval response
    messages.push({
      role: 'tool',
      content: [{
        type: 'tool-approval-response',
        approvalId: part.approvalId,
        approved,
        reason: approved ? 'User approved' : 'User denied',
      }],
    });
  }
}

// Continue with approval response
const result2 = await generateText({
  model: openai('gpt-5'),
  tools: { deleteFile },
  messages,
});
Dynamic Approval
Make approval decisions based on input:
import { tool } from 'ai';
import { z } from 'zod';

const payment = tool({
  description: 'Process a payment',
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  needsApproval: async ({ amount }) => amount > 1000, // Only large amounts
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
Strict Mode
Enable strict schema validation for more reliable tool calls:
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get weather',
  inputSchema: z.object({
    location: z.string(),
    units: z.enum(['celsius', 'fahrenheit']),
  }),
  strict: true, // Enable strict validation
  execute: async ({ location, units }) => {
    // Tool will only be called with valid inputs
    return { temperature: 72, units };
  },
});
Not all providers support strict mode. For those that don’t, the option is ignored.
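Conceptually, strict mode tightens the JSON schema the provider enforces: every property becomes required and unknown keys are rejected. The sketch below is illustrative of that idea, not the SDK's exact wire format, and hasAllRequired is a hypothetical helper:

```typescript
// Illustrative only: roughly the kind of JSON schema a strict-mode
// provider enforces for the weather tool above.
const strictWeatherSchema = {
  type: 'object',
  properties: {
    location: { type: 'string' },
    units: { type: 'string', enum: ['celsius', 'fahrenheit'] },
  },
  required: ['location', 'units'],
  additionalProperties: false,
} as const;

// Hypothetical helper: check an input against the schema's required list.
const hasAllRequired = (input: Record<string, unknown>) =>
  strictWeatherSchema.required.every(key => key in input);

console.log(hasAllRequired({ location: 'Paris', units: 'celsius' })); // true
console.log(hasAllRequired({ location: 'Paris' })); // false
```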
Input Examples
Provide example inputs to guide the model:
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City and country'),
  }),
  inputExamples: [
    { input: { location: 'San Francisco, USA' } },
    { input: { location: 'London, UK' } },
    { input: { location: 'Tokyo, Japan' } },
  ],
  execute: async ({ location }) => {
    return { temperature: 72 };
  },
});
Only Anthropic providers support input examples natively. Other providers ignore this setting.
Typed Tool Results
Access typed tool calls and results:
import { generateText, tool, type TypedToolCall, type TypedToolResult } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const tools = {
  weather: tool({
    description: 'Get weather',
    inputSchema: z.object({ location: z.string() }),
    execute: async ({ location }) => ({
      location,
      temperature: 72,
      condition: 'sunny',
    }),
  }),
};

type ToolCall = TypedToolCall<typeof tools>;
type ToolResult = TypedToolResult<typeof tools>;

const result = await generateText({
  model: openai('gpt-5'),
  tools,
  prompt: 'What is the weather?',
});

// Fully typed tool results
for (const toolResult of result.toolResults) {
  if (toolResult.toolName === 'weather') {
    console.log(toolResult.output.temperature); // number
    console.log(toolResult.output.condition); // string
  }
}
Error Handling
Handle tool-related errors:
import {
  generateText,
  NoSuchToolError,
  InvalidToolInputError,
} from 'ai';
import { openai } from '@ai-sdk/openai';

try {
  const result = await generateText({
    model: openai('gpt-5'),
    tools: { /* tools */ },
    prompt: 'Use tools',
  });
} catch (error) {
  if (NoSuchToolError.isInstance(error)) {
    console.log('Model tried to call unknown tool');
  } else if (InvalidToolInputError.isInstance(error)) {
    console.log('Model called tool with invalid inputs');
  }
}
Tool execution errors appear in the result:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { steps } = await generateText({
  model: openai('gpt-5'),
  tools: { /* tools */ },
  prompt: 'Use tools',
});

// Check for tool errors in steps
const toolErrors = steps.flatMap(step =>
  step.content.filter(part => part.type === 'tool-error')
);

for (const error of toolErrors) {
  console.log('Tool error:', error.error);
  console.log('Tool name:', error.toolName);
}
Examples
Weather Assistant
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text } = await generateText({
  model: openai('gpt-5'),
  tools: {
    weather: tool({
      description: 'Get current weather for a location',
      inputSchema: z.object({
        location: z.string(),
        units: z.enum(['celsius', 'fahrenheit']).optional(),
      }),
      execute: async ({ location, units = 'fahrenheit' }) => {
        const response = await fetch(
          `https://api.weather.com/v1/current?location=${location}&units=${units}`
        );
        return await response.json();
      },
    }),
  },
  stopWhen: stepCountIs(3),
  prompt: 'What should I wear today in San Francisco?',
});

console.log(text);
Database Query
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { db } from './database';

const { text } = await generateText({
  model: openai('gpt-5'),
  tools: {
    queryUsers: tool({
      description: 'Query users from the database',
      inputSchema: z.object({
        role: z.enum(['admin', 'user', 'guest']).optional(),
        limit: z.number().max(100).optional(),
      }),
      execute: async ({ role, limit = 10 }) => {
        const query = db.users.select();
        if (role) query.where('role', role);
        return await query.limit(limit).execute();
      },
    }),
  },
  stopWhen: stepCountIs(3),
  prompt: 'How many admin users do we have?',
});

console.log(text);
Next Steps
MCP Tools: Use Model Context Protocol tools
Prompt Engineering: Tips for effective tool usage
Structured Data: Combine tools with structured output