Generates text and calls tools for a given prompt using a language model. This function does not stream the output. If you want to stream the output, use streamText instead.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Invent a new holiday.',
});

console.log(result.text);

Parameters

model
LanguageModel
required
The language model to use.
prompt
string
A simple text prompt. You can either use prompt or messages but not both.
messages
Array<CoreMessage>
A list of messages. You can either use prompt or messages but not both.
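When passing a conversation history instead of a single prompt, each message pairs a role with content. A minimal sketch of the shape (the specific roles and strings here are illustrative assumptions, not part of the API):

```typescript
// Illustrative conversation history; pass this via the messages
// parameter instead of prompt. The content strings are made up
// for the example.
const messages = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
  { role: 'assistant', content: 'Paris.' },
  { role: 'user', content: 'And the capital of Italy?' },
] as const;

// Usage: generateText({ model, messages })
```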
system
string
A system message that will be part of the prompt.
tools
ToolSet
Tools that are accessible to and can be called by the model. The model needs to support calling tools.
toolChoice
ToolChoice
The tool choice strategy. Default: 'auto'.
maxOutputTokens
number
Maximum number of tokens to generate.
temperature
number
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.
topP
number
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.
topK
number
Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses. Recommended for advanced use cases only. You usually only need to use temperature.
presencePenalty
number
Presence penalty setting. It affects the likelihood of the model repeating information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.
frequencyPenalty
number
Frequency penalty setting. It affects the likelihood of the model repeating the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.
stopSequences
Array<string>
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
seed
number
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
maxRetries
number
default:"2"
Maximum number of retries. Set to 0 to disable retries.
abortSignal
AbortSignal
An optional abort signal that can be used to cancel the call.
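Two common ways to build such a signal are a manual `AbortController` and the static `AbortSignal.timeout` helper (available in Node 17.3+ and modern browsers). A brief sketch:

```typescript
// A manually cancellable signal:
const controller = new AbortController();
// e.g. generateText({ ..., abortSignal: controller.signal })
controller.abort(); // cancels the in-flight call

// A signal that aborts automatically after 10 seconds
// (requires AbortSignal.timeout support):
const timeoutSignal = AbortSignal.timeout(10_000);
```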
timeout
number
An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.
headers
Record<string, string>
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
stopWhen
StopCondition | Array<StopCondition>
default:"stepCountIs(1)"
Condition for stopping the generation when there are tool results in the last step. When the condition is an array, the generation stops as soon as any one of the conditions is met.
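A StopCondition is essentially a predicate over the steps completed so far. The sketch below is a simplified local stand-in that approximates the shape of the SDK's `stepCountIs` helper (the real one is exported from 'ai'; this is an illustration, not the SDK's implementation):

```typescript
// Simplified stand-in illustrating the shape of a StopCondition.
// The real stepCountIs comes from the 'ai' package; this local
// version only approximates its behavior for illustration.
type Step = { toolResults: unknown[] };
type StopCondition = (ctx: { steps: Step[] }) => boolean;

const stepCountIs =
  (count: number): StopCondition =>
  ({ steps }) =>
    steps.length >= count;

// With stopWhen: stepCountIs(5), generation stops once five steps
// have completed, even if the last step still produced tool results.
const stop = stepCountIs(5);
```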
output
Output
Optional specification for parsing structured outputs from the LLM response.
activeTools
Array<keyof TOOLS>
Limits the tools that are available for the model to call without changing the tool call and result types in the result.
prepareStep
PrepareStepFunction
Optional function that you can use to provide different settings for a step.
experimental_repairToolCall
ToolCallRepairFunction
A function that attempts to repair a tool call that failed to parse.
experimental_download
DownloadFunction
Custom download function to use for URLs. By default, files are downloaded if the model does not support the URL for the given media type.
experimental_context
unknown
Context that is passed into tool execution.
experimental_telemetry
TelemetrySettings
Optional telemetry configuration (experimental).
providerOptions
ProviderOptions
Additional provider-specific options. They are passed through to the provider from the AI SDK and enable provider-specific functionality that can be fully encapsulated in the provider.
experimental_onStart
(event: OnStartEvent) => void
Callback invoked when generation begins, before any LLM calls.
experimental_onStepStart
(event: OnStepStartEvent) => void
Callback invoked when each step begins, before the provider is called.
experimental_onToolCallStart
(event: OnToolCallStartEvent) => void
Callback invoked before each tool execution begins.
experimental_onToolCallFinish
(event: OnToolCallFinishEvent) => void
Callback invoked after each tool execution completes.
onStepFinish
(event: OnStepFinishEvent) => void
Callback that is called when each step (LLM call) is finished, including intermediate steps.
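A typical use of this callback is tallying usage across intermediate steps. A minimal sketch, where the event shape (`event.usage.totalTokens`) is an assumption for illustration:

```typescript
// Hypothetical sketch: tally token usage across intermediate steps.
// The event shape used here is assumed for the example.
type StepFinishEvent = { usage: { totalTokens: number } };

let totalTokens = 0;
const onStepFinish = (event: StepFinishEvent) => {
  totalTokens += event.usage.totalTokens;
};

// Simulating two steps as the SDK would invoke the callback:
onStepFinish({ usage: { totalTokens: 120 } });
onStepFinish({ usage: { totalTokens: 80 } });
```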
onFinish
(event: OnFinishEvent) => void
Callback that is called when all steps are finished and the response is complete.

Returns

text
string
The generated text.
content
Array<ContentPart>
The content parts from the final step.
toolCalls
Array<ToolCall>
The tool calls from the final step.
toolResults
Array<ToolResult>
The tool results from the final step.
finishReason
FinishReason
The reason why the generation finished.
usage
LanguageModelUsage
The token usage of the final step.
totalUsage
LanguageModelUsage
The total token usage across all steps.
warnings
Array<CallWarning>
Warnings from the model provider (e.g., unsupported settings).
steps
Array<StepResult>
Details for all steps.
response
LanguageModelResponseMetadata
Response metadata from the final step.
output
OUTPUT
The parsed output (only when using the output parameter).

Examples

Basic text generation

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Invent a new holiday.',
});

console.log(result.text);

With tools

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    weather: tool({
      description: 'Get the weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // Call weather API
        return { temperature: 72, condition: 'sunny' };
      },
    }),
  },
});

console.log(result.text);
console.log(result.toolCalls);
console.log(result.toolResults);

With structured output

import { generateText, Output } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Generate a person profile',
  output: Output.object({
    schema: z.object({
      name: z.string(),
      age: z.number(),
      occupation: z.string(),
    }),
  }),
});

console.log(result.output);