If you do not want to stream the response, you can use generateText instead.
Parameters
The language model to use.
A simple text prompt. You can either use prompt or messages, but not both.
A list of messages. You can either use prompt or messages, but not both.
A system message that will be part of the prompt.
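A minimal sketch of how the model, system, and prompt parameters fit together; it assumes the @ai-sdk/openai provider package and uses a placeholder model id:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Basic call: a system message plus a simple text prompt.
// Use `messages` instead of `prompt` for multi-turn conversations,
// but never both at once.
const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  system: 'You are a concise assistant.',
  prompt: 'Explain streaming in one sentence.',
});

// The text stream is an async iterable of text deltas.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```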
Tools that are accessible to and can be called by the model. The model needs to support calling tools.
The tool choice strategy. Default: 'auto'.
Maximum number of tokens to generate.
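The tools, toolChoice, and token-limit parameters might be combined as in this sketch; the weather tool and its implementation are hypothetical, and the schema uses Zod:

```typescript
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  tools: {
    // Hypothetical tool for illustration only.
    weather: tool({
      description: 'Get the weather for a city',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 21 }), // stub result
    }),
  },
  toolChoice: 'auto', // or 'none', 'required', { type: 'tool', toolName: 'weather' }
  maxOutputTokens: 512,
  prompt: 'What is the weather in Berlin?',
});
```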
Temperature setting. The value is passed through to the provider. The range depends on the provider and model.
It is recommended to set either temperature or topP, but not both.
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model.
It is recommended to set either temperature or topP, but not both.
Only sample from the top K options for each subsequent token.
Used to remove “long tail” low probability responses.
Recommended for advanced use cases only. You usually only need to use temperature.
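The sampling settings above might look like this in practice; the specific values are illustrative, and valid ranges depend on the provider and model:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Write a haiku about rivers.',
  temperature: 0.7, // set either temperature or topP, not both
  // topP: 0.9,     // nucleus sampling alternative to temperature
  // topK: 40,      // advanced: trim the long tail of low-probability tokens
});
```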
Presence penalty setting.
It affects the likelihood that the model repeats information that is already in the prompt.
The value is passed through to the provider. The range depends on the provider and model.
Frequency penalty setting.
It affects the likelihood that the model repeatedly uses the same words or phrases.
The value is passed through to the provider. The range depends on the provider and model.
Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
The seed (integer) to use for random sampling.
If set and supported by the model, calls will generate deterministic results.
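The penalty, stop-sequence, and seed parameters can be sketched together as follows; values are illustrative, and determinism via seed depends on model support:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'List three colors, then write END.',
  presencePenalty: 0,       // passed through to the provider
  frequencyPenalty: 0.5,    // discourage repeated words and phrases
  stopSequences: ['END'],   // generation stops when 'END' is produced
  seed: 42,                 // deterministic results if the model supports it
});
```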
Maximum number of retries. Set to 0 to disable retries.
An optional abort signal that can be used to cancel the call.
An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
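Retries, cancellation, and custom headers might be configured as in this sketch; the header name and value are hypothetical examples:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const controller = new AbortController();

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Summarize this document.',
  maxRetries: 0, // disable retries entirely
  abortSignal: controller.signal, // or AbortSignal.timeout(10_000) for a deadline
  headers: { 'x-request-id': 'abc123' }, // hypothetical header, HTTP providers only
});

// Later, e.g. when the user navigates away:
// controller.abort();
```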
Condition for stopping the generation when there are tool results in the last step.
When the condition is an array, any of the conditions can be met to stop the generation.
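The stopping condition can be sketched with the stepCountIs and hasToolCall helpers; the 'finalAnswer' tool name is a hypothetical example:

```typescript
import { streamText, stepCountIs, hasToolCall } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  tools: {
    // define tools here
  },
  prompt: 'Research the question and give a final answer.',
  // With an array, generation stops as soon as ANY condition is met:
  // after 5 steps, or once the (hypothetical) finalAnswer tool is called.
  stopWhen: [stepCountIs(5), hasToolCall('finalAnswer')],
});
```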
Optional specification for parsing structured outputs from the LLM response.
Limits the tools that are available for the model to call without changing the tool call and result types in the result.
Optional function that you can use to provide different settings for a step.
A function that attempts to repair a tool call that failed to parse.
Optional stream transformations. They are applied in the order they are provided.
The stream transformations must maintain the stream structure for streamText to work correctly.
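A stream transformation might be applied as in this sketch, using the SDK's smoothStream helper; transformations run in array order and must preserve the stream structure:

```typescript
import { streamText, smoothStream } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Tell a short story.',
  // Applied in order; each transform must keep the stream well-formed
  // so that streamText can still process it.
  experimental_transform: [smoothStream()],
});
```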
Custom download function to use for URLs.
By default, files are downloaded if the model does not support the URL for the given media type.
Whether to include raw chunks from the provider in the stream.
When enabled, you will receive raw chunks with type 'raw' that contain the unprocessed data from the provider.
This allows access to cutting-edge provider features not yet wrapped by the AI SDK.
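Raw chunks might be consumed from the full stream as in this sketch; the exact shape of a raw part depends on the provider:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Hello!',
  includeRawChunks: true,
});

for await (const part of result.fullStream) {
  if (part.type === 'raw') {
    // Contains the unprocessed provider data; shape is provider-specific.
    console.log(part);
  }
}
```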
Context that is passed into tool execution.
Optional telemetry configuration (experimental).
Additional provider-specific options. They are passed through to the provider from the AI SDK
and enable provider-specific functionality that can be fully encapsulated in the provider.
Callback that is called for each chunk of the stream.
The stream processing will pause until the callback promise is resolved.
Callback that is invoked when an error occurs during streaming.
You can use it to log errors. The stream processing will pause until the callback promise is resolved.
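The onChunk and onError callbacks might be wired up as follows; note that stream processing pauses until each returned promise resolves:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Hello!',
  onChunk({ chunk }) {
    // Called for each stream chunk; may be async, and the stream
    // pauses until the promise resolves.
    if (chunk.type === 'text-delta') {
      // incremental text arrived
    }
  },
  onError({ error }) {
    // Log streaming errors; the stream pauses until this resolves.
    console.error('stream error:', error);
  },
});
```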
Callback invoked when generation begins, before any LLM calls.
Callback invoked when each step begins, before the provider is called.
Callback invoked before each tool execution begins.
Callback invoked after each tool execution completes.
Callback that is called when each step (LLM call) is finished, including intermediate steps.
Callback that is called when the LLM response and all requested tool executions are finished.
The usage is the combined usage of all steps.
Callback that is called when the stream is aborted.
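The step, finish, and abort callbacks can be sketched together like this:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Hello!',
  onStepFinish({ usage }) {
    // Fires after each step (LLM call), including intermediate ones.
    console.log('step usage:', usage);
  },
  onFinish({ text, usage }) {
    // Fires once the response and all tool executions are done;
    // usage here is the combined usage across all steps.
    console.log('finished:', text.length, 'chars,', usage);
  },
  onAbort() {
    console.log('stream aborted');
  },
});
```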
Returns
A text stream that returns only the generated text deltas. You can use it as an async iterable or call textStream.getReader() to get a reader.
A stream with all events, including text deltas, tool calls, tool results, and metadata.
A promise that resolves to the total token usage.
A promise that resolves to the finish reason.
A promise that resolves to the details for all steps.
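Consuming the returned streams and promises might look like this sketch; the promises resolve only after the stream has finished:

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'), // placeholder model id
  prompt: 'Hello!',
});

// textStream is an async iterable of text deltas.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

// These promises resolve once generation completes.
console.log(await result.usage);        // total token usage
console.log(await result.finishReason); // e.g. 'stop'
console.log(await result.steps);        // details for all steps
```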
Creates a simple text stream response for easier integration with the Vercel AI SDK UI hooks. The response is a text/plain stream that streams the text parts.
Pipes the text stream to a Node.js response-like object.
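These response helpers might be used as in this sketch of a web-standard route handler (the Next.js-style POST handler is an assumed setup):

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Assumed setup: a web-standard route handler (e.g. a Next.js route).
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'), // placeholder model id
    prompt,
  });

  // Returns a text/plain streaming Response of the text parts.
  return result.toTextStreamResponse();
}

// In a plain Node.js server, pipe to the response object instead:
// result.pipeTextStreamToResponse(res);
```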