If you do not want to stream the object, you can use generateObject instead.

streamObject is deprecated. Use streamText with an output setting instead.

Parameters
model
The language model to use.
schema
The schema of the object that the model should generate. Required unless using output: 'enum' or output: 'no-schema'.

schemaName
Optional name of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema name.
schemaDescription
Optional description of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema description.
output
The type of the output. Defaults to 'object'.
'object': The output is an object.
'array': The output is an array.
'enum': The output is an enum value.
'no-schema': No schema is used.
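For illustration, a minimal sketch of the 'enum' output mode, assuming the @ai-sdk/openai provider; the model name and prompt are placeholders, and the enum parameter is described below:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  output: 'enum',
  enum: ['action', 'comedy', 'drama'], // allowed values, see the enum parameter below
  prompt: 'Classify the genre of this movie plot: ...',
});

for await (const partial of result.partialObjectStream) {
  console.log(partial); // streams the classification as it is generated
}
console.log(await result.object); // resolves to one of the enum values
```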
enum
The enum values that the model should use. Required when output: 'enum'.

prompt
A simple text prompt. You can use either prompt or messages, but not both.

messages
A list of messages. You can use either prompt or messages, but not both.

system
A system message that will be part of the prompt.
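Taken together, a minimal sketch of a schema-driven call, assuming the @ai-sdk/openai provider and a Zod schema; the model name, schema, and prompt are placeholders:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

// deep partial objects arrive as the model generates
for await (const partialObject of result.partialObjectStream) {
  console.log(partialObject);
}
```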
maxOutputTokens
Maximum number of tokens to generate.
temperature
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.

topP
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.

topK
Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses. Recommended for advanced use cases only; you usually only need to use temperature.
presencePenalty
Presence penalty setting. It affects the likelihood that the model repeats information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.

frequencyPenalty
Frequency penalty setting. It affects the likelihood that the model repeatedly uses the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.

seed
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
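As an illustration, a hedged sketch of these call settings; the provider, model, schema, and prompt are placeholders as in the earlier example:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  schema: z.object({ headline: z.string() }),
  prompt: 'Write a headline about meteor showers.',
  temperature: 0.3, // set either temperature or topP, not both
  seed: 42,         // deterministic results where the model supports it
});
```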
maxRetries
Maximum number of retries. Set to 0 to disable retries.

abortSignal
An optional abort signal that can be used to cancel the call.

headers
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
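For instance, a minimal sketch of cancellation and custom headers; the header name is purely illustrative, and the other values are placeholders as before:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const controller = new AbortController();

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  schema: z.object({ summary: z.string() }),
  prompt: 'Summarize the attached report.',
  maxRetries: 0, // fail immediately instead of retrying
  abortSignal: controller.signal,
  headers: { 'X-Request-Id': 'abc-123' }, // hypothetical header
});

// cancel the call if it takes longer than ten seconds
setTimeout(() => controller.abort(), 10_000);
```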
experimental_repairText
A function that attempts to repair the raw output of the model to enable JSON parsing.

experimental_download
Custom download function to use for URLs. By default, files are downloaded if the model does not support the URL for the given media type.

experimental_telemetry
Optional telemetry configuration (experimental).
providerOptions
Additional provider-specific options. They are passed through to the provider from the AI SDK and enable provider-specific functionality that can be fully encapsulated in the provider.
onError
Callback that is invoked when an error occurs during streaming. You can use it to log errors. Stream processing pauses until the callback promise is resolved.
onFinish
Callback that is called when the LLM response and the final object validation are finished.
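A hedged sketch of both callbacks; the event fields shown are the commonly used ones, and the other values are placeholders as before:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  schema: z.object({ name: z.string() }),
  prompt: 'Generate a product name.',
  onError({ error }) {
    // streaming errors surface here rather than being thrown
    console.error(error);
  },
  onFinish({ object, usage }) {
    // object is the validated result; undefined if validation failed
    console.log(object, usage);
  },
});
```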
Returns
partialObjectStream
A stream of partial object updates. The stream returns deep partial objects as they are generated.
elementStream
A stream of array elements. Only available when output: 'array'.

textStream
A text stream that returns only the generated text deltas.
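For example, a minimal sketch of array output consumed through elementStream; placeholders as before:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  output: 'array',
  schema: z.object({
    name: z.string(),
    class: z.string(),
  }),
  prompt: 'Generate three RPG character descriptions.',
});

// each element arrives once it is complete and validated
for await (const character of result.elementStream) {
  console.log(character);
}
```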
fullStream
A stream with all events, including partial objects, text deltas, and finish events.
object
A promise that resolves to the final generated object (typed according to the schema).

usage
A promise that resolves to the token usage of the generated response.

finishReason
A promise that resolves to the reason why the generation finished.

warnings
A promise that resolves to warnings from the model provider (e.g., unsupported settings).

response
A promise that resolves to response metadata.

request
A promise that resolves to request metadata.

providerMetadata
A promise that resolves to additional provider-specific metadata.
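Taken together, a sketch of reading these promises after draining the stream; placeholders as before:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o'), // assumed provider/model
  schema: z.object({ name: z.string() }),
  prompt: 'Generate a product name.',
});

// drain the stream so generation runs to completion
for await (const _ of result.partialObjectStream) {
}

console.log(await result.object);       // final validated object
console.log(await result.usage);        // token usage
console.log(await result.finishReason); // e.g. 'stop'
```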
toTextStreamResponse()
Creates a simple text stream response. The response is a text/plain stream that streams the text parts.

pipeTextStreamToResponse()
Pipes the text stream to a Node.js response-like object.
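For instance, a minimal sketch of returning the text stream from a server route; a Next.js App Router handler is assumed, but any environment that accepts a web Response works the same way:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamObject({
    model: openai('gpt-4o'), // assumed provider/model
    schema: z.object({ summary: z.string() }),
    prompt,
  });

  // text/plain stream of the raw text parts
  return result.toTextStreamResponse();
}
```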