If you want to stream the output, use streamObject instead.

generateObject is deprecated. Use generateText with an output setting instead.

Parameters
model
The language model to use.

schema
The schema of the object that the model should generate. Required unless using output: 'enum' or output: 'no-schema'.

schemaName
Optional name of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema name.

schemaDescription
Optional description of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema description.

output
The type of the output.
'object': The output is a single object.
'array': The output is an array of objects.
'enum': The output is one of a set of enum values.
'no-schema': No schema is provided; the model generates free-form JSON.

enum
The enum values that the model should use. Required when output: 'enum'.

prompt
A simple text prompt. You can either use prompt or messages, but not both.

messages
A list of messages. You can either use prompt or messages, but not both.

system
A system message that will be part of the prompt.
maxOutputTokens
Maximum number of tokens to generate.

temperature
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.

topP
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.

topK
Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses. Recommended for advanced use cases only; you usually only need temperature.
presencePenalty
Presence penalty setting. It affects the likelihood that the model repeats information already in the prompt. The value is passed through to the provider. The range depends on the provider and model.

frequencyPenalty
Frequency penalty setting. It affects the likelihood that the model repeatedly uses the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.

seed
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.

maxRetries
Maximum number of retries. Set to 0 to disable retries.

abortSignal
An optional abort signal that can be used to cancel the call.

headers
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
experimental_repairText
A function that attempts to repair the raw output of the model to enable JSON parsing.
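A common failure mode the repair hook can address is JSON wrapped in markdown code fences. Below is a minimal sketch of such a helper; the function name is illustrative, and in the actual SDK the experimental_repairText hook is async and also receives the parse error:

```typescript
// Sketch of a repair helper for experimental_repairText (assumption:
// the model sometimes wraps its JSON output in markdown code fences).
function stripMarkdownFences(text: string): string | null {
  const trimmed = text.trim();
  // Capture the body of a ```json ... ``` (or bare ```) wrapper, if present.
  const match = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  const candidate = match?.[1] ?? trimmed;
  try {
    JSON.parse(candidate); // only return text that now parses as JSON
    return candidate;
  } catch {
    return null; // null signals that the output could not be repaired
  }
}
```

It could then be wired up as, e.g., `experimental_repairText: async ({ text }) => stripMarkdownFences(text)` (the destructured option shape is an assumption).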
experimental_download
Custom download function to use for URLs. By default, files are downloaded if the model does not support the URL for the given media type.

experimental_telemetry
Optional telemetry configuration (experimental).

providerOptions
Additional provider-specific options. They are passed through to the provider from the AI SDK and enable provider-specific functionality that can be fully encapsulated in the provider.
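Taken together, a typical call might look like the following sketch. It assumes the AI SDK ('ai' package), the '@ai-sdk/openai' provider, and Zod are installed and an API key is configured; the model id and the Recipe schema are placeholders:

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai'; // provider choice is an assumption
import { z } from 'zod';

// Generate a typed object from a prompt (requires network access and a key).
const { object, finishReason, usage } = await generateObject({
  model: openai('gpt-4.1'),                 // placeholder model id
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
  }),
  schemaName: 'Recipe',                     // optional, used for LLM guidance
  prompt: 'Generate a simple pasta recipe.', // prompt OR messages, not both
  temperature: 0.3,                          // set temperature OR topP, not both
  maxRetries: 2,
});

console.log(object.name, finishReason, usage);
```

Because schema is a Zod object here, `object` is fully typed: `object.ingredients` is inferred as `string[]`.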
Returns
object
The generated object (typed according to the schema).

reasoning
The reasoning text, if the model supports reasoning output.

finishReason
The reason why the generation finished.

usage
The token usage of the generated response.

warnings
Warnings from the model provider (e.g., unsupported settings).

response
Response metadata.

request
Request metadata.

providerMetadata
Additional provider-specific metadata.

toJsonResponse()
Converts the object to a JSON response.
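toJsonResponse() is convenient in server routes, where the generated object can be returned directly as an HTTP JSON response. A sketch, assuming a Web-standard Request/Response route handler and the same placeholder provider and schema as above:

```typescript
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai'; // provider choice is an assumption
import { z } from 'zod';

// Sketch of a route handler that returns the generated object as JSON.
export async function POST(req: Request): Promise<Response> {
  const { prompt } = await req.json();
  const result = await generateObject({
    model: openai('gpt-4.1'), // placeholder model id
    schema: z.object({ summary: z.string() }),
    prompt,
  });
  return result.toJsonResponse(); // serializes the object as a JSON body
}
```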