Generates a structured, typed object for a given prompt and schema using a language model. This function does not stream the output. If you want to stream the output, use streamObject instead.
generateObject is deprecated. Use generateText with an output setting instead.
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
  }),
  prompt: 'Generate a person profile',
});

console.log(result.object);

Parameters

model
LanguageModel
required
The language model to use.
schema
FlexibleSchema
The schema of the object that the model should generate. Required unless using output: 'enum' or output: 'no-schema'.
schemaName
string
Optional name of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema name.
schemaDescription
string
Optional description of the output that should be generated. Used by some providers for additional LLM guidance, e.g., via tool or schema description.
output
'object' | 'array' | 'enum' | 'no-schema'
The type of the output.
  • 'object': The output is a single object matching the schema (default).
  • 'array': The output is an array whose elements match the schema.
  • 'enum': The output is one of the values given in enum.
  • 'no-schema': No schema is used; the output is returned as untyped JSON.
enum
Array<string>
The enum values that the model should use. Required when output: 'enum'.
prompt
string
A simple text prompt. You can either use prompt or messages but not both.
messages
Array<CoreMessage>
A list of messages. You can either use prompt or messages but not both.
system
string
A system message that will be part of the prompt.
maxOutputTokens
number
Maximum number of tokens to generate.
temperature
number
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.
topP
number
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either temperature or topP, but not both.
topK
number
Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.
presencePenalty
number
Presence penalty setting. It affects the likelihood that the model repeats information already present in the prompt. The value is passed through to the provider. The range depends on the provider and model.
frequencyPenalty
number
Frequency penalty setting. It affects the likelihood that the model repeatedly uses the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.
seed
number
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
maxRetries
number
default: 2
Maximum number of retries. Set to 0 to disable retries.
abortSignal
AbortSignal
An optional abort signal that can be used to cancel the call.
headers
Record<string, string>
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
experimental_repairText
RepairTextFunction
A function that attempts to repair the raw output of the model to enable JSON parsing.
experimental_download
DownloadFunction
Custom download function to use for URLs. By default, files are downloaded if the model does not support the URL for the given media type.
experimental_telemetry
TelemetrySettings
Optional telemetry configuration (experimental).
providerOptions
ProviderOptions
Additional provider-specific options. They are passed through to the provider from the AI SDK and enable provider-specific functionality that can be fully encapsulated in the provider.

Returns

object
RESULT
The generated object (typed according to the schema).
reasoning
string | undefined
The reasoning text if the model supports reasoning output.
finishReason
FinishReason
The reason why the generation finished.
usage
LanguageModelUsage
The token usage of the generated response.
warnings
Array<CallWarning>
Warnings from the model provider (e.g., unsupported settings).
response
LanguageModelResponseMetadata
Response metadata.
request
LanguageModelRequestMetadata
Request metadata.
providerMetadata
ProviderMetadata
Additional provider-specific metadata.
toJsonResponse
(init?: ResponseInit) => Response
Converts the result to a JSON Response, e.g., for returning the generated object directly from an HTTP route handler.

Examples

Object generation

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    occupation: z.string(),
  }),
  prompt: 'Generate a person profile for a software engineer',
});

console.log(result.object);
// { name: 'John Doe', age: 30, occupation: 'Software Engineer' }

Array generation

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4-turbo'),
  output: 'array',
  schema: z.object({
    name: z.string(),
    color: z.string(),
  }),
  prompt: 'Generate 3 fruit names with their colors',
});

console.log(result.object);
// [
//   { name: 'Apple', color: 'red' },
//   { name: 'Banana', color: 'yellow' },
//   { name: 'Orange', color: 'orange' },
// ]

Enum generation

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateObject({
  model: openai('gpt-4-turbo'),
  output: 'enum',
  enum: ['action', 'comedy', 'drama', 'horror', 'sci-fi'],
  prompt: 'Classify this movie: Inception',
});

console.log(result.object);
// 'sci-fi'