Generating Text

Large language models (LLMs) generate text in response to prompts containing instructions and information. The AI SDK Core provides two primary functions for text generation:
  • generateText: Generates text in a single request
  • streamText: Streams text as it’s generated

generateText

Use generateText for non-interactive use cases where you need the complete response before proceeding. This is ideal for:
  • Batch processing
  • Email drafting
  • Content summarization
  • Agents that use tools

Basic Usage

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-5'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(text);

System Messages

Use system messages to set the behavior and context for the model:
const { text } = await generateText({
  model: openai('gpt-5'),
  system: 'You are a professional writer. You write simple, clear, and concise content.',
  prompt: `Summarize the following article in 3-5 sentences: ${article}`,
});

Multi-Turn Conversations

For conversations, use the messages array:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-5'),
  messages: [
    { role: 'user', content: 'Hi, my name is Alice.' },
    { role: 'assistant', content: 'Hello Alice! How can I help you today?' },
    { role: 'user', content: 'What is my name?' },
  ],
});

console.log(text); // "Your name is Alice."

Result Object

The generateText function returns a comprehensive result object:
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Explain TypeScript in one sentence.',
});

// Access different properties
console.log(result.text);          // The generated text
console.log(result.finishReason);  // Why generation stopped
console.log(result.usage);         // Token usage information
console.log(result.warnings);      // Any provider warnings

Result Properties

  • text (string): The generated text content
  • content (Array<ContentPart>): The structured content, including text and tool calls
  • finishReason ('stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown'): The reason the model stopped generating
  • usage (object): Token usage information:
      • promptTokens: Tokens in the prompt
      • completionTokens: Tokens in the completion
      • totalTokens: Total tokens used
  • response (object): Response metadata, including:
      • id: Response ID
      • modelId: Model used
      • timestamp: Response timestamp
      • messages: Generated messages
      • headers: HTTP response headers
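A common pattern is to branch on finishReason after generation. As a minimal sketch, a hypothetical helper (not part of the SDK) could map each reason to a follow-up action:

```typescript
// Hypothetical helper: map a finishReason to a human-readable next step.
// The union below mirrors the finishReason values listed above.
type FinishReason =
  | 'stop' | 'length' | 'content-filter' | 'tool-calls'
  | 'error' | 'other' | 'unknown';

function describeFinish(reason: FinishReason): string {
  switch (reason) {
    case 'stop':
      return 'Model finished naturally.';
    case 'length':
      return 'Hit maxOutputTokens; consider raising the limit.';
    case 'content-filter':
      return 'Output was filtered by the provider.';
    case 'tool-calls':
      return 'Model stopped to call tools.';
    default:
      return `Generation ended: ${reason}.`;
  }
}

console.log(describeFinish('length'));
```

In a real application you would call this with `result.finishReason` and, for example, retry with a higher maxOutputTokens when the reason is 'length'.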

onFinish Callback

Execute code when generation completes:
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Invent a new holiday and describe its traditions.',
  onFinish({ text, finishReason, usage }) {
    // Save to database, log usage, etc.
    console.log('Generated text:', text);
    console.log('Tokens used:', usage.totalTokens);
  },
});

streamText

Use streamText for interactive applications where you want to display text as it’s generated. This provides a better user experience for:
  • Chatbots
  • Real-time content generation
  • Interactive assistants

Basic Usage

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Write a short story about a robot learning to paint.',
});

// Stream as an async iterable
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

Text Stream

The textStream property is both a ReadableStream and an AsyncIterable:
const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Count to 10',
});

// Option 1: Async iteration
for await (const chunk of result.textStream) {
  console.log(chunk);
}

// Option 2: Stream reader
const reader = result.textStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(value);
}

Full Stream

For advanced use cases, access all stream events with fullStream:
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Tell me about the solar system',
});

for await (const part of result.fullStream) {
  switch (part.type) {
    case 'text-delta':
      process.stdout.write(part.text);
      break;
    case 'finish':
      console.log('\nFinish reason:', part.finishReason);
      break;
  }
}

Stream Event Types

  • start: Stream begins
  • text-delta: New text chunk
  • text-end: Text generation complete
  • tool-call: Model called a tool
  • tool-result: Tool execution result
  • finish: Stream complete
  • error: An error occurred

Promises

streamText provides promises that resolve when streaming completes:
const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Explain quantum computing',
});

// Access final values
const text = await result.text;           // Complete text
const usage = await result.usage;         // Token usage
const finishReason = await result.finishReason;

console.log('Final text:', text);
console.log('Tokens used:', usage.totalTokens);

Callbacks

onChunk

Process each chunk as it arrives:
const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Write a poem',
  onChunk({ chunk }) {
    if (chunk.type === 'text-delta') {
      console.log('Chunk:', chunk.text);
    }
  },
});

onFinish

Execute code when streaming completes:
const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Generate a story',
  onFinish({ text, usage, finishReason }) {
    console.log('Complete text:', text);
    console.log('Tokens used:', usage.totalTokens);
  },
});

onError

Handle errors in the stream:
const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Generate content',
  onError({ error }) {
    console.error('Stream error:', error);
    // Log to error tracking service
  },
});

Common Parameters

Both generateText and streamText support these parameters:
  • model (LanguageModel, required): The language model to use (e.g., openai('gpt-5'))
  • prompt (string): Simple text prompt (cannot be used with messages)
  • messages (Array<Message>): Array of conversation messages
  • system (string): System message to set model behavior
  • maxOutputTokens (number): Maximum number of tokens to generate
  • temperature (number): Randomness in generation (0 = deterministic, higher = more random)
  • tools (object): Tools the model can use (see Tool Calling)
  • output (Output): Structured output specification (see Structured Data)
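Because prompt and messages are mutually exclusive, it can help to validate call options before invoking the SDK. This hypothetical guard (not part of the SDK) encodes that rule:

```typescript
// Hypothetical guard mirroring the rule above: exactly one of
// `prompt` or `messages` must be provided, never both.
function hasValidInput(opts: { prompt?: string; messages?: unknown[] }): boolean {
  const hasPrompt = typeof opts.prompt === 'string';
  const hasMessages = Array.isArray(opts.messages);
  return (hasPrompt || hasMessages) && !(hasPrompt && hasMessages);
}

console.log(hasValidInput({ prompt: 'Hi' }));               // true
console.log(hasValidInput({ messages: [] }));               // true
console.log(hasValidInput({ prompt: 'Hi', messages: [] })); // false
console.log(hasValidInput({}));                             // false
```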

Examples

Email Draft Generator

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-5'),
  system: 'You are a professional email writer. Write clear, concise emails.',
  prompt: `Write a follow-up email to a client about project status.
  
  Context: The redesign project is 80% complete and on track for next week's deadline.`,
});

console.log(text);

Interactive Chatbot

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const messages = [
  { role: 'user', content: 'What are some good books about AI?' },
];

const result = streamText({
  model: openai('gpt-5'),
  messages,
});

// Display text as it streams
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Add response to conversation
const response = await result.response;
messages.push(...response.messages);
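The bookkeeping in the chatbot above (growing the shared messages array each turn) can be sketched without the API. The helper and message shape below are illustrative assumptions, not SDK exports:

```typescript
// Minimal message shape for this sketch; the SDK's message types are richer.
type ChatMessage = { role: 'user' | 'assistant'; content: string };

// Append one user/assistant exchange to the history, returning a new array
// so earlier snapshots of the conversation stay untouched.
function appendTurn(
  history: ChatMessage[],
  userText: string,
  assistantText: string,
): ChatMessage[] {
  return [
    ...history,
    { role: 'user', content: userText },
    { role: 'assistant', content: assistantText },
  ];
}

let history: ChatMessage[] = [];
history = appendTurn(
  history,
  'What are some good books about AI?',
  'Here are a few widely recommended titles...',
);

console.log(history.length); // 2
```

In the real loop, the assistant text comes from the streamed result, and `response.messages` already carries the structured assistant messages to push.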

Next Steps

Structured Data

Generate type-safe structured data

Tool Calling

Enable models to use tools

Settings

Configure generation parameters

Prompt Engineering

Write effective prompts