Prompts are the instructions you give a large language model (LLM) to tell it what to do. It's like asking someone for directions: the clearer your question, the better the directions you'll get. Many LLM providers offer complex interfaces for specifying prompts, involving different roles and message types. While these interfaces are powerful, they can be hard to use and understand. The AI SDK simplifies prompting by supporting three types of prompts: text prompts, message prompts, and system prompts.

Text prompts

Text prompts are the simplest form: a plain string. They are ideal for straightforward generation use cases.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

console.log(result.text);

Dynamic prompts

You can use template literals to provide dynamic data:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const destination = 'Paris';
const lengthOfStay = 5;

const result = await generateText({
  model: openai('gpt-4'),
  prompt: 
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});
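When the same dynamic prompt is used in several places, it can help to wrap the template in a small helper function. A minimal sketch; the `buildTripPrompt` name is our own, not part of the AI SDK:

```typescript
// Hypothetical helper: builds the trip-planning prompt from dynamic data.
function buildTripPrompt(destination: string, lengthOfStay: number): string {
  return (
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`
  );
}

// The returned string can be passed directly as the `prompt` option.
const prompt = buildTripPrompt('Paris', 5);
```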

System prompts

System prompts are instructions that guide the model’s behavior and responses. They work with both text and message prompts.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const destination = 'Paris';
const lengthOfStay = 5;

const result = await generateText({
  model: openai('gpt-4'),
  system:
    `You help plan travel itineraries. ` +
    `Respond to the user's request with a list ` +
    `of the best stops to make in their destination.`,
  prompt:
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});

Message prompts

Message prompts are arrays of messages with different roles. They’re great for chat interfaces and multi-turn conversations.

Basic messages

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4'),
  messages: [
    { role: 'user', content: 'Hi!' },
    { role: 'assistant', content: 'Hello, how can I help?' },
    { role: 'user', content: 'Where can I buy the best Currywurst in Berlin?' },
  ],
});
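In a chat interface you typically keep the conversation history in an array and append each new turn before calling the model again. A minimal sketch of that pattern; the local `ChatMessage` type simply mirrors the `{ role, content }` shape used above:

```typescript
// Minimal local message type mirroring the { role, content } shape above.
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string };

const history: ChatMessage[] = [
  { role: 'user', content: 'Hi!' },
  { role: 'assistant', content: 'Hello, how can I help?' },
];

// Append the next user turn, then pass `history` as `messages`
// in the next generateText call.
history.push({
  role: 'user',
  content: 'Where can I buy the best Currywurst in Berlin?',
});
```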

Multi-modal messages

Messages can include multiple content types:

Text parts

const result = await generateText({
  model: openai('gpt-4'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Where can I buy the best Currywurst in Berlin?',
        },
      ],
    },
  ],
});

Image parts

Images can be provided as buffers, base64 strings, or URLs:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import fs from 'fs';

const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the image in detail.' },
        {
          type: 'image',
          image: fs.readFileSync('./data/comic-cat.png'),
        },
      ],
    },
  ],
});
Using a URL:
const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the image in detail.' },
        {
          type: 'image',
          image: 'https://example.com/image.png',
        },
      ],
    },
  ],
});
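The third option mentioned above, a base64 string, can be produced from the file bytes before building the message. A sketch; the in-memory buffer below stands in for bytes you would normally read with `fs.readFileSync`:

```typescript
// Example buffer standing in for image bytes read from disk.
// These four bytes are the PNG magic-number prefix.
const imageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]);

// Encode the bytes as a base64 string.
const base64Image = imageBytes.toString('base64');

// The string can then be passed as the image part:
// { type: 'image', image: base64Image }
```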

File parts

Some models support file attachments like PDFs:
import { generateText } from 'ai';
import { google } from '@ai-sdk/google';
import fs from 'fs';

const result = await generateText({
  model: google('gemini-2.5-flash'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is the file about?' },
        {
          type: 'file',
          mediaType: 'application/pdf',
          data: fs.readFileSync('./data/example.pdf'),
          filename: 'example.pdf',
        },
      ],
    },
  ],
});

Message roles

User messages

User messages represent input from the user:
{
  role: 'user',
  content: 'What is the weather like today?'
}

Assistant messages

Assistant messages represent previous responses from the model:
{
  role: 'assistant',
  content: 'I can help you check the weather.'
}
Assistant messages can also include tool calls:
{
  role: 'assistant',
  content: [
    {
      type: 'tool-call',
      toolCallId: '12345',
      toolName: 'get-weather',
      input: { location: 'San Francisco' },
    },
  ],
}

System messages

System messages provide context and instructions:
{
  role: 'system',
  content: 'You are a helpful travel planning assistant.'
}

Tool messages

Tool messages contain the results of tool executions:
{
  role: 'tool',
  content: [
    {
      type: 'tool-result',
      toolCallId: '12345',
      toolName: 'get-weather',
      output: {
        type: 'json',
        value: { temperature: 72, conditions: 'sunny' },
      },
    },
  ],
}
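Putting the two message types together, a complete tool round trip appears in the history as an assistant tool call followed by a matching tool result. A sketch; the shared `toolCallId` is what links the pair:

```typescript
// A tool round trip: the assistant's call and the tool's result
// share the same toolCallId so the model can pair them.
const toolCallId = '12345';

const toolTurn = [
  {
    role: 'assistant',
    content: [
      {
        type: 'tool-call',
        toolCallId,
        toolName: 'get-weather',
        input: { location: 'San Francisco' },
      },
    ],
  },
  {
    role: 'tool',
    content: [
      {
        type: 'tool-result',
        toolCallId,
        toolName: 'get-weather',
        output: {
          type: 'json',
          value: { temperature: 72, conditions: 'sunny' },
        },
      },
    ],
  },
];
```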

Provider options

You can pass provider-specific metadata at different levels:

Function level

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('o1'),
  providerOptions: {
    openai: {
      reasoningEffort: 'high',
    },
  },
  prompt: 'Solve this problem...',
});

Message level

import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  messages: [
    {
      role: 'system',
      content: 'You are a helpful assistant.',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'user',
      content: 'What is machine learning?',
    },
  ],
});

Message part level

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import fs from 'fs';

const imageBuffer = fs.readFileSync('./data/comic-cat.png');

const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Describe the image.',
          providerOptions: {
            openai: { imageDetail: 'high' },
          },
        },
        {
          type: 'image',
          image: imageBuffer,
          providerOptions: {
            openai: { imageDetail: 'high' },
          },
        },
      ],
    },
  ],
});

Prompt conversion

The AI SDK converts prompts to the format required by each provider:
// Your code
const result = await generateText({
  model: openai('gpt-4'),
  system: 'You are a helpful assistant.',
  messages: [
    { role: 'user', content: 'Hello!' },
  ],
});

// Internally converted to provider format
// {
//   messages: [
//     { role: 'system', content: 'You are a helpful assistant.' },
//     { role: 'user', content: 'Hello!' }
//   ]
// }
This abstraction is handled by the convertToLanguageModelPrompt function in the SDK core.
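The core of that conversion can be illustrated with a simplified sketch. This is our own illustration of the idea, not the SDK's actual convertToLanguageModelPrompt implementation:

```typescript
type Message = { role: string; content: string };

// Simplified sketch: a standalone system prompt becomes a leading
// system message in the provider-format message array.
function toProviderMessages(
  system: string | undefined,
  messages: Message[],
): Message[] {
  return system
    ? [{ role: 'system', content: system }, ...messages]
    : [...messages];
}

const converted = toProviderMessages('You are a helpful assistant.', [
  { role: 'user', content: 'Hello!' },
]);
```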