
Quickstart

This quickstart guide will walk you through building your first AI application using the AI SDK. You’ll learn how to generate text, stream responses, and create structured outputs.

Prerequisites

Before you begin, make sure you have:

  • Node.js 18 or later installed
  • An API key from your model provider (this guide uses OpenAI)

Setup

1. Create a new project

Create a new directory and initialize a Node.js project:
mkdir my-ai-app
cd my-ai-app
npm init -y
2. Install dependencies

Install the AI SDK, a provider package, and dotenv for loading environment variables, plus TypeScript tooling as dev dependencies:
npm install ai @ai-sdk/openai dotenv
npm install -D typescript tsx
3. Configure your API key

Create a .env file and add your API key:
OPENAI_API_KEY=your_api_key_here
Never commit your .env file to version control. Add it to .gitignore.
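One way to do that, sketched here for a POSIX shell, is to append the entry directly (this creates .gitignore if it doesn't exist):

```shell
# Keep the API key out of version control
echo ".env" >> .gitignore
```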

Generate text

Let’s start with the most basic example - generating text from a prompt. Create a file called generate-text.ts:
import 'dotenv/config';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

console.log(result.text);
console.log('Usage:', result.usage);
console.log('Finish reason:', result.finishReason);
Run the example:
npx tsx generate-text.ts

What’s happening here?

  • generateText: The core function for generating text completions
  • model: Specifies which AI model to use (GPT-4o in this case)
  • prompt: The input text that guides the model’s response
  • result.text: The generated text response
  • result.usage: Token usage information (prompt tokens, completion tokens, total tokens)
  • result.finishReason: Why the model stopped generating (e.g., “stop”, “length”)
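To make these fields concrete, here is a small standalone sketch with mocked values (the field names mirror the list above; in a real run they come back on the result object, and your SDK version may name the usage fields inputTokens/outputTokens instead):

```typescript
// Mocked values mirroring result.finishReason and result.usage above.
const finishReason = 'stop';
const usage = { promptTokens: 12, completionTokens: 88, totalTokens: 100 };

// A response is complete when the model stopped on its own ('stop' rather
// than 'length') and the token accounting adds up.
const isComplete =
  finishReason === 'stop' &&
  usage.promptTokens + usage.completionTokens === usage.totalTokens;

console.log('Complete response:', isComplete);
```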

Stream text

For longer responses, streaming provides a better user experience by showing results as they’re generated. Create a file called stream-text.ts:
import 'dotenv/config';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

// Stream the text
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

console.log('\n\nUsage:', await result.usage);
console.log('Finish reason:', await result.finishReason);
Run the example:
npx tsx stream-text.ts

Key differences

  • streamText: Returns a stream instead of waiting for the complete response
  • result.textStream: An async iterable that yields text chunks as they arrive
  • Properties like usage and finishReason are promises that resolve when streaming completes
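The textStream contract is just an async iterable of strings, so you can see the consumption pattern without calling a model at all. A dependency-free sketch (fakeTextStream is a stand-in, not part of the SDK):

```typescript
// A stand-in for result.textStream: an async generator yielding text chunks.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ['Once ', 'upon ', 'a ', 'time.']) {
    yield chunk;
  }
}

// Consume it exactly like the streaming example above.
let full = '';
for await (const chunk of fakeTextStream()) {
  full += chunk;
}
console.log(full); // "Once upon a time."
```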

Generate structured data

Generate type-safe structured outputs using Zod schemas:
1. Install Zod

npm install zod
2. Create the example

Create a file called generate-object.ts:
import 'dotenv/config';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        })
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

console.log('Recipe:', object.recipe.name);
console.log('\nIngredients:');
object.recipe.ingredients.forEach(ingredient => {
  console.log(`- ${ingredient.amount} ${ingredient.name}`);
});
console.log('\nSteps:');
object.recipe.steps.forEach((step, index) => {
  console.log(`${index + 1}. ${step}`);
});
3. Run the example

npx tsx generate-object.ts

Why structured outputs?

  • Type safety: Full TypeScript support with schema validation
  • Reliability: The model is constrained to follow your schema
  • Integration: Easy to integrate with databases and APIs
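Schema validation is what turns model output from untrusted text into data you can rely on. Here is a dependency-free sketch of the idea (validateRecipe and the Recipe type are illustrative; in the example above, Zod does this work for you):

```typescript
interface Recipe {
  name: string;
  steps: string[];
}

// Minimal runtime check, analogous to what the Zod schema enforces.
function validateRecipe(value: unknown): Recipe {
  const v = value as Partial<Recipe>;
  if (
    typeof v.name !== 'string' ||
    !Array.isArray(v.steps) ||
    !v.steps.every(s => typeof s === 'string')
  ) {
    throw new Error('Model output did not match the Recipe schema');
  }
  return v as Recipe;
}

// Models produce JSON text; validation turns it into typed data.
const raw = '{"name":"Lasagna","steps":["Boil noodles","Layer","Bake"]}';
const recipe = validateRecipe(JSON.parse(raw));
console.log(recipe.name); // "Lasagna"
```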

Use tools

Tools allow AI models to perform actions and retrieve real-time information:
import 'dotenv/config';
import { generateText, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: {
    weather: tool({
      description: 'Get the current weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        // In a real app, you'd call a weather API here
        return {
          location,
          temperature: 72,
          condition: 'sunny',
        };
      },
    }),
  },
  // Allow a follow-up step so the model can answer with the tool result
  stopWhen: stepCountIs(2),
  prompt: 'What is the weather in San Francisco?',
});

console.log('Response:', result.text);

// Inspect tool calls and results across all steps
for (const step of result.steps) {
  for (const toolCall of step.toolCalls) {
    console.log('Tool called:', toolCall.toolName);
    console.log('Arguments:', toolCall.input);
  }
  for (const toolResult of step.toolResults) {
    console.log('Tool result:', toolResult.output);
  }
}

How tools work

  1. You define tools with input schemas and execute functions
  2. The model decides when to call tools based on the prompt
  3. The SDK automatically executes the tools and sends results back to the model
  4. The model uses the tool results to generate its final response
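The four steps above can be sketched as a loop, with the model and SDK mocked out (runToolLoop, fakeModel, and the weather stub here are illustrative, not SDK internals):

```typescript
type ToolCall = { toolName: string; input: { location: string } };

// Step 1: a tool registry with execute functions, like the example above.
const tools = {
  weather: {
    execute: async ({ location }: { location: string }) =>
      ({ location, temperature: 72, condition: 'sunny' }),
  },
};

// Mock model: first requests a tool call, then answers using the result.
function fakeModel(toolResult?: unknown): { toolCall?: ToolCall; text?: string } {
  if (!toolResult) {
    // Step 2: the model decides to call a tool.
    return { toolCall: { toolName: 'weather', input: { location: 'San Francisco' } } };
  }
  // Step 4: the model turns the tool result into a final answer.
  return { text: 'It is sunny and 72F in San Francisco.' };
}

async function runToolLoop(): Promise<string> {
  let step = fakeModel();
  while (step.toolCall) {
    // Step 3: execute the requested tool and feed the result back.
    const tool = tools[step.toolCall.toolName as keyof typeof tools];
    const result = await tool.execute(step.toolCall.input);
    step = fakeModel(result);
  }
  return step.text!;
}

const finalText = await runToolLoop();
console.log(finalText);
```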

Using Vercel AI Gateway

If you prefer using the Vercel AI Gateway instead of direct provider packages:
import { generateText } from 'ai';

// No provider import needed - gateway is included in 'ai'
const result = await generateText({
  model: 'openai/gpt-4o', // Use string format: 'provider/model'
  prompt: 'What is an agent?',
});

console.log(result.text);
Make sure your .env file has:
AI_GATEWAY_API_KEY=your_gateway_key_here

Next steps

Now that you’ve learned the basics, explore more advanced features:

Build a chat interface

Use React, Vue, or Svelte hooks to build chat UIs

Create agents

Build autonomous agents with tools and multi-step reasoning

Explore providers

Learn about all available AI providers and models

API reference

Dive into the complete API documentation

Example projects

Check out complete example applications:

Get help

If you run into issues: