
Next.js with AI SDK

Learn how to build AI-powered Next.js applications using the AI SDK with the App Router.

Why Next.js?

Next.js is well suited to AI applications:
  • Server Actions: Call AI functions directly from components
  • Route Handlers: Create streaming API endpoints
  • React Server Components: Optimize initial page loads
  • Edge Runtime: Deploy AI features globally
  • Streaming: Native support for streaming responses

Prerequisites

  • Node.js 18+
  • Basic knowledge of React and Next.js
  • Vercel AI Gateway API key

Quick Start

Create a new Next.js application:
pnpm create next-app@latest my-ai-app
cd my-ai-app
Install AI SDK packages:
pnpm add ai @ai-sdk/react
Configure your API key:
echo "AI_GATEWAY_API_KEY=your-api-key" > .env.local

Core Patterns

1. Text Generation

Generate text on-demand with a button click. Client Component:
'use client';

import { useState } from 'react';

export default function Page() {
  const [generation, setGeneration] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  const handleGenerate = async () => {
    setIsLoading(true);

    try {
      const response = await fetch('/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          prompt: 'Why is the sky blue?',
        }),
      });

      const data = await response.json();
      setGeneration(data.text);
    } finally {
      // Reset the loading state even if the request fails
      setIsLoading(false);
    }
  };

  return (
    <div>
      <button onClick={handleGenerate} disabled={isLoading}>
        {isLoading ? 'Generating...' : 'Generate'}
      </button>
      {generation && <p>{generation}</p>}
    </div>
  );
}
API Route:
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: 'openai/gpt-4o',
    system: 'You are a helpful assistant.',
    prompt,
  });

  return Response.json({ text });
}

2. Streaming Chat

Build a real-time chat interface with the useChat hook. Client Component:
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((part, i) => {
            if (part.type === 'text') {
              return <span key={i}>{part.text}</span>;
            }
            return null;
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.target.value)}
        />
      </form>
    </div>
  );
}
API Route:
import { streamText, convertToModelMessages, UIMessage } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: 'openai/gpt-4o',
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}

3. Structured Output

Generate structured data with type safety. Client Component:
'use client';

import { useState } from 'react';

interface Recipe {
  name: string;
  ingredients: string[];
  steps: string[];
}

export default function Page() {
  const [recipe, setRecipe] = useState<Recipe | null>(null);

  const generateRecipe = async () => {
    const response = await fetch('/api/recipe', {
      method: 'POST',
      body: JSON.stringify({ dish: 'chocolate cake' }),
    });
    
    const data = await response.json();
    setRecipe(data.recipe);
  };

  return (
    <div>
      <button onClick={generateRecipe}>Generate Recipe</button>
      {recipe && (
        <div>
          <h2>{recipe.name}</h2>
          <h3>Ingredients:</h3>
          <ul>
            {recipe.ingredients.map((ing, i) => (
              <li key={i}>{ing}</li>
            ))}
          </ul>
          <h3>Steps:</h3>
          <ol>
            {recipe.steps.map((step, i) => (
              <li key={i}>{step}</li>
            ))}
          </ol>
        </div>
      )}
    </div>
  );
}
API Route:
import { generateObject } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { dish } = await req.json();

  const { object: recipe } = await generateObject({
    model: 'openai/gpt-4o',
    prompt: `Generate a recipe for ${dish}`,
    schema: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      steps: z.array(z.string()),
    }),
  });

  return Response.json({ recipe });
}
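On the client, the Recipe interface only exists at compile time; nothing checks the JSON that actually arrives. A small runtime guard catches malformed responses before they reach the UI. This is a minimal hand-rolled sketch (no zod on the client); the shape it checks mirrors the Recipe interface above:

```typescript
// Hypothetical runtime guard mirroring the client-side Recipe interface.
interface Recipe {
  name: string;
  ingredients: string[];
  steps: string[];
}

function isStringArray(value: unknown): value is string[] {
  return Array.isArray(value) && value.every(item => typeof item === 'string');
}

function isRecipe(value: unknown): value is Recipe {
  if (typeof value !== 'object' || value === null) return false;
  const candidate = value as Record<string, unknown>;
  return (
    typeof candidate.name === 'string' &&
    isStringArray(candidate.ingredients) &&
    isStringArray(candidate.steps)
  );
}

console.log(isRecipe({ name: 'Cake', ingredients: ['flour'], steps: ['bake'] })); // true
console.log(isRecipe({ name: 'Cake', ingredients: 'flour' })); // false
```

In the Page component, `setRecipe(data.recipe)` could become `if (isRecipe(data.recipe)) setRecipe(data.recipe);`.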

4. Tool Calling

Extend AI capabilities with custom tools.
import { streamText, convertToModelMessages, tool, UIMessage } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: 'openai/gpt-4o',
    messages: convertToModelMessages(messages),
    tools: {
      getWeather: tool({
        description: 'Get the current weather for a location',
        inputSchema: z.object({
          city: z.string().describe('The city name'),
        }),
        execute: async ({ city }) => {
          // Fetch weather data
          const response = await fetch(
            `https://api.weatherapi.com/v1/current.json?key=${process.env.WEATHER_API_KEY}&q=${city}`,
          );
          const data = await response.json();
          
          return {
            temperature: data.current.temp_c,
            condition: data.current.condition.text,
          };
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
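Under the hood, the SDK handles the round trip: the model emits a tool call with a name and validated input, the matching execute function runs, and the result is sent back to the model. To see just that dispatch step in isolation, here is a stripped-down sketch independent of the AI SDK (the tool name and mock weather data are illustrative; handlers are synchronous here for brevity, whereas execute is async in the SDK):

```typescript
// Minimal tool-dispatch sketch: route a named tool call to its handler.
type ToolHandler = (input: Record<string, unknown>) => unknown;

const tools: Record<string, ToolHandler> = {
  // Mock data; a real handler would call a weather API as above.
  getWeather: ({ city }) => ({ city, temperature: 21, condition: 'Sunny' }),
};

// Shape of what the model emits when it decides to call a tool.
interface ToolCall {
  toolName: string;
  input: Record<string, unknown>;
}

function dispatch(call: ToolCall): unknown {
  const handler = tools[call.toolName];
  if (!handler) throw new Error(`Unknown tool: ${call.toolName}`);
  return handler(call.input);
}

console.log(dispatch({ toolName: 'getWeather', input: { city: 'Berlin' } }));
// { city: 'Berlin', temperature: 21, condition: 'Sunny' }
```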

5. Multi-Modal Input

Handle images and files in chat.
'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState, useRef } from 'react';
import Image from 'next/image';

export default function Chat() {
  const [files, setFiles] = useState<FileList | undefined>();
  const [input, setInput] = useState('');
  const fileInputRef = useRef<HTMLInputElement>(null);

  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();

    const fileParts = files
      ? await convertFilesToDataURLs(files)
      : [];

    sendMessage({
      role: 'user',
      parts: [{ type: 'text', text: input }, ...fileParts],
    });

    setInput('');
    setFiles(undefined);
    if (fileInputRef.current) fileInputRef.current.value = '';
  };

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: 
          {m.parts.map((part, i) => {
            if (part.type === 'text') return <span key={i}>{part.text}</span>;
            if (part.type === 'file' && part.mediaType?.startsWith('image/'))
              return <Image key={i} src={part.url} width={300} height={300} alt="" />;
            return null;
          })}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          type="file"
          accept="image/*"
          onChange={e => setFiles(e.target.files || undefined)}
          ref={fileInputRef}
        />
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Say something..."
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

async function convertFilesToDataURLs(files: FileList) {
  return Promise.all(
    Array.from(files).map(
      file =>
        new Promise<{ type: 'file'; mediaType: string; url: string }>(
          (resolve, reject) => {
            const reader = new FileReader();
            reader.onload = () =>
              resolve({
                type: 'file',
                mediaType: file.type,
                url: reader.result as string,
              });
            reader.onerror = reject;
            reader.readAsDataURL(file);
          },
        ),
    ),
  );
}

6. Server Actions

Call AI functions directly from Server Components.
import { generateText } from 'ai';

async function generateSummary(text: string) {
  'use server';
  
  const { text: summary } = await generateText({
    model: 'openai/gpt-4o',
    prompt: `Summarize this text: ${text}`,
  });
  
  return summary;
}

export default function Page() {
  return (
    <form
      action={async (formData: FormData) => {
        'use server';
        const text = formData.get('text') as string;
        const summary = await generateSummary(text);
        console.log(summary); // logs on the server; render or persist the result in a real app
      }}
    >
      <textarea name="text" />
      <button type="submit">Summarize</button>
    </form>
  );
}
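Form actions receive untyped FormData, and `formData.get()` can return a string, a File, or null, so it pays to validate fields before passing them to the model. A hypothetical helper (FormData is a global in Node 18+ and the browser):

```typescript
// Hypothetical guard: extract a required, non-empty text field from FormData.
function requireTextField(formData: FormData, name: string): string {
  const value = formData.get(name); // string | File | null
  if (typeof value !== 'string' || value.trim() === '') {
    throw new Error(`Missing or empty form field: ${name}`);
  }
  return value.trim();
}

const formData = new FormData();
formData.set('text', '  An article to summarize.  ');
console.log(requireTextField(formData, 'text')); // An article to summarize.
```

In the action above, `formData.get('text') as string` could become `requireTextField(formData, 'text')`, replacing the unchecked cast.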

Advanced Patterns

Rate Limiting

Implement rate limiting with Upstash:
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
import { streamText } from 'ai';

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '10 s'),
});

export async function POST(req: Request) {
  // x-forwarded-for may contain a comma-separated chain; take the client IP
  const ip = req.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ?? 'anonymous';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return new Response('Rate limit exceeded', { status: 429 });
  }

  // Continue with AI logic...
}
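Upstash handles the distributed state; the sliding-window idea itself is small. Here is an in-memory sketch of the same policy for illustration only (single process, no persistence; keep Redis for production):

```typescript
// In-memory sliding window: allow at most `limit` requests per `windowMs`
// for each key. Illustrative only; state is lost on restart and not
// shared across serverless instances.
const requestLog = new Map<string, number[]>();

function allow(key: string, limit: number, windowMs: number, now = Date.now()): boolean {
  const cutoff = now - windowMs;
  // Drop timestamps that have left the window
  const recent = (requestLog.get(key) ?? []).filter(t => t > cutoff);
  if (recent.length >= limit) {
    requestLog.set(key, recent);
    return false;
  }
  recent.push(now);
  requestLog.set(key, recent);
  return true;
}

// 2 requests per 10 s window:
console.log(allow('1.2.3.4', 2, 10_000, 0)); // true
console.log(allow('1.2.3.4', 2, 10_000, 1_000)); // true
console.log(allow('1.2.3.4', 2, 10_000, 2_000)); // false
console.log(allow('1.2.3.4', 2, 10_000, 11_000)); // true (first request left the window)
```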

Caching Responses

Cache repeated requests to reduce latency and cost:
import { generateText } from 'ai';
import { kv } from '@vercel/kv';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const cacheKey = `chat:${JSON.stringify(messages)}`;

  // Check cache
  const cached = await kv.get<string>(cacheKey);
  if (cached) {
    return Response.json({ text: cached });
  }

  // generateText (rather than streamText) produces a complete string
  // that can be stored and replayed from the cache
  const { text } = await generateText({
    model: 'openai/gpt-4o',
    messages,
  });

  // Cache for 1 hour
  await kv.set(cacheKey, text, { ex: 3600 });

  return Response.json({ text });
}
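Keying the cache on `JSON.stringify(messages)` produces unbounded key lengths as the conversation grows. Hashing the serialized history keeps keys fixed-length; a sketch using Node's built-in crypto module:

```typescript
import { createHash } from 'node:crypto';

// Derive a fixed-length cache key from an arbitrarily long message history.
// Note: JSON.stringify is sensitive to property order, so build message
// objects consistently for cache hits to occur.
function cacheKeyFor(messages: unknown): string {
  const digest = createHash('sha256')
    .update(JSON.stringify(messages))
    .digest('hex');
  return `chat:${digest}`;
}

const key = cacheKeyFor([{ role: 'user', content: 'Why is the sky blue?' }]);
console.log(key); // "chat:" followed by 64 hex characters
```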

Authentication

Protect routes with authentication:
import { auth } from '@/lib/auth';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const session = await auth();
  
  if (!session?.user) {
    return new Response('Unauthorized', { status: 401 });
  }

  // Continue with AI logic...
}

Deployment

Environment Variables

Set in Vercel Dashboard or .env.local:
AI_GATEWAY_API_KEY=your_key
OPENAI_API_KEY=your_key  # if using direct provider access
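A missing key otherwise surfaces mid-request as an opaque provider error. A hypothetical fail-fast helper (not part of the AI SDK) makes the misconfiguration visible at startup instead:

```typescript
// Hypothetical helper: read a required environment variable once,
// failing loudly if it is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

process.env.DEMO_API_KEY = 'demo-value'; // stand-in for a real key
console.log(requireEnv('DEMO_API_KEY')); // demo-value
```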

Deploy to Vercel

pnpm add -g vercel
vercel

Edge Runtime

Optimize for global performance:
import { streamText } from 'ai';

export const runtime = 'edge';

export async function POST(req: Request) {
  // Your AI logic
}

Best Practices

  1. Use maxDuration: Set timeout for API routes
    export const maxDuration = 30;
    
  2. Handle Errors: Provide user-friendly error messages
    try {
      const result = await generateText(...);
    } catch (error) {
      return Response.json({ error: 'Failed to generate' }, { status: 500 });
    }
    
  3. Type Safety: Use TypeScript for better DX
    interface ChatMessage {
      role: 'user' | 'assistant';
      content: string;
    }
    
  4. Loading States: Show feedback during generation
    {isLoading ? <Spinner /> : <Result />}
    
  5. Streaming: Use streaming for better UX
    const result = streamText({ ... });
    return result.toUIMessageStreamResponse();
    
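Transient provider errors (429s, timeouts) often succeed on a second attempt, so the error-handling advice above pairs well with a retry wrapper. A hypothetical sketch with exponential backoff; delays here are shortened so the demo runs quickly:

```typescript
// Retry an async operation with exponential backoff: delays grow as
// baseMs * 2^attempt. Hypothetical helper, not part of the AI SDK.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 10,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      await new Promise(r => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
  throw lastError;
}

// Demo: fails twice, then succeeds on the third attempt.
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error('transient failure');
  return 'ok';
}).then(result => console.log(result, calls)); // ok 3
```

A route could wrap its model call as `await withRetry(() => generateText({ ... }))`; in practice, retry only on errors known to be transient.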
