Provider architecture
The AI SDK uses a layered provider architecture.

Language model specification
At the core is the LanguageModelV3 interface from @ai-sdk/provider.
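As a rough sketch of the shape such a specification takes (the field and method names below are illustrative stand-ins, not the actual LanguageModelV3 definition, which has many more members for streaming, tool calls, usage metadata, and so on):

```typescript
// Illustrative sketch of a language-model specification: a plain interface
// that every model object must implement. Names here are hypothetical.
interface LanguageModelSketch {
  readonly specificationVersion: string;
  readonly provider: string; // e.g. "openai"
  readonly modelId: string;  // e.g. "gpt-4o"
  doGenerate(options: { prompt: string }): Promise<{ text: string }>;
}

// A trivial in-memory implementation, just to show the contract:
const echoModel: LanguageModelSketch = {
  specificationVersion: "v3",
  provider: "echo",
  modelId: "echo-1",
  async doGenerate({ prompt }) {
    return { text: `echo: ${prompt}` };
  },
};

echoModel.doGenerate({ prompt: "hello" }).then((r) => console.log(r.text)); // prints "echo: hello"
```

Because everything above the interface only depends on this contract, any object that implements it can be used as a model.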
Provider implementation
Providers implement this interface to work with the AI SDK. For example, the OpenAI provider (@ai-sdk/openai) implements it for OpenAI's models.

Core functions
The AI SDK provides high-level functions like generateText and streamText that work with any provider.
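This layering can be sketched with local mocks (simplified names, not the SDK's actual implementation): one function awaits a complete result, the other consumes a token stream, and both accept any model object that implements the shared interface.

```typescript
// Sketch of core functions that depend only on the model interface,
// never on a concrete provider. All names here are illustrative.
interface ModelSketch {
  doGenerate(opts: { prompt: string }): Promise<{ text: string }>;
  doStream(opts: { prompt: string }): AsyncIterable<string>;
}

// Blocking variant: await the full result.
async function generateTextSketch(model: ModelSketch, prompt: string) {
  return model.doGenerate({ prompt });
}

// Streaming variant: consume chunks as they arrive.
async function streamTextSketch(model: ModelSketch, prompt: string) {
  let text = "";
  for await (const chunk of model.doStream({ prompt })) {
    text += chunk; // a real app could render each chunk immediately
  }
  return text;
}

// Any provider's model works, as long as it implements the interface:
const mockModel: ModelSketch = {
  async doGenerate({ prompt }) {
    return { text: `answer: ${prompt}` };
  },
  async *doStream({ prompt }) {
    for (const word of `answer: ${prompt}`.split(" ")) yield word + " ";
  },
};
```

Swapping in a different provider's model changes nothing in the two functions above; that is the point of the shared specification.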
Available providers
The AI SDK comes with a wide range of providers.

Official providers
- xAI Grok Provider (@ai-sdk/xai)
- OpenAI Provider (@ai-sdk/openai)
- Azure OpenAI Provider (@ai-sdk/azure)
- Anthropic Provider (@ai-sdk/anthropic)
- Amazon Bedrock Provider (@ai-sdk/amazon-bedrock)
- Google Generative AI Provider (@ai-sdk/google)
- Google Vertex Provider (@ai-sdk/google-vertex)
- Mistral Provider (@ai-sdk/mistral)
- Cohere Provider (@ai-sdk/cohere)
- Groq Provider (@ai-sdk/groq)
Community providers
The open-source community has created additional providers:

- Ollama Provider (ollama-ai-provider)
- OpenRouter Provider (@openrouter/ai-sdk-provider)
- Cloudflare Workers AI Provider (workers-ai-provider)
Using models
Basic usage
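As a hedged sketch of the basic call shape, using a local stand-in model (with the real SDK you would import generateText from the ai package and a model from a provider package such as @ai-sdk/openai instead of defining these mocks):

```typescript
// Basic-usage sketch: pass a model and a prompt, get back text.
// The types and the mock model below are illustrative stand-ins.
interface LanguageModelSketch {
  doGenerate(opts: { prompt: string }): Promise<{ text: string }>;
}

async function generateTextSketch(args: {
  model: LanguageModelSketch;
  prompt: string;
}) {
  return args.model.doGenerate({ prompt: args.prompt });
}

const mockModel: LanguageModelSketch = {
  async doGenerate({ prompt }) {
    return { text: `mock answer to: ${prompt}` };
  },
};

generateTextSketch({ model: mockModel, prompt: "What is the AI SDK?" }).then(
  ({ text }) => console.log(text), // prints "mock answer to: What is the AI SDK?"
);
```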
Switching providers
Switching providers is as simple as changing the model import.

Model configuration
Providers can be configured with custom settings.

Model capabilities
Different models support different capabilities.

Model registry
For dynamic model selection, use the experimental model registry.

Provider-specific options
You can pass provider-specific options using providerOptions:
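One way to picture this mechanism (a local sketch; the option names below are hypothetical): providerOptions is an object keyed by provider name, and each provider reads only its own entry, ignoring options meant for other providers.

```typescript
// Sketch of provider-specific option handling. A model belonging to
// `providerName` only sees its own slice of the options object.
type ProviderOptions = Record<string, Record<string, unknown>>;

function applyProviderOptions(
  providerName: string,
  options: ProviderOptions | undefined,
): Record<string, unknown> {
  return options?.[providerName] ?? {};
}

// Hypothetical option names, just to show the keying by provider:
const options: ProviderOptions = {
  alpha: { reasoningEffort: "low" },
  beta: { safetyLevel: "strict" },
};

console.log(applyProviderOptions("alpha", options)); // the alpha slice only
console.log(applyProviderOptions("gamma", options)); // → {}
```

Keying the options by provider name lets one call site carry settings for several providers at once without the unused entries causing errors.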