# Providers Overview

Connect to any LLM provider.
Copilot SDK supports multiple LLM providers through `@yourgpt/llm-sdk`. Switch providers without changing your frontend code.
All providers use the same API. Change one line in your backend to switch from OpenAI to Anthropic.
- OpenAI
- Anthropic
- Google Generative AI
- xAI
- Ollama (local)
## How It Works
Install `@yourgpt/llm-sdk` alongside the provider's official SDK. Each provider returns a model instance that works with `generateText()` and `streamText()`.
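Concretely, the result of `streamText()` can be turned into a standard streaming `Response`, which is what `toTextStreamResponse()` does in the Backend Setup example below. Here is a rough stand-in, using only Web APIs, for what the frontend ends up consuming; the chunking and helper names are illustrative, not part of the SDK:

```typescript
const encoder = new TextEncoder();

// Build a Response whose body is a stream of UTF-8 text chunks,
// mimicking the shape of a text-stream response.
function mockTextStreamResponse(chunks: string[]): Response {
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c));
      controller.close();
    },
  });
  return new Response(body, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}

// A client can consume it like any fetch response: read it all at once,
// or iterate res.body for incremental UI updates.
async function readAll(res: Response): Promise<string> {
  return await res.text();
}

const text = await readAll(mockTextStreamResponse(['Hello', ', ', 'world']));
// text === 'Hello, world'
```

Because the body is a plain `ReadableStream`, any fetch-capable client can render tokens as they arrive.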
### Installation
**OpenAI / Google / xAI**

```bash
npm install @yourgpt/copilot-sdk @yourgpt/llm-sdk openai
```

OpenAI, Google Gemini, and xAI all use the `openai` SDK (OpenAI-compatible APIs).

**Anthropic**

```bash
npm install @yourgpt/copilot-sdk @yourgpt/llm-sdk @anthropic-ai/sdk
```

**Ollama**

```bash
npm install @yourgpt/copilot-sdk @yourgpt/llm-sdk
```

No additional SDK needed! Ollama uses native fetch for local inference.
### Backend Setup
```ts
import { streamText } from '@yourgpt/llm-sdk';
import { openai } from '@yourgpt/llm-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-5.2'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toTextStreamResponse();
}
```

### Frontend Setup
```tsx
'use client';

import { CopilotProvider } from '@yourgpt/copilot-sdk/react';

export function Providers({ children }: { children: React.ReactNode }) {
  return (
    <CopilotProvider runtimeUrl="/api/chat">
      {children}
    </CopilotProvider>
  );
}
```

## Switching Providers
Change the import and the model; your frontend stays the same:
```ts
import { streamText } from '@yourgpt/llm-sdk';

// OpenAI
import { openai } from '@yourgpt/llm-sdk/openai';
const model = openai('gpt-4o');

// Anthropic
import { anthropic } from '@yourgpt/llm-sdk/anthropic';
const model = anthropic('claude-3-5-sonnet-20241022');

// Google
import { google } from '@yourgpt/llm-sdk/google';
const model = google('gemini-2.0-flash');

// xAI (Grok)
import { xai } from '@yourgpt/llm-sdk/xai';
const model = xai('grok-3-fast');

// Ollama (local)
import { createOllama } from '@yourgpt/llm-sdk/ollama';
const ollama = createOllama();
const model = ollama('llama3.1');

// Use any model
const result = await streamText({
  model,
  messages,
});
```

Your tools, UI, and all frontend code remain unchanged. The SDK normalizes responses across providers.
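Because the call site is identical for every provider, the choice can live in configuration. A minimal sketch of that idea; `resolveModel` and the per-provider defaults are hypothetical helpers, not part of the SDK, with model IDs taken from the examples above:

```typescript
type ProviderName = 'openai' | 'anthropic' | 'google' | 'xai' | 'ollama';

// Default model per provider (illustrative choices, matching the docs above).
const DEFAULT_MODEL: Record<ProviderName, string> = {
  openai: 'gpt-4o',
  anthropic: 'claude-3-5-sonnet-20241022',
  google: 'gemini-2.0-flash',
  xai: 'grok-3-fast',
  ollama: 'llama3.1',
};

// Resolve a provider name (e.g. read from an env var) to a model id,
// rejecting anything not in the table.
function resolveModel(provider: string): { provider: ProviderName; modelId: string } {
  if (!(provider in DEFAULT_MODEL)) {
    throw new Error(`Unknown LLM provider: ${provider}`);
  }
  const name = provider as ProviderName;
  return { provider: name, modelId: DEFAULT_MODEL[name] };
}
```

A route handler can then branch once on `provider` to call the matching factory (`openai(...)`, `anthropic(...)`, and so on); everything from `streamText()` onward is identical.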
## Available Providers
| Provider | Import | SDK Required | Example |
|---|---|---|---|
| OpenAI | `@yourgpt/llm-sdk/openai` | `openai` | `openai('gpt-4o')` |
| Anthropic | `@yourgpt/llm-sdk/anthropic` | `@anthropic-ai/sdk` | `anthropic('claude-3-5-sonnet-20241022')` |
| Google | `@yourgpt/llm-sdk/google` | `openai` | `google('gemini-2.0-flash')` |
| xAI | `@yourgpt/llm-sdk/xai` | `openai` | `xai('grok-3-fast')` |
| Ollama | `@yourgpt/llm-sdk/ollama` | None | `ollama('llama3.1')` |
## Why So Few SDKs?
Most LLM providers now offer OpenAI-compatible APIs. This means you only need:
| SDK | Providers |
|---|---|
| `openai` | OpenAI, Google Gemini, xAI Grok, Azure OpenAI, Groq, Together AI |
| `@anthropic-ai/sdk` | Anthropic Claude (native SDK for full features) |
| None | Ollama (uses native fetch for local inference) |
Google and xAI use OpenAI-compatible endpoints. Ollama runs locally and needs no external SDK. We automatically configure the correct baseURL for each provider.
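That baseURL routing can be pictured as a simple lookup. The endpoints below are the providers' published OpenAI-compatible base URLs (worth verifying against each provider's docs); the helper itself is a sketch, not the SDK's actual code:

```typescript
// OpenAI-compatible base URLs per provider (assumed current; check docs).
const OPENAI_COMPATIBLE_BASE_URL: Record<string, string> = {
  openai: 'https://api.openai.com/v1',
  google: 'https://generativelanguage.googleapis.com/v1beta/openai/',
  xai: 'https://api.x.ai/v1',
};

// Look up the endpoint for a provider, failing loudly for providers
// that need their own SDK (e.g. Anthropic) or no SDK at all (Ollama).
function baseUrlFor(provider: string): string {
  const url = OPENAI_COMPATIBLE_BASE_URL[provider];
  if (url === undefined) {
    throw new Error(`No OpenAI-compatible endpoint configured for ${provider}`);
  }
  return url;
}
```

Handing this URL to the `openai` client's `baseURL` option is, in effect, the configuration the SDK performs for you.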
## Provider Comparison
| Provider | Speed | Quality | Cost | Best For |
|---|---|---|---|---|
| OpenAI | Fast | Excellent | $$ | General use |
| Anthropic | Medium | Excellent | $$ | Long context, safety |
| Google | Fast | Very Good | $ | Multimodal |
| xAI | Ultra Fast | Excellent | $ | Speed-critical apps |
| Ollama | Varies | Good-Excellent | Free | Privacy, offline, development |