AI Gateway

AI Gateway is available in Beta on all plans and your use is subject to Vercel's Public Beta Agreement and AI Product Terms.

The AI Gateway is a proxy service from Vercel that routes model requests to various AI providers. It offers a unified API to multiple providers and gives you the ability to set budgets, monitor usage, load-balance requests, and manage fallbacks.

It is designed to work seamlessly with AI SDK 5 and can be used as a provider in your applications.

Some of the key features of the AI Gateway include:

  • Unified API: helps you switch between providers and models with minimal code changes
  • High Reliability: automatically retries requests to other providers if one fails
  • Spend Monitoring: monitor your spending across different providers
  • Load Balancing: distribute requests across multiple providers to optimize performance
  • Bring Your Own Key: use your own API keys for providers that require them
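
The High Reliability behavior above can be sketched generically: try providers in order and fall back to the next one on failure. This is an illustrative sketch of the pattern, not the Gateway's actual internals:

```typescript
// Generic provider-fallback sketch (hypothetical, illustrative only):
// try each async attempt in order and return the first success.
async function withFallback<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError; // every provider failed
}
```

The Gateway applies this kind of logic for you across providers, so your application code only sees the final result.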

Start using the AI Gateway by following these steps:

  1. First, create a Next.js app and install the AI SDK:

    terminal
    npx create-next-app@latest my-gateway-app
    cd my-gateway-app
    pnpm install ai@beta
  2. Next, create an API key in the AI tab of the Vercel dashboard:

    1. From the Vercel dashboard, click the AI tab
    2. Click API keys on the left side bar
    3. Click Add key, then confirm with Create key in the dialog

    Once you have the API key, save it to .env.local at the root of your project so you can use it to authenticate your requests to the AI Gateway.

    .env.local
    AI_GATEWAY_API_KEY=your_api_key_here
  3. Then in the Next.js app, add a GET route handler that uses the API key and calls the generateText function to generate text from a prompt.

    When you specify a model id as a plain string, the AI SDK will use the Vercel AI Gateway provider to route the request.

    The AI Gateway provider looks for the API key in the AI_GATEWAY_API_KEY environment variable by default. You can also pass the API key directly to the createGateway function, as described in the As part of an AI SDK function call section.

    app/api/chat/route.ts
    import { generateText } from 'ai';
     
    export async function GET() {
      const result = await generateText({
        model: 'xai/grok-3',
        prompt: 'Why is the sky blue?',
      });
      return Response.json(result);
    }
  4. Now run your Next.js app using pnpm dev and open your browser to http://localhost:3000/api/chat. You should see a response with the generated text.
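
If you want to fail fast when the key from step 2 is missing, a small guard like the following can help (a hypothetical helper, not part of the AI SDK):

```typescript
// Hypothetical guard: throw a descriptive error at startup if the
// AI_GATEWAY_API_KEY variable from .env.local is not set.
function requireGatewayKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.AI_GATEWAY_API_KEY;
  if (!key) {
    throw new Error('AI_GATEWAY_API_KEY is not set; add it to .env.local');
  }
  return key;
}
```

Calling this once when your route module loads surfaces a missing key immediately instead of as a failed provider request.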

The Vercel OIDC token is a way to authenticate your requests to the AI Gateway without needing to manage an API key. Vercel automatically generates the OIDC token that it associates with your Vercel project.

Vercel OIDC tokens are only valid for 12 hours, so you will need to refresh them periodically during local development. You can do this by running vercel env pull again.

  1. Before you can use the OIDC token during local development, ensure that you link your application to a Vercel project and that you pull the environment variables from Vercel. You can do this by running the following command in your terminal:

    terminal
    vercel link
    vercel env pull
  2. Then in your GET route handler you can directly use the gateway provider without needing to obtain an API key or set it in an environment variable:

    app/api/chat/route.ts
    import { generateText } from 'ai';
     
    export async function GET() {
      const result = await generateText({
        model: 'xai/grok-3',
        prompt: 'Why is the sky blue?',
      });
      return Response.json(result);
    }
  3. Now run your Next.js app using pnpm dev and open your browser to http://localhost:3000/api/chat. You should see a response with the generated text.
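
Since OIDC tokens expire after 12 hours, you may want your tooling to remind you when to re-run vercel env pull. A hypothetical sketch (the issuedAtMs timestamp is your own bookkeeping, not something Vercel exposes here):

```typescript
// Vercel OIDC tokens are valid for 12 hours.
const OIDC_TTL_MS = 12 * 60 * 60 * 1000;

// Hypothetical helper: given when you last pulled the token, decide
// whether it is time to run `vercel env pull` again.
function shouldRefreshOidcToken(
  issuedAtMs: number,
  nowMs: number = Date.now(),
): boolean {
  return nowMs - issuedAtMs >= OIDC_TTL_MS;
}
```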

The AI Gateway supports using your own API key to authenticate requests. This is useful if you have special arrangements with AI providers that require you to use your own key.

API keys are scoped to be available throughout a Vercel team, so you can use the same key across multiple projects.

  1. First, retrieve an API key from the AI provider of your choice. This key will be used to authenticate requests through the AI Gateway.

    1. Go to the AI tab in your Vercel dashboard.
    2. Click on the Integrations section on the left sidebar.
    3. Find your provider from the list and click Add.
    4. In the dialog that appears, enter the key you retrieved from the provider.
    5. Ensure that the Enabled toggle is turned on so that the key is active.
    6. Click Add Key to complete the flow.
  2. Once the key is added, the AI Gateway automatically uses it when routing your requests to that provider.

The AI Gateway's unified API is built to be flexible, allowing you to switch between different AI models and providers without rewriting parts of your application. This is useful for testing different models or when you want to change the underlying AI provider for cost or performance reasons.

Models are AI algorithms that process your input data to generate responses, such as Grok, GPT-4o, or Claude 4 Sonnet. Providers are the companies or services that host these models, such as xAI, OpenAI, or Anthropic.

For example, you can use the xai/grok-3 model from xAI or the openai/gpt-4 model from OpenAI, following the format provider/model-name.
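
The provider/model-name format splits on the first slash. A small illustrative helper (not part of the AI SDK) makes the convention concrete:

```typescript
// Hypothetical helper illustrating the `provider/model-name` id format.
// Splits on the first slash only, since model names may contain slashes.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) throw new Error(`Invalid model id: ${id}`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseModelId('xai/grok-3')); // { provider: 'xai', model: 'grok-3' }
```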

In some cases, the same model is hosted by multiple providers, including the model's creator. For example, the Claude 4 Opus model is available from both Anthropic and Amazon Bedrock.

Different providers may have different specifications for the same model such as different pricing and performance. You can choose the one that best fits your needs.

You can view the list of supported models and providers by following these steps:

  1. Go to the AI tab in your Vercel dashboard.
  2. Click on the Models section on the left sidebar.

There are two ways to specify the model and provider to use for an AI Gateway request:

The first is to specify the model and provider directly in your API calls using the gateway provider. This lets you switch models or providers for specific requests without affecting the rest of your application.

In the example below, we use the xai/grok-3 model from xAI. You can switch to any other supported model by changing the string passed to gateway().

app/api/chat/route.ts
import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';
 
export async function GET() {
  const result = await generateText({
    model: gateway('xai/grok-3'),
    prompt: 'Why is the sky blue?',
  });
  return Response.json(result);
}

You can now test different models by changing the model parameter and opening http://localhost:3000/api/chat in your browser.

The example above uses the default gateway provider instance. You can also create a custom provider instance, for example to read the API key from a different environment variable, or to set a different base URL for the AI Gateway when routing through a corporate proxy.

Here's an example:

First, install the @ai-sdk/gateway package as a direct dependency in your project:

terminal
pnpm install @ai-sdk/gateway@beta

Then, you create a custom AI Gateway provider instance to use in your application.

app/api/chat/route.ts
import { generateText } from 'ai';
import { createGateway } from '@ai-sdk/gateway';
 
const gateway = createGateway({
  apiKey: process.env.AI_GATEWAY_API_KEY, // the default environment variable for the API key
  baseURL: 'https://ai-gateway.vercel.sh/v1/ai', // the default base URL
});
 
export async function GET() {
  const result = await generateText({
    model: gateway('xai/grok-3'),
    prompt: 'Why is the sky blue?',
  });
  return Response.json(result);
}

This can also be useful if you'd like to use a Gateway provider with the AI SDK Provider Registry.
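
As a rough sketch of the registry idea (hypothetical names and types, not the AI SDK's actual createProviderRegistry API), a registry maps provider prefixes to factories and resolves prefixed model ids:

```typescript
// Illustrative registry sketch: the real AI SDK registry returns language
// model instances; here a factory just returns a string for simplicity.
type ModelFactory = (modelName: string) => string;

function createRegistry(providers: Record<string, ModelFactory>) {
  return {
    // Resolve ids of the form "prefix:model-name" to the matching factory.
    languageModel(id: string): string {
      const [prefix, ...rest] = id.split(':');
      const factory = providers[prefix];
      if (!factory) throw new Error(`Unknown provider: ${prefix}`);
      return factory(rest.join(':'));
    },
  };
}
```

Registering the gateway provider under a prefix lets one lookup string select both the routing layer and the model.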

The second is to change the global default. The Vercel AI Gateway is the default provider for the AI SDK when a model is specified as a plain string; you can set a different default by assigning a provider instance to the globalThis.AI_SDK_DEFAULT_PROVIDER variable.

This is intended to be done in a file that runs before any other AI SDK calls. In the case of a Next.js application, you can do this in instrumentation.ts:

instrumentation.ts
import { openai } from '@ai-sdk/openai';
 
export async function register() {
  // This runs once when the Node.js runtime starts
  globalThis.AI_SDK_DEFAULT_PROVIDER = openai;
 
  // You can also do other initialization here
  console.log('App initialization complete');
}

Then, you can use the generateText function without specifying the provider in each call.

app/api/chat/route.ts
import { generateText } from 'ai';
import { NextRequest } from 'next/server';
 
export async function GET(request: NextRequest) {
  const { searchParams } = new URL(request.url);
  const prompt = searchParams.get('prompt');
 
  if (!prompt) {
    return Response.json({ error: 'Prompt is required' }, { status: 400 });
  }
 
  const result = await generateText({
    model: 'gpt-4o',
    prompt,
  });
 
  return Response.json(result);
}
Last updated on June 25, 2025