Getting Started with Building AI Apps Using Vercel AI SDK

Building an AI‑powered app with Vercel AI SDK is mostly about wiring three things together:

  1. A streaming API route
  2. Strongly‑typed helpers (tools, MCP, or structured objects)
  3. A lightweight UI hook that keeps the chat flow reactive

In this quick primer we’ll cover the bare‑bones essentials: installing the Vercel AI SDK, setting up a Next.js API route that streams tokens in real time, and getting a taste of key capabilities like streaming text and objects, function‑calling tools, and the Model Context Protocol.

Goal

Ship a tiny but functional AI chatbot that streams responses in real time.

You only need three moving parts:

  1. API route → talks to the LLM
  2. Helpers (Tools / MCP / typed objects) → give the LLM super‑powers
  3. UI hook → shows messages as they arrive

1. Core Ideas (read this first)

  • Streaming Text: users see words appear instantly, not after a long pause. streamText() returns tokens as an async iterator.
  • Streaming Objects: need JSON, not prose? Ask the model to emit objects that match a Zod schema with streamObject() (see the sketch just below).
  • Multi‑modal: send images, audio, or PDFs along with text. Works out of the box with GPT‑4o and Gemini.
  • AI Tools: let the model call your code (DB queries, weather, etc.). Define a tool() with an execute() function.
  • MCP: an open standard so any LLM can discover your tools automatically. An experimental client ships with the SDK.

Keep these in the back of your mind; the rest of the guide shows them in action.
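
The guide below builds out streaming text and tools, but streaming objects won’t get its own section, so here’s a minimal sketch up front. The recipe schema is made up for illustration; each iteration of partialObjectStream yields a progressively more complete object:

import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

// Hypothetical schema the model's output must match
const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
});

const { partialObjectStream } = streamObject({
  model: openai('gpt-4o'),
  schema: recipeSchema,
  prompt: 'Generate a pancake recipe.',
});

for await (const partial of partialObjectStream) {
  // e.g. { name: 'Pancakes' }, then { name: 'Pancakes', ingredients: [...] }
  console.log(partial);
}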


2. Project Setup (2 commands)

# ① Create (or use) a Next.js app
npx create-next-app my-ai-app --ts
cd my-ai-app

# ② Install the SDK + OpenAI provider
pnpm add ai @ai-sdk/openai

The SDK works everywhere TypeScript runs—Next.js, SvelteKit, Expo, plain Node, you name it.
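
One more piece of setup: the OpenAI provider reads your key from the OPENAI_API_KEY environment variable, so drop it into .env.local (Next.js loads this file automatically):

# .env.local
OPENAI_API_KEY=sk-...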


3. Back‑end: Create a Streaming Route

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  // Sends proper headers so the client receives a stream
  return result.toDataStreamResponse();
}

What’s happening?

streamText() talks to GPT‑4o, and toDataStreamResponse() pipes the tokens straight back to the browser as they arrive.
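
If you’re outside a framework, the same call also exposes textStream, the async iterator from the Core Ideas list. A minimal sketch of consuming it directly, e.g. in a plain Node script:

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const { textStream } = streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about streaming.',
});

// Print each token as it arrives
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}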


4. Front‑end: Drop‑in useChat()

'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <form onSubmit={handleSubmit} className="space-y-4 max-w-xl mx-auto">
      <div className="border rounded p-4 h-64 overflow-y-auto">
        {messages.map(m => (
          <p key={m.id} className="mb-2">
            <strong>{m.role}:</strong> {m.content}
          </p>
        ))}
      </div>

      <input
        value={input}
        onChange={handleInputChange}
        className="border p-2 w-full"
        placeholder="Ask me anything…"
      />
    </form>
  );
}

That’s it—your chat UI now streams messages as fast as the model returns them.
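
Under the hood, useChat() posts to /api/chat by default, which is why the route from step 3 lives at app/api/chat/route.ts. If your endpoint lives somewhere else, point the hook at it (the path below is just an example):

const chat = useChat({
  api: '/api/my-chat', // override the default /api/chat endpoint
});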


5. Bonus: Add a Weather Tool (Function Calling)

import { z } from 'zod';
import { tool, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const weatherTool = tool({
  description: 'Get the weather in a location',
  parameters: z.object({
    location: z.string().describe('City or place'),
  }),
  execute: async ({ location }) => ({
    location,
    temperature: 68 + Math.floor(Math.random() * 11) - 5, // fake data
  }),
});

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { weather: weatherTool },
  maxSteps: 2, // step 1: call the tool, step 2: answer with the result
  prompt: 'What’s the weather in San Francisco?',
});

Now the LLM can call your function, get the result, and fold it into its answer; the SDK runs the tool‑call loop for you, with no extra glue code.
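
Tools plug into the streaming route from step 3 the same way; a minimal sketch, assuming weatherTool from above lives in a (hypothetical) ./tools module:

// app/api/chat/route.ts, now with the weather tool
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { weatherTool } from './tools'; // hypothetical module holding the tool above

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    tools: { weather: weatherTool },
    maxSteps: 2, // tool call, then final answer
    messages,
  });

  return result.toDataStreamResponse();
}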


6. Ship It

  1. Route → streamText or streamObject
  2. UI → useChat() (text) or experimental_useObject() (JSON)
  3. Enhance → Add tools, expose them via MCP (see the sketch after this list)
  4. Deploy → Push to Vercel; streaming just works
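
On the MCP side of item 3, the SDK’s experimental client can discover a server’s tools at runtime and hand them straight to the model. A minimal sketch (the API is experimental and may change, and the server URL is a placeholder):

import { experimental_createMCPClient, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Connect to an MCP server over SSE (placeholder URL)
const mcpClient = await experimental_createMCPClient({
  transport: { type: 'sse', url: 'https://example.com/mcp/sse' },
});

// Discover the server's tools and pass them to the model
const tools = await mcpClient.tools();

const result = streamText({
  model: openai('gpt-4o'),
  tools,
  maxSteps: 2,
  prompt: 'What tools do you have, and what’s the weather in SF?',
});

// Close the connection once you're done streaming:
// await mcpClient.close();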

Next Steps

  • Use Vercel templates like Pinecone‑RAG (vector search) or Supabase‑starter (file storage).
  • Turn on tracing (AI SDK v3.3) with Langfuse or OpenTelemetry.
  • Watch for v5 alpha—better multi‑modal, more hooks.

With these basics you can ship a real‑time AI feature this weekend. Happy hacking!
