Building an AI‑powered app with Vercel AI SDK is mostly about wiring three things together:
- A streaming API route
- Strongly‑typed helpers (tools, MCP, or structured objects)
- A lightweight UI hook that keeps the chat flow reactive
In this quick primer we’ll cover the bare‑bones essentials: installing the Vercel AI SDK, setting up a Next.js API route that streams tokens in real time, and getting a taste of key capabilities like streaming text and objects, function‑calling tools, and the Model Context Protocol.
Goal
Ship a tiny but functional AI chatbot that streams responses in real time.
You only need three moving parts:
- API route → talks to the LLM
- Helpers (Tools / MCP / typed objects) → give the LLM super‑powers
- UI hook → shows messages as they arrive
1. Core Ideas (read this first)
| Concept | Why it matters | One‑liner |
|---|---|---|
| Streaming Text | Users see words appear instantly, not after a long pause. | `streamText()` returns tokens as an async iterator. |
| Streaming Objects | Need JSON, not prose? Ask the model to emit objects that match a Zod schema. | `streamObject()` |
| Multi‑modal | Send images, audio or PDFs along with text. | Works out of the box with GPT‑4o & Gemini. |
| AI Tools | Let the model call your code (DB queries, weather, etc.). | Define a `tool()` with `execute()`. |
| MCP | Open standard so any LLM can discover your tools automatically. | Experimental client ships with the SDK. |
Keep these in the back of your mind; the rest of the guide shows them in action.
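Of the five, streaming objects is the one the rest of this guide doesn’t revisit, so here is a minimal sketch of it as a standalone script (the recipe schema and prompt are made up purely for illustration): `streamObject()` emits progressively more complete objects that match a Zod schema.

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

// Describe the JSON shape you want; the model's output is constrained to match it.
const recipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
  steps: z.array(z.string()),
});

const result = streamObject({
  model: openai('gpt-4o'),
  schema: recipeSchema,
  prompt: 'Generate a simple lasagna recipe.',
});

// Each chunk is a progressively more complete partial object.
for await (const partial of result.partialObjectStream) {
  console.log(partial);
}
```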
2. Project Setup (2 commands)
```bash
# ① Install SDK + OpenAI provider
pnpm add ai @ai-sdk/openai

# ② Create (or use) a Next.js app
npx create-next-app my-ai-app --ts
```
The SDK works everywhere TypeScript runs—Next.js, SvelteKit, Expo, plain Node, you name it.
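One more bit of setup before any calls go out: the `@ai-sdk/openai` provider reads your key from the `OPENAI_API_KEY` environment variable by default, so in a Next.js app drop it into `.env.local`:

```bash
# .env.local
OPENAI_API_KEY=sk-...
```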
3. Back‑end: Create a Streaming Route
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  // Sends proper headers so the client receives a stream
  return result.toDataStreamResponse();
}
```
What’s happening? `streamText()` talks to GPT‑4o, and `toDataStreamResponse()` pipes the tokens straight back to the browser with the right streaming headers. Save the file as `app/api/chat/route.ts`, which is the default endpoint `useChat()` posts to.
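The same call works outside a route handler, too. Here is a minimal sketch of consuming the stream yourself in a plain script (the prompt is invented for the example); `result.textStream` is the async iterator the table in section 1 refers to:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Explain token streaming in one sentence.',
});

// textStream yields text chunks as the model generates them.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```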
4. Front‑end: Drop‑in useChat()
```tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <form onSubmit={handleSubmit} className="space-y-4 max-w-xl mx-auto">
      <div className="border rounded p-4 h-64 overflow-y-auto">
        {messages.map(m => (
          <p key={m.id} className="mb-2">
            <strong>{m.role}:</strong> {m.content}
          </p>
        ))}
      </div>
      <input
        value={input}
        onChange={handleInputChange}
        className="border p-2 w-full"
        placeholder="Ask me anything…"
      />
    </form>
  );
}
```
That’s it—your chat UI now streams messages as fast as the model returns them.
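If your route streams objects rather than text, there is a matching client hook, `experimental_useObject`. A rough sketch, assuming a hypothetical `/api/notifications` route that calls `streamObject()` with the same schema and returns `result.toTextStreamResponse()`; the hook is experimental, so its exact shape may differ between SDK versions:

```tsx
'use client';

import { experimental_useObject as useObject } from '@ai-sdk/react';
import { z } from 'zod';

// Shared schema: the route's streamObject() call should use the same one.
const notificationSchema = z.object({
  notifications: z.array(
    z.object({ name: z.string(), message: z.string() }),
  ),
});

export default function Notifications() {
  const { object, submit, isLoading } = useObject({
    api: '/api/notifications', // hypothetical route for this sketch
    schema: notificationSchema,
  });

  return (
    <div>
      <button onClick={() => submit('Messages during finals week.')} disabled={isLoading}>
        Generate
      </button>
      {/* Partial objects stream in, so every field may still be undefined. */}
      {object?.notifications?.map((n, i) => (
        <p key={i}>
          <strong>{n?.name}:</strong> {n?.message}
        </p>
      ))}
    </div>
  );
}
```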
5. Bonus: Add a Weather Tool (Function Calling)
```ts
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import { tool, generateText } from 'ai';

const weatherTool = tool({
  description: 'Get the weather in a location',
  parameters: z.object({
    location: z.string().describe('City or place'),
  }),
  execute: async ({ location }) => ({
    location,
    temperature: 68 + Math.floor(Math.random() * 11) - 5, // fake data
  }),
});

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { weather: weatherTool },
  maxSteps: 2, // allow a follow-up step so the model can fold the tool result into its answer
  prompt: 'What’s the weather in San Francisco?',
});
```
Now the LLM can call your function, get the result, and include it in its answer—no extra API round‑trips.
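To get the same behaviour in the streaming chat route from step 3, pass the tool to `streamText` as well. A sketch, assuming `weatherTool` is exported from a hypothetical `./tools` module:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { weatherTool } from './tools'; // hypothetical module exporting the tool above

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: { weather: weatherTool },
    maxSteps: 2, // let the model answer after the tool result comes back
  });

  return result.toDataStreamResponse();
}
```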
6. Ship It
- Route → `streamText` or `streamObject`
- UI → `useChat()` (text) or `experimental_useObject()` (JSON)
- Enhance → Add tools, expose them via MCP (see the sketch after this list)
- Deploy → Push to Vercel; streaming just works
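MCP is the one item above this guide hasn’t shown in code. The SDK ships an experimental client that can discover a server’s tools and hand them to `streamText`/`generateText`. A rough sketch, assuming an MCP server reachable over SSE at a placeholder URL; the API is experimental and may change between SDK versions:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, experimental_createMCPClient } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Connect to an MCP server (placeholder URL for this sketch).
  const mcpClient = await experimental_createMCPClient({
    transport: { type: 'sse', url: 'https://example.com/mcp/sse' },
  });

  // The client pulls the server's tool definitions automatically.
  const tools = await mcpClient.tools();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools,
    maxSteps: 2,
    onFinish: () => mcpClient.close(), // clean up once the stream finishes
  });

  return result.toDataStreamResponse();
}
```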
Next Steps
- Use Vercel templates like Pinecone‑RAG (vector search) or Supabase‑starter (file storage).
- Turn on tracing (AI SDK v3.3) with Langfuse or OpenTelemetry.
- Watch for v5 alpha—better multi‑modal, more hooks.
With these basics you can ship a real‑time AI feature this weekend. Happy hacking!