Vercel AI SDK Beginner's Guide: Build a Powerful Next.js Chatbot

Learn how to build a secure, advanced AI chatbot with Next.js and Vercel AI SDK, integrating Google Gemini and custom tools. Step-by-step setup, code examples, and best practices for API-focused teams. Discover how Apidog streamlines your API workflow.

Rebecca Kovács

30 January 2026


Are you looking to integrate advanced AI into your web applications, but frustrated by complex APIs and unclear best practices? The Vercel AI SDK makes building feature-rich, production-grade AI apps easier than ever. In this step-by-step guide, you’ll learn how to build a robust, interactive chatbot using Next.js and Google’s Gemini models—no prior AI experience required.

Whether you’re an API developer, backend engineer, or technical lead, this guide will help you master modern AI integration patterns, handle real-world scenarios, and boost your productivity with proven practices.


💡 Looking for a tool that automatically creates beautiful API documentation, boosts your development team's collaboration and productivity, and offers a more affordable alternative to Postman? Apidog is the all-in-one solution for API design, testing, and documentation.


Why Use the Vercel AI SDK?

Integrating Large Language Models (LLMs) into web apps traditionally required juggling multiple provider APIs, managing streaming, and securing sensitive keys. The Vercel AI SDK solves these pain points by providing:

  - A unified API across providers (Google, OpenAI, Anthropic, and more), so switching models is a small change
  - Built-in streaming helpers for real-time, token-by-token responses
  - React hooks such as useChat that manage conversation state for you
  - First-class tool calling with schema validation via Zod

By following this guide, you’ll learn not just the “how”, but also the “why” behind architecture decisions, enabling you to build maintainable, scalable AI-powered apps.


Foundations and Setup

Let’s start by preparing your environment and project structure.

Prerequisites

Before you begin, make sure you have:

  - Node.js 18 or later installed
  - A Google AI API key from Google AI Studio
  - Basic familiarity with React and TypeScript

Step 1: Create a Next.js Project

Next.js is the ideal React framework for AI applications, thanks to its server-centric architecture and powerful routing. Run:

npx create-next-app@latest vercel-ai-tutorial

Use these recommended settings when prompted:

  - TypeScript: Yes
  - Tailwind CSS: Yes (the UI in this guide uses Tailwind classes)
  - src/ directory: Yes (file paths in this guide assume it)
  - App Router: Yes (required for the route handler pattern used here)

Navigate into your project:

cd vercel-ai-tutorial

Step 2: Install Required Packages

Add the Vercel AI SDK, Google provider, React hooks, and Zod for schema validation:

npm install ai @ai-sdk/react @ai-sdk/google zod

Package breakdown:

  - ai — the core SDK: streamText, the tool helper, and streaming utilities
  - @ai-sdk/react — React hooks such as useChat for the frontend
  - @ai-sdk/google — the provider adapter for Google’s Gemini models
  - zod — schema validation for tool parameters

Step 3: Secure Your API Key

Never hardcode secrets. Use environment variables:

touch .env.local

Add to .env.local:

GOOGLE_GENERATIVE_AI_API_KEY=YOUR_GOOGLE_AI_API_KEY

Replace with your real key. Next.js loads this automatically on the server side.


Building the Core: API Endpoint and Chat UI

Modern AI apps require a secure, maintainable client-server structure.

Architecture Overview

  - Frontend (client component): renders the chat UI and manages input with the useChat hook; it never sees your API key
  - Backend (Next.js route handler): receives the message history, calls the Gemini API with your server-side key, and streams the response back

This separation protects credentials and keeps your app scalable.


Step 4: Implement the API Route

Create src/app/api/chat/route.ts:

import { google } from '@ai-sdk/google';
import { streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
  try {
    const { messages } = await req.json();
    const result = await streamText({
      model: google('models/gemini-1.5-pro-latest'),
      messages,
    });
    return result.toDataStreamResponse();
  } catch (error) {
    if (error instanceof Error) {
      return new Response(JSON.stringify({ error: error.message }), { status: 500 });
    }
    return new Response(JSON.stringify({ error: 'An unknown error occurred' }), { status: 500 });
  }
}

Key points:

  - The route runs only on the server, so your API key is never exposed to the browser
  - streamText streams the model’s output; toDataStreamResponse converts it into a streaming HTTP response that the useChat hook understands
  - maxDuration = 30 allows the streaming function to run for up to 30 seconds on Vercel
  - Errors are caught and returned as JSON with a 500 status instead of crashing the route


Step 5: Build the Chat UI

Open src/app/page.tsx and replace the content with:

'use client';

import { useChat } from '@ai-sdk/react';
import { useRef, useEffect } from 'react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat();
  const messagesContainerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (messagesContainerRef.current) {
      messagesContainerRef.current.scrollTop = messagesContainerRef.current.scrollHeight;
    }
  }, [messages]);

  return (
    <div className="flex flex-col h-screen bg-gray-50">
      {/* Messages container */}
      <div ref={messagesContainerRef} className="flex-1 overflow-y-auto p-8 space-y-4">
        {messages.map(m => (
          <div
            key={m.id}
            className={`flex gap-3 ${m.role === 'user' ? 'justify-end' : 'justify-start'}`}
          >
            {m.role === 'user' && (
              <div className="w-10 h-10 rounded-full bg-blue-500 flex items-center justify-center text-white font-bold">U</div>
            )}
            <div
              className={`max-w-xl p-3 rounded-2xl shadow-md whitespace-pre-wrap ${
                m.role === 'user'
                  ? 'bg-blue-500 text-white rounded-br-none'
                  : 'bg-white text-black rounded-bl-none'
              }`}
            >
              <span className="font-bold block">{m.role === 'user' ? 'You' : 'AI Assistant'}</span>
              {m.content}
            </div>
            {m.role !== 'user' && (
              <div className="w-10 h-10 rounded-full bg-gray-700 flex items-center justify-center text-white font-bold">AI</div>
            )}
          </div>
        ))}
      </div>

      {/* Input form */}
      <div className="p-4 bg-white border-t">
        <form onSubmit={handleSubmit} className="flex items-center gap-4 max-w-4xl mx-auto">
          <input
            className="flex-1 p-3 border rounded-full focus:outline-none focus:ring-2 focus:ring-blue-500"
            value={input}
            placeholder="Ask me anything..."
            onChange={handleInputChange}
            disabled={isLoading}
          />
          <button
            type="submit"
            className="px-6 py-3 bg-blue-500 text-white rounded-full font-semibold hover:bg-blue-600 disabled:bg-blue-300 disabled:cursor-not-allowed"
            disabled={isLoading}
          >
            Send
          </button>
        </form>
        {error && (
          <p className="text-red-500 mt-2 text-center">{error.message}</p>
        )}
      </div>
    </div>
  );
}

What’s happening:

  - useChat manages the full conversation state and, by default, POSTs to /api/chat
  - messages updates token by token as the response streams in, re-rendering the UI live
  - The useEffect hook auto-scrolls the message container whenever a new message arrives
  - isLoading disables the input and button while a response is in flight, and error surfaces failures below the form


Step 6: Run the Application

Start your development server:

npm run dev

Visit http://localhost:3000 and interact with your new AI chatbot. You’ll see streaming responses and a polished interface.


Advanced Features: Unlocking AI Tooling

Out of the box, your bot answers questions from its training data. But what if you want it to fetch live weather, do calculations, or access external APIs? That’s where Tools come in.

What Are “Tools” in the Vercel AI SDK?

Tools are server-side functions that the LLM can invoke. You define a description and schema; the model decides when to use them during conversation, enabling powerful multi-step reasoning.

Example:
Your bot can answer, “What’s the weather in London in Celsius?” by:

  1. Calling a getWeather tool for London
  2. Using a convertFahrenheitToCelsius tool on the result
  3. Replying in natural language

Step 7: Add Tool Support to the API

Update src/app/api/chat/route.ts:

import { google } from '@ai-sdk/google';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: google('models/gemini-1.5-pro-latest'),
    messages,
    tools: {
      getWeather: tool({
        description: 'Get the current weather for a specific location. Always returns temperature in Fahrenheit.',
        parameters: z.object({
          location: z.string().describe('The city and state, e.g., San Francisco, CA'),
        }),
        execute: async ({ location }) => {
          // Simulated weather response; replace with real API as needed
          return {
            temperature: Math.floor(Math.random() * (100 - 30 + 1) + 30),
            high: Math.floor(Math.random() * (100 - 80 + 1) + 80),
            low: Math.floor(Math.random() * (50 - 30 + 1) + 30),
            conditions: ['Sunny', 'Cloudy', 'Rainy'][Math.floor(Math.random() * 3)],
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: 'Convert a temperature from Fahrenheit to Celsius.',
        parameters: z.object({
          temperature: z.number().describe('The temperature in Fahrenheit'),
        }),
        execute: async ({ temperature }) => {
          return {
            celsius: Math.round((temperature - 32) * (5 / 9)),
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}

Tip:
Swap simulated weather data for your preferred weather API when needed.


Step 8: Enable Multi-Step Tool Calls in the UI

Update the useChat call in src/app/page.tsx:

const { messages, input, handleInputChange, handleSubmit, isLoading, error } = useChat({
  maxSteps: 5,
});

The maxSteps option lets the hook automatically send tool results back to the model, so the AI can chain up to five tool calls before producing its final answer.


Step 9: Display Tool Invocations in the UI

Enhance your message loop to visualize tool calls:

{m.toolInvocations?.map(tool => (
  <div key={tool.toolCallId} className="my-2 p-2 bg-gray-100 rounded text-sm text-gray-700">
    <p className="font-semibold">Tool Call: `{tool.toolName}`</p>
    <pre className="mt-1 p-1 bg-gray-200 rounded text-xs">
      {JSON.stringify(tool.args, null, 2)}
    </pre>
  </div>
))}
{m.content}

Add a typing indicator for better UX: render a <div className="typing-indicator"> containing three empty <span /> elements while isLoading is true, and style it in src/app/globals.css:

.typing-indicator span {
  height: 8px;
  width: 8px;
  background-color: #9E9EA1;
  border-radius: 50%;
  display: inline-block;
  animation: typing-bounce 1.2s infinite ease-in-out;
}
.typing-indicator span:nth-child(1) { animation-delay: -0.4s; }
.typing-indicator span:nth-child(2) { animation-delay: -0.2s; }
@keyframes typing-bounce {
  0%, 60%, 100% { transform: scale(0.2); }
  30% { transform: scale(1); }
}

Test by asking:
“What is the weather in New York in Celsius?”

You’ll see the AI invoke both tools, display their results, and respond with the final answer—demonstrating real multi-step workflow orchestration.


Next Steps: Take Your AI App Further

You’ve built a secure, feature-rich, and extensible AI chatbot from scratch. Here’s how you can expand its capabilities:

  - Replace the simulated weather data with a real weather API
  - Add more tools, such as database lookups or calls to your own APIs
  - Persist chat history to a database so conversations survive page reloads
  - Add authentication and rate limiting before deploying publicly
  - Deploy to Vercel, where streaming route handlers work out of the box

Why API Testing Matters for AI-Powered Apps

For API-driven teams, robust testing and documentation are critical—especially when integrating AI features. Apidog streamlines API design, testing, and documentation in a single platform, accelerating your workflow and reducing errors.

