Frontend React hooks for AI-powered user interfaces with Vercel AI SDK v6.

**Version**: AI SDK v6.0.42 (Stable)
**Framework**: React 18+/19, Next.js 14+/15+
**Last Updated**: 2026-01-20

* * *
Messages now use a `.parts` array instead of `.content`:

```tsx
// ❌ v5 (OLD)
{messages.map(m => (
  <div key={m.id}>{m.content}</div>
))}

// ✅ v6 (NEW)
{messages.map(m => (
  <div key={m.id}>
    {m.parts.map((part, i) => {
      if (part.type === 'text') return <span key={i}>{part.text}</span>;
      if (part.type === 'tool-invocation') return <ToolCall key={i} tool={part} />;
      if (part.type === 'file') return <FilePreview key={i} file={part} />;
      return null;
    })}
  </div>
))}
```
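The same parts traversal is often needed outside JSX (logging, copy-to-clipboard, persistence). A minimal framework-free sketch, assuming the v6 part shapes shown above — `partsToText` is a hypothetical helper, not an SDK export:

```typescript
// Hypothetical helper: flatten a v6 message's parts into plain text.
// Assumes the part shapes documented above; not an SDK export.
type MessagePart =
  | { type: 'text'; text: string }
  | { type: 'tool-invocation'; toolName: string }
  | { type: 'file'; mimeType: string };

function partsToText(parts: MessagePart[]): string {
  return parts
    .filter((p): p is Extract<MessagePart, { type: 'text' }> => p.type === 'text')
    .map(p => p.text)
    .join('');
}
```

Non-text parts (tool calls, files) are simply skipped, which matches the `return null` branch in the JSX above.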
Supported part types:

- `text` - Text content with `.text` property
- `tool-invocation` - Tool calls with `.toolName`, `.args`, `.result`
- `file` - File attachments with `.mimeType`, `.data`
- `reasoning` - Model reasoning (when available)
- `source` - Source citations

**3\. Type-Safe Agent Messages**

Type chat messages against an agent's schema with `InferAgentUIMessage<typeof agent>`:

```tsx
import { useChat } from '@ai-sdk/react';
import type { InferAgentUIMessage } from 'ai';
import { myAgent } from './agent';

export default function AgentChat() {
  const { messages, sendMessage } = useChat<InferAgentUIMessage<typeof myAgent>>({
    api: '/api/chat',
  });
  // messages are now type-checked against the agent schema
}
```

**4\. Tool Approval Workflows (Human-in-the-Loop)**

Request user confirmation before executing tools:

```tsx
import { useChat } from '@ai-sdk/react';

export default function ChatWithApproval() {
  const { messages, sendMessage, addToolApprovalResponse } = useChat({
    api: '/api/chat',
  });

  const handleApprove = (toolCallId: string) => {
    addToolApprovalResponse({
      toolCallId,
      approved: true, // or false to deny
    });
  };

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.toolInvocations?.map(tool => (
            tool.state === 'awaiting-approval' && (
              <div key={tool.toolCallId}>
                <p>Approve tool call: {tool.toolName}?</p>
                <button onClick={() => handleApprove(tool.toolCallId)}>
                  Approve
                </button>
                <button onClick={() => addToolApprovalResponse({ toolCallId: tool.toolCallId, approved: false })}>
                  Deny
                </button>
              </div>
            )
          ))}
        </div>
      ))}
    </div>
  );
}
```

**5\. Auto-Submit Capability**

Automatically continue the conversation after approvals are handled:

```tsx
import { useChat, lastAssistantMessageIsCompleteWithApprovalResponses } from '@ai-sdk/react';

export default function AutoSubmitChat() {
  const { messages, sendMessage } = useChat({
    api: '/api/chat',
    // Automatically resubmit after all approval responses are provided
    sendAutomaticallyWhen: lastAssistantMessageIsCompleteWithApprovalResponses,
  });
}
```
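The approval UI in section 4 filters for `state === 'awaiting-approval'` inside JSX; the same check is useful as a plain function, e.g. to show a pending-approvals badge. A minimal sketch assuming the message/invocation shapes used above — `pendingApprovals` is a hypothetical helper, not an SDK export:

```typescript
// Hypothetical helper: collect all tool calls still awaiting user approval.
// Mirrors the `state === 'awaiting-approval'` check in the JSX above.
interface ToolInvocation {
  toolCallId: string;
  toolName: string;
  state: 'awaiting-approval' | 'call' | 'result';
}

interface ChatMessage {
  id: string;
  toolInvocations?: ToolInvocation[];
}

function pendingApprovals(messages: ChatMessage[]): ToolInvocation[] {
  return messages.flatMap(m =>
    (m.toolInvocations ?? []).filter(t => t.state === 'awaiting-approval')
  );
}
```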
**6\. Structured Output in Chat**

Generate structured data alongside tool calling (previously only available in `useObject`):

```tsx
import { useChat } from '@ai-sdk/react';
import { z } from 'zod';

const schema = z.object({
  summary: z.string(),
  sentiment: z.enum(['positive', 'neutral', 'negative']),
});

export default function StructuredChat() {
  const { messages, sendMessage } = useChat({
    api: '/api/chat',
    // The server can now stream structured output alongside chat messages
  });
}
```
**v4 (OLD - REMOVED):**

```tsx
const { messages, input, handleInputChange, handleSubmit, append } = useChat();

<form onSubmit={handleSubmit}>
  <input value={input} onChange={handleInputChange} />
</form>
```

**v5 (NEW - CORRECT):**

```tsx
const { messages, sendMessage } = useChat();
const [input, setInput] = useState('');

<form onSubmit={(e) => {
  e.preventDefault();
  sendMessage({ content: input });
  setInput('');
}}>
  <input value={input} onChange={(e) => setInput(e.target.value)} />
</form>
```
Breaking changes:

- `input`, `handleInputChange`, `handleSubmit` no longer exist
- `append()` → `sendMessage()`: new method for sending messages
- `onResponse` removed: use `onFinish` instead
- `initialMessages` → controlled mode: use the `messages` prop for full control
- `maxSteps` removed: handle on the server side only

See `references/use-chat-migration.md` for the complete migration guide.

⚠️ **Deprecation Notice**: `useAssistant` is deprecated as of AI SDK v5. The OpenAI Assistants API v2 will sunset on August 26, 2026. For new projects, use `useChat` with custom backend logic instead. See the openai-assistants skill for migration guidance.
```tsx
'use client';

import { useAssistant } from '@ai-sdk/react';
import { useState, FormEvent } from 'react';

export default function AssistantChat() {
  const { messages, sendMessage, isLoading, error } = useAssistant({
    api: '/api/assistant',
  });
  const [input, setInput] = useState('');

  const handleSubmit = (e: FormEvent) => {
    e.preventDefault();
    sendMessage({ content: input });
    setInput('');
  };

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={isLoading}
        />
      </form>
      {error && <div>{error.message}</div>}
    </div>
  );
}
```
See `references/top-ui-errors.md` for complete documentation. Quick reference:

**`SyntaxError: Unexpected token in JSON at position X`**

```ts
// ✅ CORRECT
return result.toDataStreamResponse();

// ❌ WRONG
return new Response(result.textStream);
```
```ts
// App Router - use toDataStreamResponse()
export async function POST(req: Request) {
  const result = streamText({ /* ... */ });
  return result.toDataStreamResponse(); // ✅
}

// Pages Router - use pipeDataStreamToResponse()
export default async function handler(req, res) {
  const result = streamText({ /* ... */ });
  return result.pipeDataStreamToResponse(res); // ✅
}
```
The `body` option is captured at first render only.

```tsx
// ❌ WRONG - body captured once
const { userId } = useUser();
const { messages } = useChat({
  body: { userId }, // Stale!
});

// ✅ CORRECT - send fresh values with each message
const { userId } = useUser();
const { messages, sendMessage } = useChat();

sendMessage({
  content: input,
  data: { userId }, // Fresh on each send
});
```
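The staleness above is ordinary JavaScript closure capture: options passed to `useChat` on the first render keep pointing at the first render's values. A framework-free sketch of the same pitfall (`makeSenderWithBody` is a hypothetical stand-in for the `body` option):

```typescript
// Framework-free illustration of the stale-capture pitfall above.
// `makeSenderWithBody` closes over `userId` once, like the `body` option.
function makeSenderWithBody(userId: string) {
  // Captured once -- later changes to the caller's variable are invisible here.
  return (content: string) => ({ content, userId });
}

let userId = 'anonymous';
const staleSend = makeSenderWithBody(userId); // captured at "first render"
userId = 'user-42';                           // user logs in later

const staleMsg = staleSend('hi');             // still carries 'anonymous'

// Passing the value at call time (like sendMessage's `data`) stays fresh:
const freshSend = (content: string, data: { userId: string }) => ({ content, ...data });
const freshMsg = freshSend('hi', { userId }); // carries 'user-42'
```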
```tsx
// ❌ WRONG
useEffect(() => {
  saveMessages(messages);
}, [messages, saveMessages]); // saveMessages triggers re-render!

// ✅ CORRECT
useEffect(() => {
  saveMessages(messages);
}, [messages]); // Only depend on messages
```
See `references/top-ui-errors.md` for 13 more common errors (18 total documented).

```tsx
// ✅ GOOD - Streaming (shows tokens as they arrive)
const { messages } = useChat({ api: '/api/chat' });

// ❌ BAD - Non-streaming (user waits for the full response)
const response = await fetch('/api/chat', { method: 'POST' });
```
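The difference above is incremental consumption: a streaming UI can repaint on every chunk, while a non-streaming fetch paints only once at the end. A framework-free sketch, where `renderChunk` is a hypothetical stand-in for a React state update:

```typescript
// Framework-free sketch of streaming consumption: the UI callback fires
// once per chunk, so partial text is visible before the response finishes.
function consumeStream(
  chunks: Iterable<string>,
  renderChunk: (textSoFar: string) => void
): string {
  let text = '';
  for (const chunk of chunks) {
    text += chunk;
    renderChunk(text); // the UI can re-render here, per chunk
  }
  return text;
}

const paints: string[] = [];
const full = consumeStream(['Hel', 'lo', ' world'], s => paints.push(s));
// three visible updates instead of one final paint
```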
**Show a typing indicator:**

```tsx
{isLoading && <div>AI is typing...</div>}
```

**Offer a stop button:**

```tsx
{isLoading && <button onClick={stop}>Stop</button>}
```

**Auto-scroll to the latest message:**

```tsx
useEffect(() => {
  messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);
```

**Disable input while loading:**

```tsx
<input disabled={isLoading} />
```
See `references/streaming-patterns.md` for comprehensive best practices.

When calling `useChat` or `useCompletion` from effects (auto-resume, initial messages), guard against double execution to prevent duplicate API calls and token waste.

**Problem:**

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useEffect } from 'react';

export default function Chat() {
  const { messages, sendMessage, resumeStream } = useChat({
    api: '/api/chat',
    resume: true,
  });

  useEffect(() => {
    // ❌ Triggers twice in strict mode → two concurrent streams
    sendMessage({ content: 'Hello' });
    // or resumeStream();
  }, []);
}
```

**Solution:**

```tsx
// ✅ Use a ref to track execution
import { useRef } from 'react';

const hasSentRef = useRef(false);

useEffect(() => {
  if (hasSentRef.current) return;
  hasSentRef.current = true;
  sendMessage({ content: 'Hello' });
}, []);

// For resumeStream specifically:
const hasResumedRef = useRef(false);

useEffect(() => {
  if (!autoResume || hasResumedRef.current || status === 'streaming') return;
  hasResumedRef.current = true;
  resumeStream();
}, [autoResume, resumeStream, status]);
```
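The ref guard above generalizes to a tiny run-once wrapper. A framework-free sketch (`once` is a hypothetical utility, not an SDK export):

```typescript
// Hypothetical run-once wrapper mirroring the hasSentRef guard above:
// the wrapped function executes on the first call and is a no-op afterwards,
// which is exactly what the ref guard achieves across strict-mode double effects.
function once<A extends unknown[]>(fn: (...args: A) => void): (...args: A) => void {
  let called = false;
  return (...args: A) => {
    if (called) return;
    called = true;
    fn(...args);
  };
}

let sends = 0;
const sendOnce = once(() => { sends += 1; });
sendOnce(); // runs
sendOnce(); // ignored, like the second strict-mode invocation
```

Note that in React the flag must live in a `useRef` (as above) rather than module scope, so each mounted component instance gets its own guard.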
**Current (v6):**

```json
{
  "dependencies": {
    "ai": "^6.0.8",
    "@ai-sdk/react": "^3.0.6",
    "@ai-sdk/openai": "^3.0.2",
    "react": "^18.3.0",
    "zod": "^3.24.2"
  }
}
```

**Legacy (v5):**

```json
{
  "dependencies": {
    "ai": "^5.0.99",
    "@ai-sdk/react": "^1.0.0",
    "@ai-sdk/openai": "^2.0.68"
  }
}
```
`templates/`:

`references/` for: