# Vercel AI SDK LangChain Template (Next.js)
This example demonstrates how to use the AI SDK with Next.js, LangChain, LangGraph, and OpenAI to create AI-powered streaming applications.
## Examples

### `/`

Basic chat example using LangChain's `ChatOpenAI` with message streaming and the `@ai-sdk/langchain` adapter.
### `/completion`

Simple text completion using the `useCompletion` hook with LangChain streaming:
- `useCompletion`: uses the AI SDK's completion hook for single-turn text generation
- `ChatOpenAI`: LangChain's OpenAI chat model
- `toUIMessageStream`: converts the LangChain stream to the AI SDK format

```ts
import { ChatOpenAI } from '@langchain/openai';
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';

const model = new ChatOpenAI({ model: 'gpt-4o-mini' });
const stream = await model.stream([{ role: 'user', content: prompt }]);

return createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```
### `/langgraph`

Demonstrates the `@ai-sdk/langchain` adapter with LangGraph:
- `toBaseMessages`: converts AI SDK `UIMessage`s to LangChain `BaseMessage` format
- `toUIMessageStream`: converts LangGraph streams to the AI SDK `UIMessageChunk` format

This example shows how to integrate a LangGraph agent with the AI SDK's `useChat` hook.
### `/multimodal`

Demonstrates sending images to the model for analysis using the `@ai-sdk/langchain` adapter:
- `image_url` format for vision models

This example showcases the multimodal input support in `convertUserContent()`, which handles images and files.
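The mapping the adapter performs can be sketched roughly as follows. Note the types below are simplified stand-ins for illustration, not the adapter's actual types:

```typescript
// Illustrative sketch only: convertUserContent() in @ai-sdk/langchain does
// this mapping for you. The shapes below are simplified stand-ins.
type UIFilePart = { type: 'file'; mediaType: string; url: string };
type ImageUrlContent = { type: 'image_url'; image_url: { url: string } };

// Map an AI SDK image file part to LangChain's image_url content block.
// Vision models accept either an https URL or a base64 data URL here.
function toImageUrlContent(part: UIFilePart): ImageUrlContent {
  return { type: 'image_url', image_url: { url: part.url } };
}

const part: UIFilePart = {
  type: 'file',
  mediaType: 'image/png',
  url: 'https://example.com/photo.png',
};
console.log(toImageUrlContent(part).type); // image_url
```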
### `/image-generation`

Demonstrates generating images as multimodal output using OpenAI's image generation tool:
- `ChatOpenAI` with `useResponsesApi: true` to access built-in tools
- `tools.imageGeneration()` from `@langchain/openai`

```ts
import { ChatOpenAI, tools } from '@langchain/openai';

const model = new ChatOpenAI({
  model: 'gpt-4o',
  useResponsesApi: true,
});

const modelWithImageGeneration = model.bindTools([
  tools.imageGeneration({
    size: '1024x1024',
    quality: 'medium',
    outputFormat: 'png',
  }),
]);
```
### `/createAgent`

Showcases LangChain's `createAgent` with the AI SDK adapter:
- `createAgent()` from `langchain`
- `tool` from `@langchain/core/tools`
- `toUIMessageStream` from `@ai-sdk/langchain`

### `/hitl`

Demonstrates LangChain's `humanInTheLoopMiddleware` for requiring user approval before executing sensitive tool actions:
- `humanInTheLoopMiddleware`: middleware that intercepts tool calls and requests user approval
- `addToolApprovalResponse` with the AI SDK's `dynamic-tool` parts
- `MemorySaver` to maintain conversation state across approvals

```ts
import { createAgent, humanInTheLoopMiddleware } from 'langchain';
import { MemorySaver } from '@langchain/langgraph';

const agent = createAgent({
  model,
  tools: [sendEmailTool, deleteFileTool, searchTool],
  checkpointer: new MemorySaver(),
  middleware: [
    humanInTheLoopMiddleware({
      interruptOn: {
        send_email: { allowedDecisions: ['approve', 'edit', 'reject'] },
        delete_file: { allowedDecisions: ['approve', 'reject'] },
        search: false, // Auto-approve safe operations
      },
    }),
  ],
});
```
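The `interruptOn` map also determines which decision options the client should offer for each interrupted tool call. A minimal, hypothetical helper for that lookup (mirroring the config above; not part of the adapter) might look like:

```typescript
// Hypothetical client-side helper: mirrors the server's interruptOn config
// so the approval UI can render the right buttons per tool. The tool names
// and decision lists match the agent config above.
type Decision = 'approve' | 'edit' | 'reject';

const interruptOn: Record<string, { allowedDecisions: Decision[] } | false> = {
  send_email: { allowedDecisions: ['approve', 'edit', 'reject'] },
  delete_file: { allowedDecisions: ['approve', 'reject'] },
  search: false, // auto-approved: never interrupts
};

function decisionsFor(toolName: string): Decision[] {
  const entry = interruptOn[toolName];
  // An empty array means no approval UI is needed for this tool.
  return entry ? entry.allowedDecisions : [];
}

console.log(decisionsFor('send_email')); // ['approve', 'edit', 'reject']
```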
### `/custom-data`

Demonstrates custom streaming events from LangGraph tools:
- `config.writer()` emits custom events from inside tools
- The `type` field becomes `data-{type}` events (e.g., `data-progress`)
- Include an `id` field to persist data in `message.parts` for rendering
- Data without an `id` is delivered via the `onData` callback only

### `/langsmith`

Connect directly to a LangGraph app from the browser using `LangSmithDeploymentTransport`:

- `LangSmithDeploymentTransport` to create a transport for client-side communication

## Deploy your own

Deploy the example using Vercel:
## How to use

Execute `create-next-app` with npm, Yarn, or pnpm to bootstrap the example:
```bash
npx create-next-app --example https://github.com/vercel/ai/tree/main/examples/next-langchain next-langchain-app
```

```bash
yarn create next-app --example https://github.com/vercel/ai/tree/main/examples/next-langchain next-langchain-app
```

```bash
pnpm create next-app --example https://github.com/vercel/ai/tree/main/examples/next-langchain next-langchain-app
```
To run the example locally you need to:

1. Add your OpenAI API key to a `.env.local` file.
2. Run `pnpm install` to install the required dependencies.
3. Run `pnpm dev` to launch the development server.

## Using the adapter

### Converting messages

```ts
import { toBaseMessages } from '@ai-sdk/langchain';

// Simple one-line conversion - no factory functions needed!
const langchainMessages = await toBaseMessages(uiMessages);
```
### Streaming from a LangGraph graph

```ts
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';

// Convert messages
const langchainMessages = await toBaseMessages(messages);

// Stream from graph
const stream = await graph.stream(
  { messages: langchainMessages },
  { streamMode: ['values', 'messages'] },
);

// Return UI stream response
return createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```
### createAgent with tools

```ts
import { createAgent } from 'langchain';
import { tool } from '@langchain/core/tools';
import { toBaseMessages, toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';
import { z } from 'zod';

// Define a tool using LangChain's tool helper
const weatherTool = tool(
  async ({ city }) => `Weather in ${city}: sunny, 72°F`,
  {
    name: 'get_weather',
    description: 'Get the current weather in a location',
    schema: z.object({ city: z.string() }),
  },
);

// Create a LangChain agent
const agent = createAgent({
  model: 'openai:gpt-4o-mini',
  tools: [weatherTool],
  systemPrompt: 'You are a helpful weather assistant.',
});

// Convert messages and stream with the adapter
const langchainMessages = await toBaseMessages(messages);
const stream = await agent.stream(
  { messages: langchainMessages },
  { streamMode: ['values', 'messages'] },
);

return createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```
### Custom data from tools

```ts
import { tool, type ToolRuntime } from 'langchain';
import { z } from 'zod';

const analyzeDataTool = tool(
  async ({ dataSource }, config: ToolRuntime) => {
    // Emit progress updates - becomes 'data-progress' in the UI
    config.writer?.({
      type: 'progress',
      id: 'analysis-1', // Include 'id' to persist in message.parts
      step: 'processing',
      message: 'Running analysis...',
      progress: 50,
    });
    // ... perform work ...
    return 'Analysis complete';
  },
  {
    name: 'analyze_data',
    description: 'Analyze data with progress updates',
    schema: z.object({ dataSource: z.string() }),
  },
);

// Enable 'custom' stream mode
const stream = await graph.stream(
  { messages: langchainMessages },
  { streamMode: ['values', 'messages', 'custom'] },
);
```
### Client-side LangSmith transport

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { LangSmithDeploymentTransport } from '@ai-sdk/langchain';
import { useMemo } from 'react';

function Chat() {
  const transport = useMemo(
    () =>
      new LangSmithDeploymentTransport({
        // Local development server:
        url: 'http://localhost:2024',
        // Or for a LangSmith deployment:
        // url: 'https://your-deployment.langsmith.app',
        // apiKey: process.env.NEXT_PUBLIC_LANGSMITH_API_KEY,
      }),
    [],
  );

  const { messages, sendMessage, status } = useChat({
    transport,
  });

  // ... render chat UI
}
```
## `graph.stream()` vs `streamEvents()`

The `@ai-sdk/langchain` adapter supports both `graph.stream()` and `streamEvents()`. Here's when to use each:
### `graph.stream()` with `streamMode`

| Use Case | Why |
|---|---|
| LangGraph workflows | Optimized for state-based graphs with values, messages, updates modes |
| Tool execution tracking | Clean tool call lifecycle with messages mode |
| Custom data streaming | Use custom mode with config.writer() for typed events |
| State snapshots | Get full state after each step with values mode |
| Production apps | Simpler integration with AI SDK's toUIMessageStream |
```ts
const stream = await graph.stream(
  { messages },
  { streamMode: ['values', 'messages'] },
);
```
### `streamEvents()`

| Use Case | Why |
|---|---|
| Debugging/observability | Get detailed events for every component in the chain |
| Filtering by event type | Filter for specific events like on_chat_model_stream, on_tool_start |
| Run metadata access | Access run IDs, names, tags for each component |
| LCEL migration | When migrating apps that rely on callback-based streaming |
| Simple model streaming | Direct model streaming without LangGraph complexity |
```ts
const streamEvents = model.streamEvents(messages, {
  version: 'v2',
});
```
| Event | Description |
|---|---|
| `on_chat_model_start` | Model invocation started |
| `on_chat_model_stream` | Token chunk received |
| `on_chat_model_end` | Model completed with full message |
| `on_tool_start` | Tool execution started |
| `on_tool_end` | Tool execution completed |
| `on_chain_start`/`end` | Chain/graph lifecycle events |
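Filtering the event stream by the `event` names in the table above is straightforward. A sketch, with the event shape simplified to just the fields shown:

```typescript
// Simplified shape of streamEvents() v2 events: only `event` and `data`
// are modeled here; real events also carry run IDs, names, and tags.
type StreamEvent = { event: string; data?: unknown };

// Keep only token chunks from the model.
function isTokenChunk(e: StreamEvent): boolean {
  return e.event === 'on_chat_model_stream';
}

const events: StreamEvent[] = [
  { event: 'on_chat_model_start' },
  { event: 'on_chat_model_stream', data: { chunk: 'Hel' } },
  { event: 'on_chat_model_stream', data: { chunk: 'lo' } },
  { event: 'on_chat_model_end' },
];
console.log(events.filter(isTokenChunk).length); // 2
```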
For most LangGraph applications, graph.stream() with appropriate streamMode options is recommended. Use streamEvents() when you need the additional granularity for debugging or when working with pure LangChain (non-LangGraph) applications.
## Learn More

To learn more about LangChain, LangGraph, OpenAI, Next.js, and the AI SDK, take a look at the following resources: