The Model Context Protocol (MCP) aims to standardize how AI models interact with external tools and services. It defines a common interface, allowing different models and tool providers to communicate effectively. However, integrating these MCP-compliant tools directly into existing AI frameworks like LangChain requires adaptation.
This is where the langchain-mcp-adapters library comes in. It acts as a bridge, translating MCP tools into a format that LangChain and its agent framework, LangGraph, can understand and use. The library provides a lightweight wrapper that lets developers leverage the growing ecosystem of MCP tools within their LangChain applications.
Key features include:
- MCP Tool Conversion: Automatically converts MCP tools into LangChain-compatible BaseTool objects.
- Multi-Server Client: Provides a robust client (MultiServerMCPClient) capable of connecting to multiple MCP servers simultaneously and aggregating tools from various sources.
- Transport Flexibility: Supports common MCP communication transports such as standard input/output (stdio) and Server-Sent Events (sse).
This tutorial will guide you through setting up MCP servers, connecting to them using the adapter library, and integrating the loaded tools into a LangGraph agent.

What Is an MCP Server? How Does It Work?
Understanding a few core concepts is essential before diving into the examples:
MCP Server:
- An MCP server exposes tools (functions) that an AI model can call.
- The mcp library (a dependency of langchain-mcp-adapters) provides tools like FastMCP to easily create these servers in Python.
- Tools are defined using the @mcp.tool() decorator, which automatically infers the input schema from type hints and docstrings (see the minimal sketch after this list).
- Servers can also define prompts using @mcp.prompt(), providing structured conversational starters or instructions.
- Servers are run by specifying a transport mechanism (e.g., mcp.run(transport="stdio") or mcp.run(transport="sse")). stdio runs the server as a subprocess communicating via standard input/output, while sse typically runs a simple web server for communication.
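As a minimal sketch of this decorator-based workflow (the Echo server name and echo tool here are illustrative only; the full, working math server appears in Step 1 below):

# minimal_echo_server.py -- illustrative sketch; see the full math server in Step 1
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")  # hypothetical server name, used only for illustration

@mcp.tool()
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text  # the input schema ({"text": str}) is inferred from the type hint

if __name__ == "__main__":
    mcp.run(transport="stdio")  # or transport="sse" to serve over HTTP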
MCP Client (langchain-mcp-adapters):
- The client's role is to connect to one or more MCP servers.
- It handles the communication protocol details (stdio, sse).
- It fetches the list of available tools and their definitions (name, description, input schema) from the server(s).
- The MultiServerMCPClient class is the primary way to manage connections, especially when dealing with multiple tool servers.
Tool Conversion:
- MCP tools have their own definition format, while LangChain uses its BaseTool class structure.
- The langchain-mcp-adapters library provides functions like load_mcp_tools (found in langchain_mcp_adapters.tools) which connect to a server via an active ClientSession, list the MCP tools, and wrap each one as a LangChain StructuredTool.
- This wrapper handles invoking the actual MCP tool call (session.call_tool) when the LangChain agent decides to use the tool, and correctly formats the response (a quick sketch follows this list).
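Because the converted tools are ordinary LangChain tools, you can exercise one directly before wiring up an agent. A minimal sketch, assuming an already-initialized ClientSession named session connected to the math server from Step 1 below (the smoke_test helper is hypothetical):

# Sketch: invoke a converted MCP tool directly; assumes `session` is an initialized ClientSession
from langchain_mcp_adapters.tools import load_mcp_tools

async def smoke_test(session):
    tools = await load_mcp_tools(session)  # MCP tool definitions -> LangChain StructuredTool objects
    add_tool = next(t for t in tools if t.name == "add")
    result = await add_tool.ainvoke({"a": 2, "b": 3})  # the wrapper calls session.call_tool behind the scenes
    print(result)  # the tool's result content, e.g. text like "5"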
Prompt Conversion:
- Similar to tools, MCP prompts can be fetched using load_mcp_prompt (from langchain_mcp_adapters.prompts).
- This function retrieves the prompt structure from the MCP server and converts it into a list of LangChain HumanMessage or AIMessage objects, suitable for initializing or guiding a conversation (see the sketch after this list).
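A minimal sketch of fetching the configure_assistant prompt defined later in this tutorial, assuming an active ClientSession named session (the load_assistant_prompt helper is hypothetical, and the exact function signature may vary slightly between library versions):

# Sketch: load an MCP prompt as LangChain messages; assumes an initialized ClientSession `session`
from langchain_mcp_adapters.prompts import load_mcp_prompt

async def load_assistant_prompt(session):
    messages = await load_mcp_prompt(
        session,
        "configure_assistant",
        arguments={"skills": "basic arithmetic"},
    )
    # `messages` is a list of HumanMessage/AIMessage objects you can prepend to a conversation
    return messages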
Install langchain-mcp-adapters
First, install the necessary packages:
pip install langchain-mcp-adapters langgraph langchain-openai # Or your preferred LangChain LLM integration
You'll also need to configure API keys for your chosen language model provider, typically by setting environment variables:
export OPENAI_API_KEY=<your_openai_api_key>
# or export ANTHROPIC_API_KEY=<...> etc.
Build a Quick Single MCP Server with langchain-mcp-adapters
Let's build a simple example: an MCP server providing math functions and a LangGraph agent using those functions.
Step 1: Create the MCP Server (math_server.py)
# math_server.py
import sys

from mcp.server.fastmcp import FastMCP

# Initialize the MCP server with a name
mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    # Server-side log (sent to stderr: with stdio transport, stdout is reserved for protocol messages)
    print(f"Executing add({a}, {b})", file=sys.stderr)
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    print(f"Executing multiply({a}, {b})", file=sys.stderr)  # Server-side log
    return a * b

# Example prompt definition
@mcp.prompt()
def configure_assistant(skills: str) -> list[dict]:
    """Configures the assistant with specified skills."""
    return [
        {
            "role": "assistant",  # Corresponds to AIMessage
            "content": f"You are a helpful assistant. You have the following skills: {skills}. Always use only one tool at a time.",
        }
    ]

if __name__ == "__main__":
    # Run the server using stdio transport
    print("Starting Math MCP server via stdio...", file=sys.stderr)
    mcp.run(transport="stdio")
Save this code as math_server.py.
Step 2: Create the Client and Agent (client_app.py)
# client_app.py
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# --- IMPORTANT: Update this path ---
# Get the absolute path to the math_server.py file
current_dir = os.path.dirname(os.path.abspath(__file__))
math_server_script_path = os.path.join(current_dir, "math_server.py")
# ---

async def main():
    model = ChatOpenAI(model="gpt-4o")  # Or your preferred model

    # Configure parameters to run the math_server.py script
    server_params = StdioServerParameters(
        command="python",  # The command to execute
        args=[math_server_script_path],  # Arguments (the script path)
        # cwd=..., env=...  # Optional working dir and environment vars
    )

    print("Connecting to MCP server...")
    # Establish connection using the stdio_client context manager
    async with stdio_client(server_params) as (read, write):
        # Create a ClientSession using the read/write streams
        async with ClientSession(read, write) as session:
            print("Initializing session...")
            # Handshake with the server
            await session.initialize()
            print("Session initialized.")

            print("Loading MCP tools...")
            # Fetch MCP tools and convert them to LangChain tools
            tools = await load_mcp_tools(session)
            print(f"Loaded tools: {[tool.name for tool in tools]}")

            # Create a LangGraph ReAct agent using the model and loaded tools
            agent = create_react_agent(model, tools)

            print("Invoking agent...")
            # Run the agent
            inputs = {"messages": [("human", "what's (3 + 5) * 12?")]}
            async for event in agent.astream_events(inputs, version="v1"):
                print(event)  # Stream events for observability

            # Or get the final response directly:
            # final_response = await agent.ainvoke(inputs)
            # print("Agent response:", final_response['messages'][-1].content)

if __name__ == "__main__":
    asyncio.run(main())
Save this as client_app.py in the same directory as math_server.py.
To Run:
Execute the client script:
python client_app.py
The client script will automatically start math_server.py as a subprocess, connect to it, load the add and multiply tools, and use the LangGraph agent to solve the math problem by calling those tools via the MCP server. You'll see logs from both the client and the server.
Connecting to Multiple MCP Servers
Often, you'll want to combine tools from different specialized servers. MultiServerMCPClient makes this straightforward.
Step 1: Create Another Server (weather_server.py)
Let's create a weather server that runs using SSE transport.
# weather_server.py
from mcp.server.fastmcp import FastMCP
import uvicorn  # Needs: pip install uvicorn (only if you run the ASGI app manually)

# Host and port are server settings; SSE serves at the /sse endpoint (port 8000 by default)
mcp = FastMCP("Weather", host="0.0.0.0", port=8000)

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    print(f"Executing get_weather({location})")
    # In a real scenario, this would call a weather API
    return f"It's always sunny in {location}"

if __name__ == "__main__":
    # Run the server using SSE transport (the mcp library builds an ASGI app and serves it with uvicorn)
    print("Starting Weather MCP server via SSE on port 8000...")
    # uvicorn.run(mcp.sse_app(), host="0.0.0.0", port=8000)  # You can also run the ASGI app manually
    mcp.run(transport="sse")  # Or use the mcp.run convenience
Save this as weather_server.py.
Step 2: Update the Client to Use MultiServerMCPClient (multi_client_app.py)
# multi_client_app.py
import asyncio
import os

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# --- IMPORTANT: Update paths ---
current_dir = os.path.dirname(os.path.abspath(__file__))
math_server_script_path = os.path.join(current_dir, "math_server.py")
# Weather server runs separately, connect via URL
# ---

async def main():
    model = ChatOpenAI(model="gpt-4o")

    # Define connections for multiple servers
    server_connections = {
        "math_service": {  # Unique name for this connection
            "transport": "stdio",
            "command": "python",
            "args": [math_server_script_path],
            # Add other StdioConnection params if needed (env, cwd, etc.)
        },
        "weather_service": {  # Unique name for this connection
            "transport": "sse",
            "url": "http://localhost:8000/sse",  # URL where weather_server is running
            # Add other SSEConnection params if needed (headers, timeout, etc.)
        },
    }

    print("Connecting to multiple MCP servers...")
    # Use the MultiServerMCPClient context manager
    async with MultiServerMCPClient(server_connections) as client:
        print("Connections established.")

        # Get *all* tools from *all* connected servers
        all_tools = client.get_tools()
        print(f"Loaded tools: {[tool.name for tool in all_tools]}")

        # Create agent with the combined tool list
        agent = create_react_agent(model, all_tools)

        # --- Interact with the agent ---
        print("\nInvoking agent for math query...")
        math_inputs = {"messages": [("human", "what's (3 + 5) * 12?")]}
        math_response = await agent.ainvoke(math_inputs)
        print("Math Response:", math_response['messages'][-1].content)

        print("\nInvoking agent for weather query...")
        weather_inputs = {"messages": [("human", "what is the weather in nyc?")]}
        weather_response = await agent.ainvoke(weather_inputs)
        print("Weather Response:", weather_response['messages'][-1].content)

        # --- Example: Getting a prompt ---
        # print("\nGetting math server prompt...")
        # prompt_messages = await client.get_prompt(
        #     server_name="math_service",  # Use the name defined in connections
        #     prompt_name="configure_assistant",
        #     arguments={"skills": "basic arithmetic"}
        # )
        # print("Prompt:", prompt_messages)

if __name__ == "__main__":
    # Start the weather server first in a separate terminal:
    #   python weather_server.py
    # Then run this client script:
    asyncio.run(main())
Save this as multi_client_app.py.
To Run:
- Start the weather server in one terminal:
python weather_server.py
- Run the multi-client app in another terminal:
python multi_client_app.py
The MultiServerMCPClient will start the math_server.py subprocess (stdio) and connect to the running weather_server.py (sse). It aggregates the tools (add, multiply, get_weather), which are then available to the LangGraph agent.
Integration with LangGraph API Server
You can deploy a LangGraph agent using MCP tools as a persistent API service using langgraph deploy. The key is to manage the MultiServerMCPClient lifecycle correctly within the LangGraph application context.
Create a graph.py file:
# graph.py
from contextlib import asynccontextmanager
import os

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI  # Or Anthropic, etc.

# --- IMPORTANT: Update paths ---
# Assuming servers are relative to where the LangGraph server runs
math_server_script_path = os.path.abspath("math_server.py")
# ---

# Define connections (ensure paths/URLs are correct for the server environment)
server_connections = {
    "math_service": {
        "transport": "stdio",
        "command": "python",
        "args": [math_server_script_path],
    },
    "weather_service": {
        "transport": "sse",
        "url": "http://localhost:8000/sse",  # Weather server must be running independently
    },
}

model = ChatOpenAI(model="gpt-4o")

# Use an async context manager to handle client setup/teardown
@asynccontextmanager
async def lifespan(_app):  # LangGraph expects this structure for lifespan management
    async with MultiServerMCPClient(server_connections) as client:
        print("MCP Client initialized within lifespan.")
        # Create the agent *inside* the context where the client is active
        agent = create_react_agent(model, client.get_tools())
        yield {"agent": agent}  # Make the agent available

# No need for a separate main graph definition if lifespan yields it
Configure your langgraph.json (or pyproject.toml under [tool.langgraph]) to use this graph definition with the lifespan manager:
// langgraph.json (example)
{
  "dependencies": ["."],  // Or specify required packages
  "graphs": {
    "my_mcp_agent": {
      "entrypoint": "graph:agent",  // Refers to the key yielded by lifespan
      "lifespan": "graph:lifespan"
    }
  }
}
Now, when you run langgraph up, the lifespan function will execute, starting the MultiServerMCPClient (and the stdio math server). The agent created within this context will be served by LangGraph. Remember that the SSE weather server still needs to be run separately.
Server Transports (stdio vs. SSE)
stdio:
- Communication: Via the server process's standard input and output streams.
- Pros: Simple setup for local development; the client manages the server lifecycle. No networking involved.
- Cons: Tightly coupled; less suitable for distributed systems or non-Python servers. Requires command and args configuration.
sse (Server-Sent Events):
- Communication: Over HTTP using the SSE protocol. The server runs as a web service (often using FastAPI/Uvicorn implicitly).
- Pros: Standard web protocol; suitable for networked/remote servers, potentially implemented in different languages. Server runs independently.
- Cons: Requires the server to be running separately. Needs url configuration.
Choose the transport based on your deployment needs.
Advanced Client Configuration for langchain-mcp-adapters
The StdioConnection and SSEConnection dictionaries within MultiServerMCPClient accept additional optional parameters for finer control:
- Stdio: env (custom environment variables for the subprocess), cwd (working directory), encoding, encoding_error_handler, session_kwargs (passed to mcp.ClientSession).
- SSE: headers (custom HTTP headers), timeout (HTTP connection timeout), sse_read_timeout, session_kwargs.
Refer to the MultiServerMCPClient definition in langchain_mcp_adapters/client.py for details.
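As an illustration only (the values below are made up, and you should verify the exact parameter names against client.py in your installed version), a connection map using some of these options might look like this:

# Sketch: connection config using optional stdio/SSE parameters (values are illustrative)
server_connections = {
    "math_service": {
        "transport": "stdio",
        "command": "python",
        "args": ["math_server.py"],
        "env": {"LOG_LEVEL": "debug"},  # extra environment variables for the subprocess
        "cwd": "/path/to/servers",      # working directory for the subprocess
    },
    "weather_service": {
        "transport": "sse",
        "url": "http://localhost:8000/sse",
        "headers": {"Authorization": "Bearer <token>"},  # custom HTTP headers
        "timeout": 30,                  # HTTP connection timeout (seconds)
    },
}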
Conclusion
The langchain-mcp-adapters library effectively bridges the gap between the standardized Model Context Protocol and the flexible LangChain ecosystem. By providing the MultiServerMCPClient and automatic tool conversion, it allows developers to easily incorporate diverse, MCP-compliant tools into their LangChain agents and LangGraph applications.
The core workflow involves:
- Defining tools (and optionally prompts) in an MCP server using @mcp.tool().
- Configuring the MultiServerMCPClient with connection details (stdio or sse) for each server.
- Using the client context manager (async with ...) to connect and fetch tools via client.get_tools().
- Passing the retrieved LangChain-compatible tools to your agent (create_react_agent or custom agents).
This enables building powerful, modular AI applications that leverage specialized, external tools through a standardized protocol. Explore the examples and tests within the repository for further insights.