The Model Context Protocol (MCP) is an open protocol developed by Anthropic that addresses a fundamental challenge in Large Language Model (LLM) applications: their isolation from external data sources and tools. This tutorial will guide you through implementing MCP with LangChain, providing you with the knowledge to create sophisticated applications that leverage both technologies effectively.

Understanding MCP and Its Purpose
Model Context Protocol aims to standardize how LLM-based applications connect to diverse external systems. Think of MCP as the "USB-C for AI" - a universal interface enabling seamless, secure, and scalable data exchange between LLMs/AI agents and external resources.
MCP employs a client-server architecture:
- MCP Hosts: AI applications that need to access external data
- MCP Servers: Data or tool providers that supply information to the hosts
The protocol facilitates a clear separation of concerns, allowing developers to create modular, reusable connectors while maintaining robust security through granular permission controls.
Technical Architecture
MCP's architecture consists of three primary components:
- Server: The MCP server exposes tools and data sources through a standardized API
- Client: The client application communicates with the server to access tools and data
- Adapter: LangChain provides adapters that simplify the integration between MCP servers and LLM applications
The communication flow follows this pattern:
1. The LangChain application requests data or tool execution
2. The MCP adapter transforms the request into the MCP protocol format
3. The server processes the request and returns results
4. The adapter transforms the response back into a format usable by LangChain
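Under the hood, each of these requests and responses travels as a JSON-RPC 2.0 message. For illustration, here is roughly what a call to the add tool we define later looks like on the wire, shown as a Python dict (the id and argument values are arbitrary):
# Approximate shape of an MCP "tools/call" request (JSON-RPC 2.0)
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add",
        "arguments": {"a": 3, "b": 5},
    },
}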
Prerequisites
Before we begin, ensure you have the following:
- Python 3.10+ installed (required by the MCP Python SDK)
- OpenAI API key (for using GPT models with LangChain)
- Basic familiarity with LangChain concepts
- Terminal access (examples shown on macOS)
Setting Up the Environment
First, let's create and configure our development environment:
# Create a virtual environment
python3 -m venv MCP_Demo
# Activate the virtual environment
source MCP_Demo/bin/activate
# Install required packages (langgraph provides the prebuilt ReAct agent used below)
pip install langchain-mcp-adapters langchain-openai langgraph
# Set your OpenAI API key
export OPENAI_API_KEY=your_api_key
Creating a Simple MCP Server
We'll start by building a basic MCP server that provides mathematical operations. Create a file named math_server.py:
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
This server exposes two mathematical tools: add and multiply. The FastMCP class simplifies server creation, handling protocol details automatically. Each function decorated with @mcp.tool() becomes available to clients, with documentation derived from its docstring.
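If you installed the MCP Python SDK with its optional CLI extras, you can also try the server interactively in the MCP Inspector before wiring up LangChain (this step is optional and assumes the mcp CLI is available on your system):
# Optional: inspect the server interactively
pip install "mcp[cli]"
mcp dev math_server.py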
Implementing the LangChain Client
Next, create a LangChain client to interact with the MCP server. Save this as client.py:
# client.py
# Create server parameters for stdio connection
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
import asyncio

model = ChatOpenAI(model="gpt-4o")

# Configure server parameters
server_params = StdioServerParameters(
    command="python",
    # Specify the path to your server file
    args=["math_server.py"],
)

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()
            # Load MCP tools into LangChain format
            tools = await load_mcp_tools(session)
            # Create and run the agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
            return agent_response

# Run the async function
if __name__ == "__main__":
    result = asyncio.run(run_agent())
    print(result)
This client establishes a connection to the MCP server, loads the available tools, and creates a LangChain agent that can use these tools to solve problems.
Running the Example
Because the client uses the stdio transport, it launches math_server.py as a subprocess automatically; you don't need to start the server in a separate terminal. Just run the client:
python3 client.py
The client will invoke the LangChain agent, which will:
1. Parse the question "(3 + 5) x 12"
2. Call the add tool with arguments 3 and 5, getting the result 8
3. Call the multiply tool with arguments 8 and 12
4. Return the final answer: 96
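The printed result is the agent's full message state, which can be verbose. Assuming the default create_react_agent state shape, you can pull out just the final reply:
# The agent returns a dict with a "messages" list; the last entry
# holds the model's final answer
final_message = result["messages"][-1]
print(final_message.content)  # e.g. "(3 + 5) x 12 = 96"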
Advanced MCP Server Implementation
Let's expand our implementation to create a more sophisticated MCP server that provides database access. This example demonstrates how to build connectors to external systems:
# db_server.py
from mcp.server.fastmcp import FastMCP
import sqlite3
from typing import List, Dict, Any

class DatabaseConnector:
    def __init__(self, db_path: str):
        self.conn = sqlite3.connect(db_path)
        self.cursor = self.conn.cursor()

    def execute_query(self, query: str) -> List[Dict[str, Any]]:
        self.cursor.execute(query)
        columns = [desc[0] for desc in self.cursor.description]
        results = []
        for row in self.cursor.fetchall():
            results.append({columns[i]: row[i] for i in range(len(columns))})
        return results

mcp = FastMCP("DatabaseTools")
db_connector = DatabaseConnector("example.db")

@mcp.tool()
def run_sql_query(query: str) -> List[Dict[str, Any]]:
    """Execute an SQL query on the database and return results"""
    try:
        return db_connector.execute_query(query)
    except Exception as e:
        # Wrap the error in a list so the declared return type still holds
        return [{"error": str(e)}]

if __name__ == "__main__":
    mcp.run(transport="stdio")
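The server assumes example.db already exists. For a self-contained demo, you could seed it with a small database first; the schema below is hypothetical, chosen to match the customer/order query used later:
# seed_db.py - create a sample database (hypothetical schema)
import sqlite3

conn = sqlite3.connect("example.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers (name) VALUES ('Ada'), ('Grace');
    INSERT INTO orders (customer_id, amount) VALUES (1, 120.0), (1, 35.5), (2, 80.0);
""")
conn.commit()
conn.close()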
Integrating Multiple MCP Servers with LangChain
For more complex applications, you might need to integrate multiple MCP servers. Here's how to create a client that connects to multiple servers:
# multi_server_client.py
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
import asyncio

# Define our server configurations
servers = [
    {
        "name": "math",
        "params": StdioServerParameters(
            command="python",
            args=["math_server.py"]
        )
    },
    {
        "name": "database",
        "params": StdioServerParameters(
            command="python",
            args=["db_server.py"]
        )
    }
]

async def connect_to_server(server_config, stack):
    """Connect to a single MCP server and load its tools.

    The shared AsyncExitStack keeps the transport and session open;
    closing the stack tears down every connection in reverse order.
    """
    read, write = await stack.enter_async_context(
        stdio_client(server_config["params"])
    )
    session = await stack.enter_async_context(ClientSession(read, write))
    await session.initialize()
    tools = await load_mcp_tools(session)
    return {"name": server_config["name"], "session": session, "tools": tools}

async def run_multi_server_agent():
    async with AsyncExitStack() as stack:
        # Connect to all servers; the stack guarantees cleanup on exit
        connections = [await connect_to_server(server, stack) for server in servers]
        # Collect all tools from all servers
        all_tools = []
        for connection in connections:
            all_tools.extend(connection["tools"])
        # Create the agent with all tools
        model = ChatOpenAI(model="gpt-4o")
        agent = create_react_agent(model, all_tools)
        # Run the agent with a complex query that might use multiple servers
        return await agent.ainvoke({
            "messages": "Find the customers who've spent more than the average order value and calculate their total spend."
        })

# Run the multi-server agent
if __name__ == "__main__":
    result = asyncio.run(run_multi_server_agent())
    print(result)
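Recent versions of langchain-mcp-adapters also ship a MultiServerMCPClient helper that performs this wiring for you. Its API has changed between releases, so treat the following as a sketch and check the documentation for your installed version:
# Sketch using MultiServerMCPClient (API varies across adapter versions)
from langchain_mcp_adapters.client import MultiServerMCPClient

async def get_all_tools():
    client = MultiServerMCPClient({
        "math": {"command": "python", "args": ["math_server.py"], "transport": "stdio"},
        "database": {"command": "python", "args": ["db_server.py"], "transport": "stdio"},
    })
    return await client.get_tools()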
Error Handling and Fallback Strategies
Robust MCP implementations should include error handling. Here's an enhanced version of the client that demonstrates error handling and fallback strategies:
# robust_client.py
# Assumes the same setup as client.py: model, server_params, stdio_client,
# ClientSession, load_mcp_tools, and create_react_agent are already imported
from langchain_core.messages import HumanMessage

async def run_agent_with_fallbacks():
    try:
        # Attempt primary connection
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                try:
                    await session.initialize()
                    tools = await load_mcp_tools(session)
                    agent = create_react_agent(model, tools)
                    return await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
                except Exception as e:
                    print(f"Error using MCP tools: {e}")
                    # Fall back to a direct model call without tools
                    return await model.ainvoke([
                        HumanMessage(content="what's (3 + 5) x 12?")
                    ])
    except Exception as connection_error:
        print(f"Connection error: {connection_error}")
        # Ultimate fallback
        return {"error": "Could not establish connection to MCP server"}
Security Considerations
When implementing MCP with LangChain, consider these security best practices:
- Input Validation: Always validate inputs to MCP tools to prevent injection attacks (see the sketch after this list)
- Tool Permissions: Implement fine-grained permissions for each tool
- Rate Limiting: Apply rate limits to prevent abuse of tools
- Authentication: Implement proper authentication between clients and servers
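Taking the first item as an example, here is a minimal input-validation sketch for the run_sql_query tool from earlier (the validation rules are illustrative; tighten them for your schema):
import re

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|attach|pragma)\b", re.IGNORECASE)

def validate_read_only(query: str) -> None:
    """Reject anything that is not a single SELECT statement."""
    if not query.lstrip().lower().startswith("select") or FORBIDDEN.search(query):
        raise ValueError("Only read-only SELECT queries are allowed")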
The FastMCP class doesn't ship a built-in permission system, so here is a minimal sketch that layers permission checks on top of @mcp.tool() (GRANTED_PERMISSIONS and require_permission are illustrative; in practice the granted set would come from your authentication layer):
import functools
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("SecureTools")

# Permissions granted to the current client (illustrative; derive this
# from your authentication layer in practice)
GRANTED_PERMISSIONS = {"read"}

def require_permission(permission: str):
    """Reject tool calls that lack the given permission."""
    def decorator(fn):
        @functools.wraps(fn)  # preserves the signature FastMCP inspects
        def wrapper(*args, **kwargs):
            if permission not in GRANTED_PERMISSIONS:
                raise PermissionError(f"'{permission}' permission required")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@mcp.tool()
@require_permission("read")
def get_data(key: str) -> str:
    """Get data by key (requires read permission)"""
    # Implementation...
    return f"Data for {key}"

@mcp.tool()
@require_permission("write")
def update_data(key: str, value: str) -> bool:
    """Update data (requires write permission)"""
    # Implementation...
    return True
Performance Optimization
For production deployments, consider these performance optimizations:
- Connection Pooling: Reuse MCP connections rather than creating new ones for each request
- Batch Processing: Group multiple tool calls when possible
- Asynchronous Processing: Use asyncio to handle multiple requests concurrently
Example of connection pooling:
# connection_pool.py
# Assumes server_params from client.py
import asyncio
from contextlib import AsyncExitStack
from mcp import ClientSession
from mcp.client.stdio import stdio_client

class MCPConnectionPool:
    def __init__(self, max_connections=10):
        self.available_connections = asyncio.Queue(max_connections)
        self.max_connections = max_connections
        self.current_connections = 0
        # One exit stack per session so connections can be closed cleanly
        self._stacks = []

    async def initialize(self):
        # Pre-create some connections
        for _ in range(3):  # Start with 3 connections
            await self._create_connection()

    async def _create_connection(self):
        if self.current_connections >= self.max_connections:
            raise Exception("Maximum connections reached")
        stack = AsyncExitStack()
        read, write = await stack.enter_async_context(stdio_client(server_params))
        session = await stack.enter_async_context(ClientSession(read, write))
        await session.initialize()
        self._stacks.append(stack)
        self.current_connections += 1
        await self.available_connections.put(session)

    async def get_connection(self):
        if self.available_connections.empty() and self.current_connections < self.max_connections:
            await self._create_connection()
        return await self.available_connections.get()

    async def release_connection(self, connection):
        await self.available_connections.put(connection)

    async def close(self):
        # Tear down every pooled connection
        for stack in self._stacks:
            await stack.aclose()
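A usage sketch: borrow a session, load its tools, and always return the session to the pool (use_pool and its flow are illustrative):
async def use_pool():
    pool = MCPConnectionPool(max_connections=10)
    await pool.initialize()
    session = await pool.get_connection()
    try:
        tools = await load_mcp_tools(session)
        # ... create an agent and run queries with these tools ...
    finally:
        await pool.release_connection(session)
    # Close the pool from the same task that created it
    await pool.close()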
Testing MCP Implementations
Thorough testing is crucial for reliable MCP implementations. Here's a testing approach using pytest:
# test_mcp.py
import pytest
import pytest_asyncio  # provides the async fixture decorator
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools

@pytest_asyncio.fixture
async def mcp_session():
    server_params = StdioServerParameters(
        command="python",
        args=["math_server.py"],
    )
    # Entering and exiting the context managers in the same task
    # ensures the transport shuts down cleanly
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            yield session

@pytest.mark.asyncio
async def test_add_tool(mcp_session):
    tools = await load_mcp_tools(mcp_session)
    add_tool = next(tool for tool in tools if tool.name == "add")
    result = await add_tool.ainvoke({"a": 5, "b": 7})
    # MCP returns tool output as text content, so compare as strings
    assert str(result) == "12"

@pytest.mark.asyncio
async def test_multiply_tool(mcp_session):
    tools = await load_mcp_tools(mcp_session)
    multiply_tool = next(tool for tool in tools if tool.name == "multiply")
    result = await multiply_tool.ainvoke({"a": 6, "b": 8})
    assert str(result) == "48"
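Install the test dependencies and run the suite (pytest-asyncio is required for the async fixture and tests):
pip install pytest pytest-asyncio
pytest test_mcp.py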
Conclusion
The Model Context Protocol provides a powerful framework for connecting LangChain applications with external tools and data sources. By standardizing these connections, MCP enables developers to create sophisticated AI agents that can seamlessly interact with their environment.
The combination of LangChain's agent capabilities with MCP's connectivity creates a foundation for building truly powerful, context-aware applications. As the MCP ecosystem continues to grow, we can expect more pre-built servers and tools to emerge, further simplifying the development process.
This tutorial has covered the fundamental concepts and implementation details of using MCP with LangChain, from basic setup to advanced patterns like connection pooling and error handling. By following these practices, you can create robust, production-ready applications that leverage the best of both technologies.
For further exploration, consider investigating the growing ecosystem of MCP servers available on GitHub, or contribute your own servers to the community. The future of AI agents lies in their ability to effectively leverage external tools and data, and MCP is a significant step toward making that vision a reality.