The landscape of Large Language Models (LLMs) is evolving rapidly, moving beyond simple text generation towards complex interactions with external systems and data sources. Facilitating this interaction requires a standardized approach, a common language for LLMs to request information and trigger actions. This is where the Model Context Protocol (MCP) comes in, designed as a universal standard – often likened to "the USB-C port for AI" – enabling seamless communication between LLMs and the resources they need.
While MCP provides the specification, building servers and clients that adhere to it can involve significant boilerplate code and protocol management. This is where FastMCP shines. FastMCP is a high-level, Pythonic framework designed to drastically simplify the creation of MCP servers and clients. It handles the underlying complexities of the protocol, allowing developers to focus on defining the valuable tools, data resources, and interaction patterns they want to expose to LLMs.

What is the Model Context Protocol (MCP)?
Before diving deeper into FastMCP, it's essential to grasp the core concepts of MCP itself. MCP defines a standardized way for LLM applications (clients) to interact with external systems (servers). An MCP server can expose several key components:
- Tools: These are essentially functions that an LLM can request the server to execute. Think of them like POST endpoints in a traditional API. They perform actions, potentially interact with other systems (databases, APIs, hardware), and return results. For example, a tool could send an email, query a database, or perform a calculation.
- Resources: These expose data that an LLM can read or retrieve. Similar to GET endpoints, resources provide information to enrich the LLM's context. This could be anything from configuration files and user profiles to real-time data streams.
- Prompts: These are reusable templates for structuring interactions with the LLM. They help guide the conversation and ensure consistent outputs for specific tasks.
- Context: Servers can provide contextual information, including instructions on how to best interact with the available tools and resources.
MCP aims to create a robust and secure ecosystem where LLMs can reliably access and utilize external capabilities.
Why Choose FastMCP?

While you could implement the MCP specification directly using lower-level SDKs, FastMCP offers compelling advantages, particularly for Python developers:
- 🚀 Fast Development: Its high-level interface significantly reduces the amount of code needed, accelerating the development process. Often, defining a tool or resource is as simple as decorating a standard Python function.
- 🍀 Simplicity: FastMCP abstracts away the complex details of server setup, protocol handling, content types, and error management, minimizing boilerplate.
- 🐍 Pythonic: Designed with Python best practices in mind, it feels natural and intuitive for developers familiar with the language, leveraging features like type hints and decorators.
- 🔍 Complete: FastMCP aims to provide a comprehensive implementation of the core MCP specification, ensuring compatibility and access to the protocol's full potential.
FastMCP version 1 proved highly successful and is now integrated into the official MCP Python SDK. Version 2 builds upon this foundation, introducing advanced features focused on simplifying server interactions, such as flexible clients, server proxying, and composition patterns.
How to Install FastMCP
Getting FastMCP set up in your Python environment is straightforward. The recommended method uses uv, a fast Python package installer and resolver.
1. Using uv (Recommended):
If you're managing dependencies for a project, add FastMCP using:
uv add fastmcp
Alternatively, install it directly into your environment:
uv pip install fastmcp
2. Using pip:
If you prefer using pip, you can install FastMCP with:
pip install fastmcp
3. Verifying Installation:
After installation, you can verify that FastMCP is correctly installed and check its version, along with the underlying MCP SDK version and your Python environment details, by running:
fastmcp version
You should see output similar to this:
$ fastmcp version
FastMCP version: 0.4.2.dev41+ga077727.d20250410
MCP version: 1.6.0
Python version: 3.12.2
Platform: macOS-15.3.1-arm64-arm-64bit
FastMCP root path: ~/Developer/fastmcp
4. Installing for Development:
If you intend to contribute to the FastMCP project itself, you'll want to set up a development environment:
git clone https://github.com/jlowin/fastmcp.git
cd fastmcp
uv sync
This clones the repository, navigates into the directory, and uses uv sync to install all necessary dependencies, including development tools, within a virtual environment. You can then run tests using pytest.
How to Use FastMCP: Building Your First Server
Now, let's dive into the practical aspects of using FastMCP.
1. Creating a Basic Server Instance:
The core of any FastMCP application is the FastMCP class. You start by creating an instance of this class.
Create a file named my_server.py:
# my_server.py
from fastmcp import FastMCP
import asyncio # We'll need this later for the client
# Instantiate the server, giving it a name
mcp = FastMCP(name="My First MCP Server")
print("FastMCP server object created.")
The FastMCP constructor accepts several helpful arguments:
- name (str, optional): A human-readable name for your server (defaults to "FastMCP"). Useful for identification in logs or client applications.
- instructions (str, optional): A description guiding clients on how to interact with the server, explaining its purpose or highlighting key functionalities.
- lifespan (callable, optional): An async context manager for handling server startup and shutdown logic (e.g., initializing database connections).
- tags (set[str], optional): Tags to categorize the server itself.
- **settings: Keyword arguments corresponding to ServerSettings (like port, host, log_level) can be passed directly to the constructor for configuration.
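As a rough sketch of how several of these arguments fit together (the server name, instructions, tags, and lifespan hooks below are purely illustrative, and the exact lifespan signature may vary between versions):
# config_example.py (illustrative only)
from contextlib import asynccontextmanager
from fastmcp import FastMCP

@asynccontextmanager
async def lifespan(server):
    # Hypothetical startup/shutdown hooks, e.g. opening and closing a database pool
    print("starting up")
    yield
    print("shutting down")

mcp = FastMCP(
    name="Inventory Server",
    instructions="Use the lookup tools before answering stock questions.",
    lifespan=lifespan,
    tags={"inventory", "demo"},
    port=9000,         # forwarded to ServerSettings
    log_level="INFO",  # forwarded to ServerSettings
)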
2. Adding Components:
An empty server isn't very useful. Let's add the core MCP components.
Adding a Tool: Tools are functions exposed to the client. Use the @mcp.tool() decorator. FastMCP uses Python type hints to define the expected input parameters and return type for the client.
# my_server.py (continued)

@mcp.tool()
def greet(name: str) -> str:
    """Returns a simple greeting."""
    return f"Hello, {name}!"

@mcp.tool()
def add(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

print("Tools 'greet' and 'add' added.")
Adding a Resource: Resources expose data via a URI. Use the @mcp.resource() decorator, providing the URI string.
# my_server.py (continued)

APP_CONFIG = {"theme": "dark", "version": "1.1", "feature_flags": ["new_dashboard"]}

@mcp.resource("data://config")
def get_config() -> dict:
    """Provides the application configuration."""
    return APP_CONFIG

print("Resource 'data://config' added.")
Adding a Resource Template: These are like dynamic resources where parts of the URI act as parameters.
# my_server.py (continued)

USER_PROFILES = {
    101: {"name": "Alice", "status": "active"},
    102: {"name": "Bob", "status": "inactive"},
}

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: int) -> dict:
    """Retrieves a user's profile by their ID."""
    # The {user_id} from the URI is automatically passed as an argument
    return USER_PROFILES.get(user_id, {"error": "User not found"})

print("Resource template 'users://{user_id}/profile' added.")
Adding a Prompt: Prompts define reusable interaction patterns.
# my_server.py (continued)

@mcp.prompt("summarize")
async def summarize_prompt(text: str) -> list[dict]:
    """Generates a prompt to summarize the provided text."""
    return [
        {"role": "system", "content": "You are a helpful assistant skilled at summarization."},
        {"role": "user", "content": f"Please summarize the following text:\n\n{text}"},
    ]

print("Prompt 'summarize' added.")
3. Testing the Server (In-Process):
Before running the server externally, you can test its components directly within the same Python script using the Client class. This is useful for quick checks and unit testing.
# my_server.py (continued)
from fastmcp import Client  # Import the client

async def test_server_locally():
    print("\n--- Testing Server Locally ---")
    # Point the client directly at the server object
    client = Client(mcp)

    # Clients are asynchronous, so use an async context manager
    async with client:
        # Call the 'greet' tool
        greet_result = await client.call_tool("greet", {"name": "FastMCP User"})
        print(f"greet result: {greet_result}")

        # Call the 'add' tool
        add_result = await client.call_tool("add", {"a": 5, "b": 7})
        print(f"add result: {add_result}")

        # Read the 'config' resource
        config_data = await client.read_resource("data://config")
        print(f"config resource: {config_data}")

        # Read a user profile using the template
        user_profile = await client.read_resource("users://101/profile")
        print(f"User 101 profile: {user_profile}")

        # Get the 'summarize' prompt structure (doesn't execute the LLM call here)
        prompt_messages = await client.get_prompt("summarize", {"text": "This is some text."})
        print(f"Summarize prompt structure: {prompt_messages}")

# Run the local test function
# asyncio.run(test_server_locally())
# Commented out for now, we'll focus on running the server next
Note the use of async and await. FastMCP clients operate asynchronously, requiring an async function and using async with client: to manage the client's lifecycle.
4. Running the Server:
To make your MCP server accessible to external clients (like an LLM application), you need to run it. There are two primary ways:
Standard Python Execution (Recommended for Compatibility):
Add the following if __name__ == "__main__": block to your my_server.py file. This is the standard Python practice for making a script executable.
# my_server.py (at the end of the file)
if __name__ == "__main__":
    print("\n--- Starting FastMCP Server via __main__ ---")
    # This starts the server, typically using the stdio transport by default
    mcp.run()
To run the server, execute the script from your terminal:
python my_server.py
This command starts the MCP server, listening for client connections using the default stdio (standard input/output) transport mechanism. This method ensures your server runs consistently for various clients that expect to execute a Python script.
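If you want the same script to serve over SSE instead of stdio, a sketch of an alternative __main__ block follows; this assumes your FastMCP version lets mcp.run() accept transport settings as keyword arguments (older versions may require setting host and port via ServerSettings instead):
# my_server.py (alternative __main__ block, illustrative)
if __name__ == "__main__":
    # Serve over SSE on localhost:8080 instead of the stdio default
    mcp.run(transport="sse", host="127.0.0.1", port=8080)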
Using the FastMCP CLI:
FastMCP provides a command-line interface for running servers, which offers more flexibility and control, especially regarding transport options.
# Run the server using stdio (default)
fastmcp run my_server.py:mcp
# Run the server using Server-Sent Events (SSE) on port 8080
fastmcp run my_server.py:mcp --transport sse --port 8080 --host 0.0.0.0
# Run with a different log level
fastmcp run my_server.py:mcp --transport sse --log-level DEBUG
Key points about the CLI:
- my_server.py:mcp: Specifies the file (my_server.py) and the FastMCP server object within that file (mcp). If you omit :mcp, FastMCP will try to automatically find an object named mcp, app, or server.
- The if __name__ == "__main__": block is not required when using fastmcp run; the CLI directly finds and runs the specified server object.
- --transport: Selects the communication protocol (stdio, sse). SSE is common for web-based interactions.
- --port, --host, --log-level: Configure transport and logging settings.
5. Interacting with a Running Server (Client):
Once your server is running (either via python my_server.py or fastmcp run), you can create a separate client script to interact with it.
Create a new file, my_client.py:
# my_client.py
from fastmcp import Client
import asyncio

async def interact_with_server():
    print("--- Creating Client ---")

    # Option 1: Connect to a server run via `python my_server.py` (uses stdio)
    # client = Client("my_server.py")

    # Option 2: Connect to a server run via `fastmcp run ... --transport sse --port 8080`
    client = Client("http://localhost:8080")  # Use the correct URL/port

    print(f"Client configured to connect to: {client.target}")

    try:
        async with client:
            print("--- Client Connected ---")

            # Call the 'greet' tool
            greet_result = await client.call_tool("greet", {"name": "Remote Client"})
            print(f"greet result: {greet_result}")

            # Read the 'config' resource
            config_data = await client.read_resource("data://config")
            print(f"config resource: {config_data}")

            # Read user profile 102
            profile_102 = await client.read_resource("users://102/profile")
            print(f"User 102 profile: {profile_102}")
    except Exception as e:
        print(f"An error occurred: {e}")
    finally:
        print("--- Client Interaction Finished ---")

if __name__ == "__main__":
    asyncio.run(interact_with_server())
Run this client script while the server is running in another terminal:
python my_client.py
The client will connect to the running server (make sure the Client(...) target matches how the server is running – file path for stdio, URL for sse), execute the tool calls and resource reads, and print the results.
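Beyond calling components by name, a client can also ask the server what it offers. A short sketch of a standalone discovery script (the URL is a placeholder for however your server is actually running):
# discover.py (optional, illustrative)
from fastmcp import Client
import asyncio

async def discover():
    client = Client("http://localhost:8080")  # placeholder target
    async with client:
        tools = await client.list_tools()          # registered tools
        resources = await client.list_resources()  # registered resources
        prompts = await client.list_prompts()      # registered prompts
        print("Tools:", [t.name for t in tools])
        print("Resources:", [str(r.uri) for r in resources])
        print("Prompts:", [p.name for p in prompts])

if __name__ == "__main__":
    asyncio.run(discover())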
6. Server Configuration (ServerSettings):
You can fine-tune server behavior using ServerSettings. Settings are applied in the following order of precedence:
- Keyword arguments during FastMCP initialization (highest precedence).
- Environment variables (prefixed with FASTMCP_SERVER_, e.g., FASTMCP_SERVER_PORT=8888).
- Values loaded from a .env file in the working directory.
- Default values (lowest precedence).
Example configuring during initialization:
from fastmcp import FastMCP

mcp_configured = FastMCP(
    name="ConfiguredServer",
    port=8080,  # Sets the default SSE port
    host="127.0.0.1",  # Sets the default SSE host
    log_level="DEBUG",  # Sets the logging level
    on_duplicate_tools="warn",  # Warn if tools with the same name are registered (options: 'error', 'warn', 'ignore')
)

# Access settings via the .settings attribute
print(f"Configured Port: {mcp_configured.settings.port}")  # Output: 8080
print(f"Duplicate Tool Policy: {mcp_configured.settings.on_duplicate_tools}")  # Output: warn
Key configuration options include host, port, log_level, and policies for handling duplicate component names (on_duplicate_tools, on_duplicate_resources, on_duplicate_prompts).
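To see the precedence rules in action, here is a minimal sketch; it assumes settings are resolved when the server object is created, so the environment variable is set before anything from fastmcp is imported:
# precedence_demo.py (illustrative)
import os

# Environment variable (middle precedence) -- set before importing fastmcp
os.environ["FASTMCP_SERVER_PORT"] = "9000"

from fastmcp import FastMCP

defaulted = FastMCP(name="EnvConfigured")                # no kwarg: env var applies
overridden = FastMCP(name="KwargConfigured", port=8080)  # kwarg wins over env var

print(defaulted.settings.port)   # expected: 9000
print(overridden.settings.port)  # expected: 8080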
What Else Can You Do with FastMCP?
FastMCP also supports more advanced use cases:
- Composition: Combine multiple FastMCP servers. main.mount("sub", sub_server) creates a live link, while main.import_server(sub_server) copies components. This aids modularity.
- Proxying: Use FastMCP.from_client(client) to create a FastMCP server instance that acts as a proxy for another MCP server (local or remote). This is useful for bridging transports (e.g., exposing a remote SSE server via local stdio) or adding a unified frontend. A brief sketch of both patterns follows this list.
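A sketch of both patterns, using only the calls named above; the server names, prefix, and URL are illustrative, and the exact mount/from_client signatures may differ slightly between FastMCP versions:
# advanced_patterns.py (illustrative)
from fastmcp import FastMCP, Client

main = FastMCP(name="Main Server")
sub = FastMCP(name="Sub Server")

@sub.tool()
def ping() -> str:
    """A trivial tool living on the sub-server."""
    return "pong"

# Composition: expose sub's components under the "sub" prefix (live link)
main.mount("sub", sub)

# Proxying: wrap a client to another MCP server in a local FastMCP frontend
remote_client = Client("http://localhost:8080")  # placeholder URL
proxy = FastMCP.from_client(remote_client)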
Conclusion
FastMCP significantly lowers the barrier to entry for building powerful, context-aware LLM applications by simplifying the implementation of the Model Context Protocol. Its Pythonic design, focus on reducing boilerplate, and comprehensive feature set make it an excellent choice for developers looking to equip LLMs with custom tools and data access securely and efficiently.
By following the steps outlined above – installing FastMCP, creating a server instance, adding tools and resources using simple decorators, and running the server – you can quickly start building your own MCP-enabled applications. Whether you're creating simple utility tools or complex, data-intensive integrations, FastMCP provides the foundation for robust and scalable LLM interactions.