Apidog


How to Connect Any LLM to Any MCP Server Using MCP-Use

Learn to connect any LLM to MCP servers using MCP-Use in this beginner’s guide. Covers setup, API keys, Scalar for spec validation, and more!

Ashley Goolam

Updated on April 18, 2025

Would you like to hook up your favorite large language model (LLM) to a toolbox of superpowers, like web scraping or file ops, without getting tangled in code? That’s where MCP-Use comes in—a slick, open-source Python library that lets you connect any LLM to any MCP server with ease. Think of it as a universal adapter for your API-powered AI dreams! In this beginner’s guide, I will be walking you through how to use MCP-Use to bridge LLMs and Model Context Protocol (MCP) servers. Whether you’re a coder or just curious, this tutorial’s got you covered. Ready to make your LLM a multitasking rockstar? Let’s dive in!

💡
Before we get rolling with MCP-Use, let’s give a quick shoutout to Apidog—an awesome tool for API lovers! It makes designing, testing, and documenting APIs a total breeze, perfect for when you’re building projects with MCP servers. Check it out at apidog.com—it’s a dev’s best friend! 

Now, let’s jump into the MCP-Use magic…

What is MCP-Use? Your AI-to-Tool Connector

So, what’s MCP-Use? It’s a Python library that acts like a bridge, letting any LLM (think Claude, GPT-4o, or DeepSeek) talk to MCP servers—specialized tools that give AI access to stuff like web browsers, file systems, or even Airbnb searches. Built on LangChain adapters, MCP-Use simplifies connecting your LLM’s API to these servers, so you can build custom agents that do more than just chat. Users call it “the open-source way to build local MCP clients,” and they’re not wrong—it’s 100% free and flexible.


Why bother? MCP servers are like USB ports for AI, letting your LLM call functions, fetch data, or automate tasks via standardized API-like interfaces. With MCP-Use, you don’t need to wrestle with custom integrations—just plug and play. Let’s get you set up!

Installing MCP-Use: Quick and Painless

Getting MCP-Use running is a snap, especially if you’re comfy with Python. The GitHub repo (github.com/pietrozullo/mcp-use) lays it out clearly. Here’s how to start.

Step 1: Prerequisites

You’ll need:

  • Python: Version 3.11 or higher. Check with python --version. No Python? Grab it from python.org.
  • pip: Python’s package manager (usually comes with Python).
  • Git (optional): For cloning the repo if you want the latest code.
  • API Keys: For premium LLMs like OpenAI or Anthropic. We’ll cover this later.

Step 2: Install MCP-Use

Let’s use pip in a virtual environment to keep things tidy:

Create a Project Folder:

mkdir mcp-use-project
cd mcp-use-project

Set Up a Virtual Environment:

python -m venv mcp-env

Activate it:

  • Mac/Linux: source mcp-env/bin/activate
  • Windows: mcp-env\Scripts\activate

Install MCP-Use:

pip install mcp-use

Or, if you want the bleeding-edge version, clone the repo:

git clone https://github.com/pietrozullo/mcp-use.git
cd mcp-use
pip install .

Add LangChain Providers:
MCP-Use relies on LangChain for LLM connections. Install the provider for your LLM:

For example, for OpenAI:

pip install langchain-openai

Or, for Anthropic:

pip install langchain-anthropic

Verify Installation:
Run:

python -c "import mcp_use; print(mcp_use.__version__)"

You should see a version number printed. If not, double-check your Python version or pip.

That’s it! MCP-Use is ready to connect your LLM to MCP servers. Took me about five minutes—how’s your setup going?


Connecting an LLM to an MCP Server with MCP-Use

Now, let’s make the magic happen: connecting an LLM to an MCP server using MCP-Use. We’ll use a simple example—hooking up OpenAI’s GPT-4o to a Playwright MCP server for web browsing.

Step 1: Get Your LLM API Key

For GPT-4o, grab an API key from platform.openai.com. Sign up, create a key, and save it securely. Other LLMs like Claude (via console.anthropic.com) or DeepSeek (via platform.deepseek.com) will work too.


Step 2: Set Up Environment Variables

MCP-Use loves .env files for secure API key storage. Create a .env file in your project folder:

touch .env

Add your key and save it:

OPENAI_API_KEY=sk-xxx

Important: keep your API keys out of Git by adding .env to your .gitignore.
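The example script later in this guide relies on python-dotenv's load_dotenv() to pull keys from .env into the environment. If you want your script to fail fast with a clear message when a key is missing, a small stdlib-only helper does the trick (the function name here is my own, not part of MCP-Use):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return an API key from the environment, or fail with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key
```

Call require_api_key() right after load_dotenv() and you'll get an actionable error instead of a cryptic authentication failure later on.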

Step 3: Configure the MCP Server

MCP servers provide tools your LLM can use. We’ll use the Playwright MCP server for browser automation. Create a config file called browser_mcp.json:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}

This tells MCP-Use to run Playwright’s MCP server. Save it in your project folder.
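If you'd rather generate the config from Python than hand-edit JSON, this is purely a convenience; the file contents end up identical to the block above:

```python
import json

# The same Playwright server entry as the hand-written browser_mcp.json above.
config = {
    "mcpServers": {
        "playwright": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
            "env": {"DISPLAY": ":1"},
        }
    }
}

with open("browser_mcp.json", "w") as f:
    json.dump(config, f, indent=2)
```

The MCP-Use README also shows an MCPClient.from_dict(config) shortcut that skips the file entirely, if you prefer keeping the config in code.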

Step 4: Write Your First MCP-Use Script

Let’s create a Python script to connect GPT-4o to the Playwright server and find a restaurant. Create mcp_example.py:

import asyncio
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from mcp_use import MCPAgent, MCPClient

async def main():
    # Load environment variables
    load_dotenv()

    # Create MCPClient from config file
    client = MCPClient.from_config_file("browser_mcp.json")

    # Create LLM (ensure model supports tool calling)
    llm = ChatOpenAI(model="gpt-4o")

    # Create agent
    agent = MCPAgent(llm=llm, client=client, max_steps=30)

    # Run a query
    result = await agent.run("Find the best restaurant in San Francisco")
    print(f"\nResult: {result}")

if __name__ == "__main__":
    asyncio.run(main())

This script:

  • Loads your API key from .env.
  • Sets up an MCP client with the Playwright server.
  • Connects GPT-4o via LangChain.
  • Runs a query to search for restaurants.

Step 5: Run It

Make sure your virtual environment is active, then:

python mcp_example.py

MCP-Use will spin up the Playwright server, let GPT-4o browse the web, and print something like: “Result: The best restaurant in San Francisco is Gary Danko, known for its exquisite tasting menu.” (Your results may vary!) I ran this and got a solid recommendation in under a minute—pretty cool, right?

Connecting to Multiple MCP Servers

MCP-Use shines when you connect to multiple servers for complex tasks. Let’s add an Airbnb MCP server to our config for accommodation searches. Update browser_mcp.json:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    },
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]
    }
  }
}

Rerun mcp_example.py with a new query:

result = await agent.run("Find a restaurant and an Airbnb in San Francisco")

MCP-Use lets the LLM use both servers—Playwright for restaurant searches, Airbnb for lodging. The agent decides which server to call, making your AI super versatile.
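One practical note: each entry in mcpServers spawns its own subprocess, so it's worth shutting sessions down when your script finishes. Here's a sketch of the multi-server run with cleanup, assuming the close_all_sessions() method shown in the MCP-Use README:

```python
import asyncio

async def main():
    # Deferred imports: fail fast with a clear error if a dependency is missing.
    from dotenv import load_dotenv
    from langchain_openai import ChatOpenAI
    from mcp_use import MCPAgent, MCPClient

    load_dotenv()
    client = MCPClient.from_config_file("browser_mcp.json")
    agent = MCPAgent(llm=ChatOpenAI(model="gpt-4o"), client=client, max_steps=30)
    try:
        result = await agent.run("Find a restaurant and an Airbnb in San Francisco")
        print(f"\nResult: {result}")
    finally:
        # Shut down every spawned MCP server subprocess cleanly.
        await client.close_all_sessions()
```

Run it the same way as before, with asyncio.run(main()). The try/finally guarantees the Playwright and Airbnb processes don't linger if the query errors out.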


Why MCP-Use is Awesome for Beginners

MCP-Use is a beginner’s dream because:

  • Simple Setup: One pip install and a short script get you going.
  • Flexible: Works with any tool-calling LLM and any MCP server, from web browsers to GitHub’s issue tracker.
  • Open-Source: Free and customizable, with a welcoming GitHub community.

Compared to custom API integrations, MCP-Use is far less of a headache, letting you focus on building cool stuff.

Pro Tips for MCP-Use Success

  • Check Model Compatibility: Only LLMs with tool-calling support (like GPT-4o or Claude 3.7 Sonnet) will work.
  • Use Scalar for Specs: Validate server API specs to avoid surprises.
  • Explore MCP Servers: Browse mcp.so for servers like Firecrawl (web scraping) or ElevenLabs (text-to-speech).
  • Join the Community: Report bugs or suggest features on the MCP-Use GitHub.
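One more trick worth knowing: the MCP-Use README documents a disallowed_tools parameter on MCPAgent for keeping an agent away from sensitive tools. A hedged sketch (the helper function name is mine, and the tool names to block depend on which servers you configured):

```python
def build_restricted_agent(config_path: str = "browser_mcp.json"):
    """Build an agent that cannot use filesystem tools.

    Assumes the `disallowed_tools` parameter documented in the MCP-Use README.
    """
    # Deferred imports: fail with a clear error if a dependency is missing.
    from langchain_openai import ChatOpenAI
    from mcp_use import MCPAgent, MCPClient

    client = MCPClient.from_config_file(config_path)
    return MCPAgent(
        llm=ChatOpenAI(model="gpt-4o"),
        client=client,
        disallowed_tools=["file_system"],  # example tool name to block
    )
```

Handy when you want a browsing agent that can't touch local files, for instance.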

Conclusion: Your MCP-Use Adventure Awaits

Congrats—you’re now ready to supercharge any LLM with MCP-Use! From connecting GPT-4o to a Playwright server, you’ve got the tools to build AI agents that browse, search, and more. Try adding a GitHub MCP server next or ask your agent to plan a whole trip. The MCP-Use repo has more examples, and the MCP community’s buzzing on X. And for extra API flair, don't forget to check out apidog.com.

