Using Mistral Agents API with MCP: How Good Is It?

Mark Ponomarev

28 May 2025

Artificial Intelligence (AI) is rapidly moving beyond simply generating text or recognizing images. The next frontier is about AI that can take action, solve problems, and interact with the world in meaningful ways. Mistral AI, a prominent name in the field, has taken a significant step in this direction with its Mistral Agents API. This powerful toolkit allows developers to build sophisticated AI agents that can do much more than traditional language models.

At its core, the Agents API is designed to overcome the limitations of standard AI models, which are often great at understanding and generating language but struggle with performing actions, remembering past interactions consistently, or using external tools effectively. The Mistral Agents API tackles these challenges by equipping its powerful language models with features like built-in connectors to various tools, persistent memory across conversations, and the ability to coordinate complex tasks.

Think of it like upgrading from a very knowledgeable librarian who can only talk about books to a team of expert researchers who can not only access information but also conduct experiments, write reports, and collaborate with each other. This new API serves as the foundation for creating enterprise-grade AI applications that can automate workflows, assist with complex decision-making, and provide truly interactive experiences.


What Makes Mistral Agents So Capable?

Traditional language models, while proficient at text generation, often fall short when it comes to executing actions or remembering information across extended interactions. The Mistral Agents API directly addresses these limitations by synergizing Mistral's cutting-edge language models with a suite of powerful features designed for agentic workflows.

Core Capabilities:

At its heart, the Agents API provides:

  1. Built-in connectors, giving agents direct access to external tools and services.
  2. Persistent memory, so context is retained reliably across conversations.
  3. Orchestration capabilities, allowing multiple specialized agents to coordinate on complex tasks.

This API is not merely an extension of their Chat Completion API; it's a dedicated framework specifically engineered to simplify the implementation of agentic use cases. It's designed to be the backbone of enterprise-grade agentic platforms, enabling businesses to deploy AI in more practical, impactful, and action-oriented ways.

Mistral Agents in Action: Real-World Applications

The versatility of the Agents API shows up in a range of applications: customer-support assistants that hand off technical or billing questions to specialist agents, and research or coding workflows that combine the API's built-in connectors with stateful, multi-turn conversations.

Memory, Context, and Stateful Conversations

A cornerstone of the Agents API is its robust conversation management system. It ensures that interactions are stateful, meaning context is retained over time. Developers can initiate conversations in two primary ways:

  1. With an Agent: By specifying an agent_id, you leverage the pre-configured capabilities, tools, and instructions of a specific agent.
  2. Direct Access: You can start a conversation by directly specifying the model and completion parameters, providing quick access to built-in connectors without a pre-defined agent.
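As a sketch, the two request payloads differ only in whether they reference a pre-created agent or specify the model inline. The field names below follow the cURL examples in this article; `<agent_id>` is a placeholder:

```python
import json

# Option 1: start a conversation with a pre-configured agent.
with_agent = {
    "inputs": "Who is Albert Einstein?",
    "stream": False,
    "agent_id": "<agent_id>",  # placeholder for a real agent ID
}

# Option 2: direct access, specifying the model and completion parameters.
direct_access = {
    "inputs": "Who is Albert Einstein?",
    "stream": False,
    "model": "mistral-medium-latest",
    "completion_args": {"temperature": 0.3},
}

print(json.dumps(with_agent))
print(json.dumps(direct_access))
```

Everything else about the conversation (entries, continuation, streaming) works the same way in both cases.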

Each conversation maintains a structured history through "conversation entries," ensuring context is meticulously preserved. This statefulness allows developers to view past conversations, continue any interaction seamlessly, or even branch off to initiate new conversational paths from any point in the history. Furthermore, the API supports streaming outputs, enabling real-time updates and dynamic interactions.

Agent Orchestration: The Power of Collaboration

The true differentiating power of the Agents API emerges in its ability to orchestrate multiple agents. This isn't about a single monolithic AI; it's about a symphony of specialized agents working in concert. Through dynamic orchestration, agents can be added or removed from a conversation as needed, each contributing its unique skills to tackle different facets of a complex problem.

To build an agentic workflow with handoffs:

  1. Create Agents: Define and create all necessary agents, each equipped with specific tools, models, and instructions tailored to their role.
  2. Define Handoffs: Specify which agents can delegate tasks to others. For example, a primary customer service agent might hand off a technical query to a specialized troubleshooting agent or a billing inquiry to a finance agent.

These handoffs enable a seamless chain of actions. A single user request can trigger a cascade of tasks across multiple agents, each autonomously handling its designated part. This collaborative approach unlocks unprecedented efficiency and effectiveness in problem-solving for sophisticated real-world applications.
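The two-step setup above can be sketched as a payload. The `handoffs` field (a list of agent IDs the agent may delegate to) is an assumption based on the workflow described here; check the API reference for the exact field name:

```python
import json

# Hypothetical agent definition for the customer-service example above.
# The specialist agent IDs would come from the create calls in step 1.
customer_service_agent = {
    "model": "mistral-medium-latest",
    "name": "Customer Service Agent",
    "instructions": "Triage requests; delegate technical and billing questions.",
    # Assumed field: agents this one is allowed to hand off to.
    "handoffs": ["<troubleshooting_agent_id>", "<finance_agent_id>"],
}

print(json.dumps(customer_service_agent, indent=2))
```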

Basic Usage of the Mistral Agents API

Having understood the capabilities of the Mistral Agents API, let's explore how to interact with it. The API introduces three new primary objects:

  1. Agents: pre-configured combinations of a model, instructions, tools, and completion settings.
  2. Conversations: the stateful histories of interactions with an agent or model.
  3. Entries: the individual messages and actions, from the user or an assistant, that make up a conversation.

Notably, you can leverage many features, like stateful conversations and built-in connectors, without explicitly creating and referencing a formal "Agent" object first. This provides flexibility for simpler use cases.

Creating an Agent

To define a specialized agent, you make a request to the API specifying several parameters:

  1. model: the Mistral model the agent should use (e.g., mistral-medium-latest).
  2. name and description: human-readable metadata identifying the agent.
  3. instructions (optional): the system-level guidance the agent follows.
  4. tools (optional): the tools the agent is allowed to call.
  5. completion_args (optional): sampler settings such as temperature and top_p.

Here’s an example cURL request to create a simple agent:

curl --location "https://api.mistral.ai/v1/agents" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
         "model": "mistral-medium-latest",
         "name": "Simple Agent",
         "description": "A simple Agent with persistent state."
     }'

Updating an Agent

Agents can be updated after creation. The arguments are the same as those for creation. This operation results in a new agent object with the updated settings, effectively allowing for versioning of your agents.

curl --location "https://api.mistral.ai/v1/agents/<agent_id>" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
         "completion_args": {
           "temperature": 0.3,
           "top_p": 0.95
         },
         "description": "An edited simple agent."
     }'

Managing Conversations

Once an agent is created (or if you're using direct access), you can initiate conversations.

Starting a Conversation:
You need to provide:

  1. inputs: the user's message, as a simple string or a list of messages.
  2. stream: whether the response should be streamed.
  3. agent_id (or, for direct access, model and completion parameters instead).

Example (simple string input):

curl --location "https://api.mistral.ai/v1/conversations" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
         "inputs": "Who is Albert Einstein?",
         "stream": false,
         "agent_id": "<agent_id>"
     }'

Continuing a Conversation:
To add to an existing conversation, send a request to that conversation's endpoint (identified by its conversation ID) with:

  1. inputs: the next message or instruction.
  2. stream: whether the response should be streamed.
  3. store: whether to persist the new entries in the conversation history.
  4. handoff_execution: where handoffs run, e.g., "server" to let Mistral's infrastructure handle them.

Example:

curl --location "https://api.mistral.ai/v1/conversations/<conv_id>" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
         "inputs": "Translate to French.",
         "stream": false,
         "store": true,
         "handoff_execution": "server"
     }'

Streaming Output

For real-time interactions, both starting and continuing conversations can be streamed by setting stream: true and ensuring the Accept header is text/event-stream.

curl --location "https://api.mistral.ai/v1/conversations" \
     --header 'Content-Type: application/json' \
     --header 'Accept: text/event-stream' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
         "inputs": "Who is Albert Einstein?",
         "stream": true,
         "agent_id": "ag_06811008e6e07cb48000fd3f133e1771"
     }'

When streaming, you'll receive various event types indicating the progress and content of the response, such as events marking the start and completion of the response, incremental message deltas as tokens are generated, and events reporting tool executions or agent handoffs.
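When consuming the stream over raw HTTP, each event arrives as a server-sent-events `data:` line carrying a JSON payload. A minimal parsing sketch (the event type and fields shown are illustrative stand-ins, not the API's exact schema):

```python
import json

def parse_sse_line(line: str) -> dict:
    """Strip the SSE 'data: ' prefix and decode the JSON payload."""
    assert line.startswith("data: "), "not an SSE data line"
    return json.loads(line[len("data: "):])

# Hypothetical event line as it might appear in the stream.
raw_line = 'data: {"type": "message.output.delta", "content": "Albert"}'
event = parse_sse_line(raw_line)
print(event["type"], event["content"])
```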

These basic operations form the foundation for building dynamic and interactive applications with Mistral agents.

Integrating Mistral Agents API with the Model Context Protocol (MCP)

While the built-in connectors offer significant power, the true extensibility of Mistral Agents shines when combined with the Model Context Protocol (MCP).

What is MCP?

The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with diverse external data sources, tools, and APIs. It provides a standardized, secure interface that allows AI systems to access and utilize real-world contextual information efficiently. Instead of building and maintaining numerous bespoke integrations, MCP offers a unified way for AI models to connect to live data and systems, leading to more relevant, accurate, and powerful responses. For detailed information, refer to the official Model Context Protocol documentation.
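Under the hood, MCP messages are JSON-RPC 2.0 objects exchanged over a transport such as stdio or SSE. For instance, a client discovering a server's tools sends a request like the following (the initialization handshake that precedes it is omitted for brevity):

```python
import json

# A JSON-RPC 2.0 request asking an MCP server to list its available tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
wire_message = json.dumps(request)
print(wire_message)
```

The server replies with a JSON-RPC response describing each tool's name, description, and input schema, which the client (here, the Mistral SDK) then exposes to the agent.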

Mistral's Python SDK provides seamless integration mechanisms for connecting agents with MCP Clients. This allows your agents to interact with any service or data source that exposes an MCP interface, whether it's a local tool, a third-party API, or a proprietary enterprise system.

We'll explore three common scenarios for using MCP with Mistral Agents: a local MCP server, a remote MCP server without authentication, and a remote MCP server with authentication. All examples will utilize asynchronous Python code.

Scenario 1: Using a Local MCP Server

Imagine you have a local script or service (e.g., a custom weather information provider) that you want your Mistral agent to interact with.

Step 1: Initialize the Mistral Client and Setup
Import necessary modules from mistralai and mcp. This includes Mistral, RunContext, StdioServerParameters (for local process-based MCP servers), and MCPClientSTDIO.

import asyncio
import os
from pathlib import Path
from mistralai import Mistral
from mistralai.extra.run.context import RunContext
from mcp import StdioServerParameters
from mistralai.extra.mcp.stdio import MCPClientSTDIO
from mistralai.types import BaseModel

cwd = Path(__file__).parent
MODEL = "mistral-medium-latest" # Or your preferred model

async def main_local_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    # Define parameters for the local MCP server (e.g., running a Python script)
    server_params = StdioServerParameters(
        command="python",
        args=[str((cwd / "mcp_servers/stdio_server.py").resolve())], # Path to your MCP server script
        env=None,
    )

    # Create an agent
    weather_agent = client.beta.agents.create(
        model=MODEL,
        name="Local Weather Teller",
        instructions="You can tell the weather using a local MCP tool.",
        description="Fetches weather from a local source.",
    )

    # Define expected output format (optional, but good for structured data)
    class WeatherResult(BaseModel):
        user: str
        location: str
        temperature: float

    # Create a Run Context
    async with RunContext(
        agent_id=weather_agent.id,
        output_format=WeatherResult, # Optional: For structured output
        continue_on_fn_error=True,
    ) as run_ctx:
        # Create and register MCP client
        mcp_client = MCPClientSTDIO(stdio_params=server_params)
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

        # Example of registering a local Python function as a tool
        import random
        @run_ctx.register_func
        def get_location(name: str) -> str:
            """Function to get a random location for a user."""
            return random.choice(["New York", "London", "Paris"])

        # Run the agent
        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me the weather in John's location currently.",
        )

        print("Local MCP - All run entries:")
        for entry in run_result.output_entries:
            print(f"{entry}\n")
        if run_result.output_as_model:
            print(f"Local MCP - Final model output: {run_result.output_as_model}")
        else:
            print(f"Local MCP - Final text output: {run_result.output_as_text}")

# if __name__ == "__main__":
#     asyncio.run(main_local_mcp())

In this setup, stdio_server.py would be your script implementing the MCP server logic, communicating over stdin/stdout. The RunContext manages the interaction, and register_mcp_client makes the local MCP server available as a tool to the agent. You can also register local Python functions directly using @run_ctx.register_func.

Streaming with a Local MCP Server:
To stream, use client.beta.conversations.run_stream_async and process events as they arrive:

    # Inside the RunContext, after registering the MCP client.
    # Note: the RunResult import path below is an assumption; check your SDK version.
    from mistralai.extra.run.result import RunResult

    events = await client.beta.conversations.run_stream_async(
        run_ctx=run_ctx,
        inputs="Tell me the weather in John's location currently, stream style.",
    )
    streamed_run_result = None
    async for event in events:
        if isinstance(event, RunResult):
            streamed_run_result = event  # the final aggregated result
        else:
            print(f"Stream event: {event}")
    if streamed_run_result:
        print(f"Final streamed output: {streamed_run_result.output_as_text}")

Scenario 2: Using a Remote MCP Server Without Authentication

Many public or internal services might expose an MCP interface over HTTP/SSE without requiring authentication.

from mistralai.extra.mcp.sse import MCPClientSSE, SSEServerParams

async def main_remote_no_auth_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    # Define the URL for the remote MCP server (e.g., Semgrep's public MCP)
    server_url = "https://mcp.semgrep.ai/sse"
    mcp_client = MCPClientSSE(sse_params=SSEServerParams(url=server_url, timeout=100))

    async with RunContext(
        model=MODEL, # Can use agent_id too if an agent is pre-created
    ) as run_ctx:
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Can you write a hello_world.py file and then check it for security vulnerabilities using available tools?",
        )

        print("Remote No-Auth MCP - All run entries:")
        for entry in run_result.output_entries:
            print(f"{entry}\n")
        print(f"Remote No-Auth MCP - Final Response: {run_result.output_as_text}")

# if __name__ == "__main__":
#     asyncio.run(main_remote_no_auth_mcp())

Here, MCPClientSSE is used with SSEServerParams pointing to the remote URL. The agent can then leverage tools provided by this remote MCP server. Streaming follows the same pattern as the local MCP example, using run_stream_async.

Scenario 3: Using a Remote MCP Server With Authentication (OAuth)

For services requiring OAuth2 authentication (like Linear, Jira, etc.), the process involves a few more steps to handle the authorization flow.

from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import webbrowser
from mistralai.extra.mcp.auth import build_oauth_params

CALLBACK_PORT = 16010 # Ensure this port is free

# Callback server setup (simplified from source)
def run_callback_server_util(callback_func, auth_response_dict):
    class OAuthCallbackHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if "/callback" in self.path or "/oauth/callback" in self.path: # More robust check
                auth_response_dict["url"] = self.path
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write(b"<html><body>Authentication successful. You may close this window.</body></html>")
                callback_func() # Signal completion
                threading.Thread(target=self.server.shutdown).start()
            else:
                self.send_response(404)
                self.end_headers()

    server_address = ("localhost", CALLBACK_PORT)
    httpd = HTTPServer(server_address, OAuthCallbackHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start() # Use daemon thread
    redirect_url = f"http://localhost:{CALLBACK_PORT}/oauth/callback"
    return httpd, redirect_url

async def main_remote_auth_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    server_url = "https://mcp.linear.app/sse" # Example: Linear MCP
    mcp_client_auth = MCPClientSSE(sse_params=SSEServerParams(url=server_url))

    callback_event = asyncio.Event()
    event_loop = asyncio.get_event_loop()
    auth_response_holder = {"url": ""}

    if await mcp_client_auth.requires_auth():
        httpd, redirect_url = run_callback_server_util(
            lambda: event_loop.call_soon_threadsafe(callback_event.set),
            auth_response_holder
        )
        try:
            oauth_params = await build_oauth_params(mcp_client_auth.base_url, redirect_url=redirect_url)
            mcp_client_auth.set_oauth_params(oauth_params=oauth_params)
            login_url, state = await mcp_client_auth.get_auth_url_and_state(redirect_url)

            print(f"Please go to this URL and authorize: {login_url}")
            webbrowser.open(login_url, new=2)
            await callback_event.wait() # Wait for OAuth callback

            token = await mcp_client_auth.get_token_from_auth_response(
                auth_response_holder["url"], redirect_url=redirect_url, state=state
            )
            mcp_client_auth.set_auth_token(token)
            print("Authentication successful.")
        except Exception as e:
            print(f"Error during authentication: {e}")
            return # Exit if auth fails
        finally:
            if 'httpd' in locals() and httpd:
                httpd.shutdown()
                httpd.server_close()
    
    async with RunContext(model=MODEL) as run_ctx: # Or agent_id
        await run_ctx.register_mcp_client(mcp_client=mcp_client_auth)

        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me which projects I have in my Linear workspace.",
        )
        print(f"Remote Auth MCP - Final Response: {run_result.output_as_text}")

# if __name__ == "__main__":
#     asyncio.run(main_remote_auth_mcp())

This involves setting up a local HTTP server to catch the OAuth redirect, guiding the user through the provider's authorization page, exchanging the received code for an access token, and then configuring the MCPClientSSE with this token. Once authenticated, the agent can interact with the protected MCP service. Streaming again follows the established pattern.
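The callback URL captured by the local server carries the authorization `code` and `state` as query parameters; extracting them is plain URL parsing, shown here with the standard library (independent of the SDK helpers used above, and with hypothetical parameter values):

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical callback path, as the local HTTP server would capture it
# in auth_response_holder["url"].
callback_path = "/oauth/callback?code=abc123&state=xyz789"

params = parse_qs(urlparse(callback_path).query)
auth_code = params["code"][0]  # exchanged for an access token
state = params["state"][0]     # must match the state from get_auth_url_and_state
print(auth_code, state)
```

In the example above, this extraction and the token exchange are handled for you by `get_token_from_auth_response`; matching the returned `state` against the one issued earlier protects against CSRF in the OAuth flow.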

Conclusion: The Future is Agentic and Interconnected

The Mistral Agents API, especially when augmented by the Model Context Protocol, offers a robust and flexible platform for building next-generation AI applications. By enabling agents to not only reason and communicate but also to interact with a vast ecosystem of tools, data sources, and services, developers can create truly intelligent systems capable of tackling complex, real-world problems. Whether you're automating intricate workflows, providing deeply contextualized assistance, or pioneering new forms of human-AI collaboration, the combination of Mistral Agents and MCP provides the foundational toolkit for this exciting future. As the MCP standard gains wider adoption, the potential for creating interconnected and highly capable AI agents will only continue to grow.

