Artificial Intelligence (AI) is rapidly moving beyond simply generating text or recognizing images. The next frontier is about AI that can take action, solve problems, and interact with the world in meaningful ways. Mistral AI, a prominent name in the field, has taken a significant step in this direction with its Mistral Agents API. This powerful toolkit allows developers to build sophisticated AI agents that can do much more than traditional language models.
At its core, the Agents API is designed to overcome the limitations of standard AI models, which are often great at understanding and generating language but struggle with performing actions, remembering past interactions consistently, or using external tools effectively. The Mistral Agents API tackles these challenges by equipping its powerful language models with features like built-in connectors to various tools, persistent memory across conversations, and the ability to coordinate complex tasks.
Think of it like upgrading from a very knowledgeable librarian who can only talk about books to a team of expert researchers who can not only access information but also conduct experiments, write reports, and collaborate with each other. This new API serves as the foundation for creating enterprise-grade AI applications that can automate workflows, assist with complex decision-making, and provide truly interactive experiences.
What Makes Mistral Agents So Capable?

Traditional language models, while proficient at text generation, often fall short when it comes to executing actions or remembering information across extended interactions. The Mistral Agents API directly addresses these limitations by synergizing Mistral's cutting-edge language models with a suite of powerful features designed for agentic workflows.
Core Capabilities:
At its heart, the Agents API provides:
- Built-in Connectors: These are pre-deployed tools that agents can call on demand. They include:
  - Code Execution: Allows agents to run Python code in a secure sandbox, useful for calculations, data analysis, and scientific computing.
  - Web Search: Empowers agents with access to up-to-date information from the internet, significantly improving response accuracy and relevance. For instance, on the SimpleQA benchmark, Mistral Large with web search achieved a 75% score, a massive improvement over 23% without it.
  - Image Generation: Leveraging models like Black Forest Labs' FLUX1.1 [pro] Ultra, agents can create diverse images for applications ranging from educational aids to marketing graphics.
  - Document Library: Enables agents to access and utilize documents from Mistral Cloud, powering integrated Retrieval-Augmented Generation (RAG) to enhance their knowledge base.
  - MCP Tools: Facilitates seamless integration with external systems via the Model Context Protocol, which we'll explore in depth later in this article.
- Persistent Memory: Agents can maintain context across conversations, leading to more coherent and meaningful long-term interactions.
- Agentic Orchestration: The API allows for the coordination of multiple agents, each potentially specializing in different tasks, to collaboratively solve complex problems.
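To make this concrete, here is a minimal sketch of attaching several built-in connectors to a single agent via the Python SDK's `beta.agents` namespace (used later in this article). The tool type names mirror the list above; the agent's name and description are illustrative, and connector availability may vary by account:

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# One agent, several built-in connectors (a sketch; tool names follow the list above).
research_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="Research Assistant",
    description="Searches the web, runs Python code, and generates images.",
    tools=[
        {"type": "web_search"},        # up-to-date information from the internet
        {"type": "code_interpreter"},  # sandboxed Python execution
        {"type": "image_generation"},  # image creation
    ],
)
print(research_agent.id)  # reference this ID when starting conversations
```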
This API is not merely an extension of their Chat Completion API; it's a dedicated framework specifically engineered to simplify the implementation of agentic use cases. It's designed to be the backbone of enterprise-grade agentic platforms, enabling businesses to deploy AI in more practical, impactful, and action-oriented ways.
Mistral Agents in Action: Real-World Applications
The versatility of the Agents API is showcased through various innovative applications:
- Coding Assistant with GitHub: An agentic workflow where one agent oversees a developer agent (powered by Devstral) that interacts with GitHub, automating software development tasks with full repository authority.
- Linear Tickets Assistant: An intelligent assistant using a multi-server MCP architecture to transform call transcripts into Product Requirements Documents (PRDs), then into actionable Linear issues, and subsequently track project deliverables.
- Financial Analyst: An advisory agent orchestrating multiple MCP servers to source financial metrics, compile insights, and securely archive results, demonstrating complex data aggregation and analysis.
- Travel Assistant: A comprehensive AI tool to help users plan trips, book accommodations, and manage various travel-related needs.
- Nutrition Assistant: An AI-powered diet companion that helps users set goals, log meals, receive personalized food suggestions, track daily progress, and find restaurants aligning with their nutritional targets.
Memory, Context, and Stateful Conversations
A cornerstone of the Agents API is its robust conversation management system. It ensures that interactions are stateful, meaning context is retained over time. Developers can initiate conversations in two primary ways:
- With an Agent: By specifying an `agent_id`, you leverage the pre-configured capabilities, tools, and instructions of a specific agent.
- Direct Access: You can start a conversation by directly specifying the model and completion parameters, providing quick access to built-in connectors without a pre-defined agent.
Each conversation maintains a structured history through "conversation entries," ensuring context is meticulously preserved. This statefulness allows developers to view past conversations, continue any interaction seamlessly, or even branch off to initiate new conversational paths from any point in the history. Furthermore, the API supports streaming outputs, enabling real-time updates and dynamic interactions.
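As a brief sketch of what this statefulness looks like in the Python SDK (assuming the `start` and `append` conversation methods from Mistral's documentation; the agent ID is a placeholder, and attribute names on the response may vary by SDK version):

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Start a stateful conversation with a pre-configured agent.
first = client.beta.conversations.start(
    agent_id="<agent_id>",  # placeholder
    inputs="Who is Albert Einstein?",
)

# Continue it: "he" resolves against the stored conversation history.
followup = client.beta.conversations.append(
    conversation_id=first.conversation_id,
    inputs="When was he born?",
)
print(followup.outputs[-1].content)  # the latest assistant reply entry
```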
Agent Orchestration: The Power of Collaboration
The true differentiating power of the Agents API emerges in its ability to orchestrate multiple agents. This isn't about a single monolithic AI; it's about a symphony of specialized agents working in concert. Through dynamic orchestration, agents can be added or removed from a conversation as needed, each contributing its unique skills to tackle different facets of a complex problem.
To build an agentic workflow with handoffs:
- Create Agents: Define and create all necessary agents, each equipped with specific tools, models, and instructions tailored to their role.
- Define Handoffs: Specify which agents can delegate tasks to others. For example, a primary customer service agent might hand off a technical query to a specialized troubleshooting agent or a billing inquiry to a finance agent.
These handoffs enable a seamless chain of actions. A single user request can trigger a cascade of tasks across multiple agents, each autonomously handling its designated part. This collaborative approach unlocks unprecedented efficiency and effectiveness in problem-solving for sophisticated real-world applications.
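As a hedged sketch of how such a handoff chain might be wired up with the Python SDK (the `handoffs` field follows Mistral's agent documentation; the agent names and roles are illustrative):

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Two specialized agents (illustrative roles).
triage_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="Customer Triage",
    description="Routes customer requests to the right specialist.",
)
billing_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="Billing Specialist",
    description="Handles billing inquiries.",
)

# Allow the triage agent to delegate billing questions to the billing agent.
client.beta.agents.update(
    agent_id=triage_agent.id,
    handoffs=[billing_agent.id],
)
```

A conversation started with the triage agent can then flow to the billing agent automatically when a billing question arrives.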
Basic Usage of the Mistral Agents API
Having understood the capabilities of the Mistral Agents API, let's explore how to interact with it. The API introduces three new primary objects:
- Agents: These are configurations that augment a model's abilities. An agent definition includes pre-selected values like the model to use, tools it can access, system instructions (prompts), and default completion parameters.
- Conversation: This object represents the history of interactions and past events with an assistant. It includes user messages, assistant replies, and records of tool executions.
- Entry: An entry is an individual action or event within a conversation, created by either the user or an assistant. It offers a flexible and expressive way to represent interactions, allowing for finer control over describing events.
Notably, you can leverage many features, like stateful conversations and built-in connectors, without explicitly creating and referencing a formal "Agent" object first. This provides flexibility for simpler use cases.
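For example, "direct access" can be sketched as starting a stateful conversation from a model name alone, with no agent object involved (parameter and attribute names here are assumptions based on the direct-access mode described above; verify against the SDK version you use):

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Direct access: no pre-created agent, just a model plus inputs.
response = client.beta.conversations.start(
    model="mistral-medium-latest",
    inputs="Summarize the plot of Hamlet in two sentences.",
)
print(response.conversation_id)      # reusable for follow-up turns
print(response.outputs[-1].content)  # the assistant's reply entry
```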
Creating an Agent
To define a specialized agent, you make a request to the API specifying several parameters:
- `model`: The underlying Mistral model (e.g., `mistral-medium-latest`).
- `name`: A descriptive name for your agent.
- `description`: A brief explanation of the agent's purpose or the task it's designed to accomplish.
- `instructions` (optional): The system prompt that guides the agent's behavior and responses.
- `tools` (optional): A list of tools the agent can use. Tool types include:
  - `function`: User-defined tools, similar to standard function calling in chat completions.
  - `web_search` / `web_search_premium`: Built-in web search tools.
  - `code_interpreter`: The built-in tool for code execution.
  - `image_generation`: The built-in tool for generating images.
- `completion_args` (optional): Standard chat completion sampler arguments like `temperature`, `top_p`, etc.
Here’s an example cURL request to create a simple agent:
```bash
curl --location "https://api.mistral.ai/v1/agents" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "model": "mistral-medium-latest",
    "name": "Simple Agent",
    "description": "A simple Agent with persistent state."
  }'
```
Updating an Agent
Agents can be updated after creation. The arguments are the same as those for creation. This operation results in a new agent object with the updated settings, effectively allowing for versioning of your agents.
```bash
curl --location "https://api.mistral.ai/v1/agents/<agent_id>" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "completion_args": {
      "temperature": 0.3,
      "top_p": 0.95
    },
    "description": "An edited simple agent."
  }'
```
Managing Conversations
Once an agent is created (or if you're using direct access), you can initiate conversations.
Starting a Conversation:
You need to provide:
- `agent_id`: The ID of the agent (if using a pre-defined agent).
- `inputs`: The initial message, which can be a simple string or a list of message objects.

This request returns a `conversation_id`.
Example (simple string input):
```bash
curl --location "https://api.mistral.ai/v1/conversations" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "inputs": "Who is Albert Einstein?",
    "stream": false,
    "agent_id": "<agent_id>"
  }'
```
Continuing a Conversation:
To add to an existing conversation, provide:
- `conversation_id`: The ID of the conversation to continue.
- `inputs`: The next message or reply (string or list of messages).

Each continuation provides a new `conversation_id` if the state is stored. You can opt out of cloud storage by setting `store` to `false`. The `handoff_execution` parameter controls how agent handoffs are managed: `server` (the default, handled by Mistral's cloud) or `client` (the response is returned to the user to manage the handoff).
Example:
```bash
curl --location "https://api.mistral.ai/v1/conversations/<conv_id>" \
  --header 'Content-Type: application/json' \
  --header 'Accept: application/json' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "inputs": "Translate to French.",
    "stream": false,
    "store": true,
    "handoff_execution": "server"
  }'
```
Streaming Output
For real-time interactions, both starting and continuing conversations can be streamed by setting `stream: true` and ensuring the `Accept` header is `text/event-stream`.
```bash
curl --location "https://api.mistral.ai/v1/conversations" \
  --header 'Content-Type: application/json' \
  --header 'Accept: text/event-stream' \
  --header "Authorization: Bearer $MISTRAL_API_KEY" \
  --data '{
    "inputs": "Who is Albert Einstein?",
    "stream": true,
    "agent_id": "ag_06811008e6e07cb48000fd3f133e1771"
  }'
```
When streaming, you'll receive various event types indicating the progress and content of the response, such as:
- `conversation.response.started`: Marks the beginning of the conversation response.
- `message.output.delta`: A chunk of content (tokens) for the model's reply.
- `tool.execution.started` / `tool.execution.done`: Indicate the lifecycle of a tool execution.
- `agent.handoff.started` / `agent.handoff.done`: Signal the start and end of an agent handoff.
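Since these are standard Server-Sent Events, a minimal consumer can be sketched with plain HTTP in Python (using `requests` here; the Python SDK also offers streaming helpers, and the `content` field read below is an assumption, so inspect the actual event payloads for your API version):

```python
import json
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/conversations",
    headers={
        "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        "Content-Type": "application/json",
        "Accept": "text/event-stream",
    },
    json={"inputs": "Who is Albert Einstein?", "stream": True, "agent_id": "<agent_id>"},
    stream=True,
)

event_type = None
for raw in resp.iter_lines(decode_unicode=True):
    if not raw:
        continue  # blank lines separate SSE events
    if raw.startswith("event:"):
        event_type = raw.split(":", 1)[1].strip()
    elif raw.startswith("data:") and event_type == "message.output.delta":
        data = json.loads(raw.split(":", 1)[1].strip())
        print(data.get("content", ""), end="", flush=True)  # assumed field name
```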
These basic operations form the foundation for building dynamic and interactive applications with Mistral agents.
Integrating Mistral Agents API with the Model Context Protocol (MCP)
While the built-in connectors offer significant power, the true extensibility of Mistral Agents shines when combined with the Model Context Protocol (MCP).
What is MCP?
The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with diverse external data sources, tools, and APIs. It provides a standardized, secure interface that allows AI systems to access and utilize real-world contextual information efficiently. Instead of building and maintaining numerous bespoke integrations, MCP offers a unified way for AI models to connect to live data and systems, leading to more relevant, accurate, and powerful responses. For detailed information, refer to the official Model Context Protocol documentation.
Mistral's Python SDK provides seamless integration mechanisms for connecting agents with MCP Clients. This allows your agents to interact with any service or data source that exposes an MCP interface, whether it's a local tool, a third-party API, or a proprietary enterprise system.

We'll explore three common scenarios for using MCP with Mistral Agents: a local MCP server, a remote MCP server without authentication, and a remote MCP server with authentication. All examples will utilize asynchronous Python code.
Scenario 1: Using a Local MCP Server
Imagine you have a local script or service (e.g., a custom weather information provider) that you want your Mistral agent to interact with.
Step 1: Initialize the Mistral Client and Setup
Import the necessary modules from `mistralai` and `mcp`. This includes `Mistral`, `RunContext`, `StdioServerParameters` (for local process-based MCP servers), and `MCPClientSTDIO`.
```python
import asyncio
import os
import random
from pathlib import Path

from mistralai import Mistral
from mistralai.extra.run.context import RunContext
from mcp import StdioServerParameters
from mistralai.extra.mcp.stdio import MCPClientSTDIO
from mistralai.types import BaseModel

cwd = Path(__file__).parent
MODEL = "mistral-medium-latest"  # Or your preferred model


async def main_local_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    # Define parameters for the local MCP server (e.g., running a Python script)
    server_params = StdioServerParameters(
        command="python",
        args=[str((cwd / "mcp_servers/stdio_server.py").resolve())],  # Path to your MCP server script
        env=None,
    )

    # Create an agent
    weather_agent = client.beta.agents.create(
        model=MODEL,
        name="Local Weather Teller",
        instructions="You can tell the weather using a local MCP tool.",
        description="Fetches weather from a local source.",
    )

    # Define the expected output format (optional, but good for structured data)
    class WeatherResult(BaseModel):
        user: str
        location: str
        temperature: float

    # Create a Run Context
    async with RunContext(
        agent_id=weather_agent.id,
        output_format=WeatherResult,  # Optional: for structured output
        continue_on_fn_error=True,
    ) as run_ctx:
        # Create and register the MCP client
        mcp_client = MCPClientSTDIO(stdio_params=server_params)
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

        # Example of registering a local Python function as a tool
        @run_ctx.register_func
        def get_location(name: str) -> str:
            """Function to get a random location for a user."""
            return random.choice(["New York", "London", "Paris"])

        # Run the agent
        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me the weather in John's location currently.",
        )

    print("Local MCP - All run entries:")
    for entry in run_result.output_entries:
        print(f"{entry}\n")

    if run_result.output_as_model:
        print(f"Local MCP - Final model output: {run_result.output_as_model}")
    else:
        print(f"Local MCP - Final text output: {run_result.output_as_text}")


# if __name__ == "__main__":
#     asyncio.run(main_local_mcp())
```
In this setup, `stdio_server.py` would be your script implementing the MCP server logic, communicating over stdin/stdout. The `RunContext` manages the interaction, and `register_mcp_client` makes the local MCP server available as a tool to the agent. You can also register local Python functions directly as tools using the `@run_ctx.register_func` decorator.
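For reference, the local server side can be sketched with FastMCP from the official `mcp` Python SDK; the weather tool below is a hypothetical stand-in for whatever logic your `stdio_server.py` actually implements:

```python
# mcp_servers/stdio_server.py - a minimal stdio MCP server sketch.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(location: str) -> str:
    """Return the current weather for a location (stubbed)."""
    return f"It is sunny and 24°C in {location}."  # illustrative canned data

if __name__ == "__main__":
    mcp.run(transport="stdio")  # communicate over stdin/stdout
```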
Streaming with a Local MCP Server:
To stream, use `client.beta.conversations.run_stream_async` and process events as they arrive:
```python
# Inside the RunContext, after registering the MCP client:
from mistralai.extra.run.result import RunResult  # assumed import path; may vary by SDK version

events = await client.beta.conversations.run_stream_async(
    run_ctx=run_ctx,
    inputs="Tell me the weather in John's location currently, stream style.",
)

streamed_run_result = None
async for event in events:
    if isinstance(event, RunResult):
        streamed_run_result = event  # the final aggregated result
    else:
        print(f"Stream event: {event}")

if streamed_run_result:
    print(f"Final text output: {streamed_run_result.output_as_text}")
```
Scenario 2: Using a Remote MCP Server Without Authentication
Many public or internal services might expose an MCP interface over HTTP/SSE without requiring authentication.
```python
# Continues from the earlier setup (asyncio, os, Mistral, RunContext, MODEL).
from mistralai.extra.mcp.sse import MCPClientSSE, SSEServerParams


async def main_remote_no_auth_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    # Define the URL for the remote MCP server (e.g., Semgrep's public MCP)
    server_url = "https://mcp.semgrep.ai/sse"
    mcp_client = MCPClientSSE(sse_params=SSEServerParams(url=server_url, timeout=100))

    async with RunContext(
        model=MODEL,  # Can use agent_id too if an agent is pre-created
    ) as run_ctx:
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Can you write a hello_world.py file and then check it for security vulnerabilities using available tools?",
        )

    print("Remote No-Auth MCP - All run entries:")
    for entry in run_result.output_entries:
        print(f"{entry}\n")
    print(f"Remote No-Auth MCP - Final Response: {run_result.output_as_text}")


# if __name__ == "__main__":
#     asyncio.run(main_remote_no_auth_mcp())
```
Here, `MCPClientSSE` is used with `SSEServerParams` pointing to the remote URL. The agent can then leverage tools provided by this remote MCP server. Streaming follows the same pattern as the local MCP example, using `run_stream_async`.
Scenario 3: Using a Remote MCP Server With Authentication (OAuth)
For services requiring OAuth2 authentication (like Linear, Jira, etc.), the process involves a few more steps to handle the authorization flow.
```python
# Continues from the earlier setup; adds OAuth handling for protected MCP servers.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import webbrowser

from mistralai.extra.mcp.auth import build_oauth_params

CALLBACK_PORT = 16010  # Ensure this port is free


# Callback server setup (simplified from source)
def run_callback_server_util(callback_func, auth_response_dict):
    class OAuthCallbackHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if "/callback" in self.path or "/oauth/callback" in self.path:  # More robust check
                auth_response_dict["url"] = self.path
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write(b"<html><body>Authentication successful. You may close this window.</body></html>")
                callback_func()  # Signal completion
                threading.Thread(target=self.server.shutdown).start()
            else:
                self.send_response(404)
                self.end_headers()

    server_address = ("localhost", CALLBACK_PORT)
    httpd = HTTPServer(server_address, OAuthCallbackHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()  # Use a daemon thread
    redirect_url = f"http://localhost:{CALLBACK_PORT}/oauth/callback"
    return httpd, redirect_url


async def main_remote_auth_mcp():
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key=api_key)

    server_url = "https://mcp.linear.app/sse"  # Example: Linear MCP
    mcp_client_auth = MCPClientSSE(sse_params=SSEServerParams(url=server_url))

    callback_event = asyncio.Event()
    event_loop = asyncio.get_event_loop()
    auth_response_holder = {"url": ""}

    if await mcp_client_auth.requires_auth():
        httpd, redirect_url = run_callback_server_util(
            lambda: event_loop.call_soon_threadsafe(callback_event.set),
            auth_response_holder,
        )
        try:
            oauth_params = await build_oauth_params(mcp_client_auth.base_url, redirect_url=redirect_url)
            mcp_client_auth.set_oauth_params(oauth_params=oauth_params)

            login_url, state = await mcp_client_auth.get_auth_url_and_state(redirect_url)
            print(f"Please go to this URL and authorize: {login_url}")
            webbrowser.open(login_url, new=2)

            await callback_event.wait()  # Wait for the OAuth callback

            token = await mcp_client_auth.get_token_from_auth_response(
                auth_response_holder["url"], redirect_url=redirect_url, state=state
            )
            mcp_client_auth.set_auth_token(token)
            print("Authentication successful.")
        except Exception as e:
            print(f"Error during authentication: {e}")
            return  # Exit if auth fails
        finally:
            if "httpd" in locals() and httpd:
                httpd.shutdown()
                httpd.server_close()

    async with RunContext(model=MODEL) as run_ctx:  # Or agent_id
        await run_ctx.register_mcp_client(mcp_client=mcp_client_auth)

        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me which projects I have in my Linear workspace.",
        )

    print(f"Remote Auth MCP - Final Response: {run_result.output_as_text}")


# if __name__ == "__main__":
#     asyncio.run(main_remote_auth_mcp())
```
This involves setting up a local HTTP server to catch the OAuth redirect, guiding the user through the provider's authorization page, exchanging the received code for an access token, and then configuring the `MCPClientSSE` with this token. Once authenticated, the agent can interact with the protected MCP service. Streaming again follows the established pattern.
Conclusion: The Future is Agentic and Interconnected
The Mistral Agents API, especially when augmented by the Model Context Protocol, offers a robust and flexible platform for building next-generation AI applications. By enabling agents to not only reason and communicate but also to interact with a vast ecosystem of tools, data sources, and services, developers can create truly intelligent systems capable of tackling complex, real-world problems. Whether you're automating intricate workflows, providing deeply contextualized assistance, or pioneering new forms of human-AI collaboration, the combination of Mistral Agents and MCP provides the foundational toolkit for this exciting future. As the MCP standard gains wider adoption, the potential for creating interconnected and highly capable AI agents will only continue to grow.