
How to Run Mistral Small 3 via OpenRouter API: A Comprehensive Guide

This guide provides an in-depth look at Mistral Small 3, its performance benchmarks, and a step-by-step tutorial on utilizing it through the OpenRouter API.

Emmanuel Mumba

Updated on January 31, 2025

In the rapidly evolving landscape of artificial intelligence, the demand for efficient and powerful language models has never been higher. Mistral Small 3 emerges as a noteworthy contender, offering a balance between performance and resource efficiency. When paired with OpenRouter, a unified API gateway, developers can seamlessly integrate Mistral Small 3 into their applications. This guide provides an in-depth look at Mistral Small 3, its performance benchmarks, and a step-by-step tutorial on utilizing it through the OpenRouter API.

💡
For developers seeking seamless API integration, Apidog offers a powerful platform that simplifies the process of working with models like Mistral Small 3. With its intuitive interface and robust features, Apidog can help streamline your workflow and enhance your application's performance, making it an ideal tool for integrating advanced language models efficiently.

Understanding Mistral Small 3

Mistral Small 3 is a language model developed to deliver high-quality text generation while maintaining efficiency. Its design focuses on providing robust performance without the extensive computational demands typically associated with larger models.

Key Features

  • Efficiency: Optimized for low latency, making it suitable for high-volume applications.
  • Versatility: Capable of handling tasks such as translation, summarization, and sentiment analysis.
  • Cost-Effective: Offers a balance between performance and resource utilization, making it accessible for various applications.
  • Scalability: Ideal for deployment in AI-driven businesses and applications where cost and response time are crucial.

Performance Benchmarks

Evaluating a language model's performance is crucial for understanding its capabilities. Here is how Mistral Small 3 compares with other prominent models across a range of benchmarks.

Mistral Small 3 stands out as a strong competitor to larger models like Llama 3.3 70B and Qwen 32B, offering an excellent open-source alternative to proprietary models such as GPT-4o mini. It matches the performance of Llama 3.3 70B in instruction-following tasks while being over three times faster on the same hardware.

This pre-trained and instruction-tuned model is designed to handle the vast majority of generative AI tasks that require solid language comprehension and low-latency instruction following.

Mistral Small 3 has been optimized to deliver top-tier performance while remaining small enough for local deployment. With fewer layers than competing models, it significantly reduces the time per forward pass. Achieving over 81% accuracy on MMLU and generation speeds of 150 tokens per second, it stands as the most efficient model in its category.

Both pretrained and instruction-tuned checkpoints are available under Apache 2.0, offering a powerful base for accelerating progress. It's worth noting that Mistral Small 3 has not been trained with reinforcement learning or synthetic data, placing it earlier in the model development pipeline than models like DeepSeek R1, though it serves as a solid foundation for building reasoning capabilities. The open-source community is expected to adopt and customize the model for further advancements.

Performance / Human Evaluations

Instruct performance

The instruction-tuned model delivers performance that competes with open-weight models three times its size, as well as with the proprietary GPT-4o mini model, across benchmarks in Code, Math, General Knowledge, and Instruction Following.

Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline, so numbers may vary slightly from previously reported results (Qwen2.5-32B-Instruct, Llama-3.3-70B-Instruct, Gemma-2-27B-IT). Judge-based evals such as WildBench, Arena-Hard, and MT-Bench used gpt-4o-2024-05-13 as the judge.

Pretraining performance

Mistral Small 3, a 24B model, delivers the best performance within its size class and competes with models three times larger, such as Llama 3.3 70B.

When to Use Mistral Small 3

Across various industries, several distinct use cases for pre-trained models of this size have emerged:

  • Fast-response conversational assistance: Mistral Small 3 excels in situations where quick, accurate responses are crucial. It is ideal for virtual assistants in environments that demand immediate feedback and near real-time interactions.
  • Low-latency function calling: The model efficiently handles rapid function execution, making it highly suitable for automated or agentic workflows.
  • Fine-tuning for subject matter expertise: Mistral Small 3 can be fine-tuned to specialize in specific fields, creating highly accurate subject matter experts. This is especially valuable in areas like legal advice, medical diagnostics, and technical support, where domain-specific knowledge is essential.
  • Local inference: Perfect for hobbyists and organizations managing sensitive or proprietary data, Mistral Small 3 can be run privately on a single RTX 4090 or a MacBook with 32GB RAM when quantized. A local-inference sketch follows below.

While Mistral Small 3 is more compact, it delivers competitive performance across these benchmarks, highlighting its efficiency and effectiveness.
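For the local-inference route, a convenient option is serving a quantized build through a local runner such as Ollama, which exposes an OpenAI-compatible HTTP endpoint. A minimal sketch, assuming Ollama is installed and running and a Mistral Small build has been pulled (the mistral-small tag is an assumption; check Ollama's model library for the exact name):

import requests

# Ollama's default OpenAI-compatible endpoint on localhost
response = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "mistral-small",  # assumed tag; verify with `ollama list`
        "messages": [{"role": "user", "content": "Summarize what an API gateway does."}],
    },
)
print(response.json()["choices"][0]["message"]["content"])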

Why Use OpenRouter API for Mistral Small 3?

OpenRouter serves as a unified API gateway, simplifying the integration of various language models into applications. By leveraging OpenRouter, developers can access Mistral Small 3 without the need for multiple API keys or complex configurations.

Benefits of OpenRouter API

  • Unified Access: Single API key to access multiple AI models.
  • Simplified Billing: Centralized payment system for various models.
  • Load Balancing: Ensures optimal request handling and reduced downtime.
  • Easy Integration: Simple API endpoints and standardized request formats.
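Because OpenRouter's API is OpenAI-compatible, you can also reach Mistral Small 3 through the official openai Python SDK by pointing it at OpenRouter's base URL. A minimal sketch (the model slug is an assumption; confirm it on openrouter.ai/models):

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your_api_key_here",
)

completion = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",  # assumed slug
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms."}],
)
print(completion.choices[0].message.content)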

Integrating Mistral Small 3 via OpenRouter API

Step 1: Setting Up Your OpenRouter Account

Registration:

  • Visit the OpenRouter website at https://openrouter.ai and sign up for an account.
  • After registration, verify your email address to activate your account.

Generating an API Key:

  • In your OpenRouter dashboard, open the Keys section.
  • Click on "Create Key" and provide a descriptive name for easy reference.
  • Store this API key securely, as it will be used for authentication in your API requests.
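To avoid hardcoding the key in source files, a common pattern is to read it from an environment variable before making requests. A minimal sketch (the OPENROUTER_API_KEY name is just a convention, not an OpenRouter requirement):

import os

# Fail fast if the key has not been exported in the shell
API_KEY = os.environ.get("OPENROUTER_API_KEY")
if not API_KEY:
    raise RuntimeError("Set the OPENROUTER_API_KEY environment variable first.")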

Step 2: Installing Necessary Dependencies

To interact with the OpenRouter API, you'll need the requests library in Python. If it's not already installed, you can add it using the following command:

pip install requests

Step 3: Crafting Your API Request

With your API key ready and dependencies installed, you can construct a request to the OpenRouter API to utilize Mistral Small 3. Below is a detailed example:

import requests

# Your OpenRouter API key
API_KEY = "your_api_key_here"

# OpenRouter API endpoint
API_URL = "https://openrouter.ai/api/v1/chat/completions"

# Headers for the request
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Payload for the request
payload = {
    "model": "mistral-small-3",
    "messages": [
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    "temperature": 0.7
}

# Sending the request
response = requests.post(API_URL, headers=headers, json=payload)

# Parsing the response
if response.status_code == 200:
    response_data = response.json()
    choices = response_data.get("choices", [])
    # Guard against an empty choices list before indexing into it
    assistant_message = choices[0]["message"]["content"] if choices else ""
    print("Assistant:", assistant_message)
else:
    print(f"Request failed with status code {response.status_code}: {response.text}")

Step 4: Handling API Responses

Upon a successful request, the API will return a JSON response containing the model's output. Here's an example of what the response might look like:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "mistral-small-3",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is a type of computing that uses quantum bits (qubits)..."
      },
      "finish_reason": "stop"
    }
  ]
}

Additional API Request Examples

1. Summarization Task

payload["messages"][0]["content"] = "Summarize the benefits of renewable energy."
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())

2. Sentiment Analysis

payload["messages"][0]["content"] = "Analyze the sentiment of this review: 'The product was amazing and exceeded expectations!'"
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
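3. Streaming Responses

OpenRouter also supports streaming via server-sent events, following the same OpenAI-compatible format used above. The sketch below is a minimal, self-contained example that prints tokens as they arrive; the exact event framing should be verified against OpenRouter's documentation.

import json
import requests

API_KEY = "your_api_key_here"
API_URL = "https://openrouter.ai/api/v1/chat/completions"
headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

payload = {
    "model": "mistralai/mistral-small-24b-instruct-2501",  # assumed slug; confirm on openrouter.ai/models
    "messages": [{"role": "user", "content": "Write a haiku about APIs."}],
    "stream": True,
}

with requests.post(API_URL, headers=headers, json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue
        decoded = line.decode("utf-8")
        # SSE data lines are prefixed with "data: "; other lines (comments) are skipped
        if decoded.startswith("data: "):
            data = decoded[len("data: "):]
            if data == "[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0]["delta"].get("content", "")
            print(delta, end="", flush=True)
print()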

Best Practices for Using Mistral Small 3 with OpenRouter

  • Optimize Requests: Reduce API costs by batching requests or limiting response length.
  • Monitor Usage: Regularly check API usage limits to avoid unexpected costs.
  • Adjust Temperature: Control output randomness to fine-tune response generation.
  • Implement Error Handling: Ensure robust handling for failed requests or API downtime (see the retry sketch below).
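For the error-handling point above, here is a minimal retry-with-exponential-backoff sketch that reuses API_URL, headers, and payload from Step 3 (the set of retryable status codes is a reasonable assumption, not an exhaustive list):

import time
import requests

def post_with_retries(url, headers, payload, max_retries=3):
    """Send a chat completion request, retrying transient failures with backoff."""
    for attempt in range(max_retries):
        try:
            response = requests.post(url, headers=headers, json=payload, timeout=30)
        except requests.RequestException as exc:
            print(f"Network error: {exc}; retrying...")
        else:
            if response.status_code == 200:
                return response.json()
            if response.status_code not in (429, 500, 502, 503):
                response.raise_for_status()  # non-retryable client error
        time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s
    raise RuntimeError("Request failed after retries.")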

Conclusion

Mistral Small 3, when used via OpenRouter, provides an efficient and scalable solution for AI-driven applications. Its competitive performance, cost-effectiveness, and ease of integration make it a valuable tool for developers. By following this guide, you can seamlessly integrate Mistral Small 3 into your projects and leverage its capabilities for various natural language processing tasks.

Whether you're building chatbots, enhancing customer support, or automating content generation, Mistral Small 3 offers a powerful and accessible solution through the OpenRouter API.

