
GPT-4o mini API: New Features, Pricing, Usage, and More

On July 18th, 2024, OpenAI introduced GPT-4o mini, a cost-efficient and versatile AI model designed to make advanced AI capabilities accessible to a broader audience. This blog explores its features, benefits, pricing, and how to deploy the GPT-4o mini API faster.

On July 18th, 2024, OpenAI introduced GPT-4o mini, a major advancement in cost-efficient artificial intelligence. This new model is designed to bring advanced AI capabilities to a broader audience by significantly reducing the cost of deployment while maintaining high performance and versatility. In this blog, we will delve into the key aspects of GPT-4o mini, including its features, advantages, pricing, and usage.

💡
Speed up your GPT-4o mini API integration using Apidog, an all-in-one API development platform. With Apidog's pre-built code snippets and streamlined testing and management features, you can effortlessly incorporate the GPT-4o mini API into your applications, reducing development time and effort.

What is GPT-4o mini?

GPT-4o mini is a cutting-edge artificial intelligence model developed by OpenAI, designed to provide robust performance at a significantly reduced cost compared to previous models. It is part of OpenAI's broader initiative to make advanced AI more accessible and affordable.

GPT-4o mini official announcement image

For more details, visit OpenAI's official website.

What's New About the GPT-4o mini API?

OpenAI has introduced GPT-4o mini, heralded as their most cost-efficient small model to date, making advanced AI capabilities more accessible and affordable than ever before. Here are the key innovations and features of GPT-4o mini:

1. Unprecedented Cost Efficiency:

GPT-4o mini is priced at an extremely competitive rate of 15 cents per million input tokens and 60 cents per million output tokens, making it an order of magnitude more affordable than previous models and over 60% cheaper than GPT-3.5 Turbo.

2. Superior Performance Metrics:

  • Textual Intelligence: GPT-4o mini scores 82% on the MMLU benchmark, outperforming previous models.
  • Reasoning and Coding Skills: It excels in tasks requiring mathematical reasoning and coding proficiency, with scores of 87.0% on the MGSM benchmark (math reasoning) and 87.2% on HumanEval (coding performance).
Model evaluation scores across different AI models developed by OpenAI

3. Versatility in Task Handling:

The model can manage a broad range of tasks efficiently, from chaining multiple model calls and handling large volumes of context to providing fast, real-time text responses for customer interactions.

4. Multimodal Support:

Currently, GPT-4o mini supports text and vision inputs in the API. It is set to expand to include text, image, video, and audio inputs and outputs in the future.
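
Since vision input is already available, a text-plus-image request through the Chat Completions API can illustrate this. The following is a minimal sketch using the official openai Python SDK; it assumes an OPENAI_API_KEY environment variable, and the image URL is a placeholder:

```python
# Minimal sketch: sending a text + image prompt to GPT-4o mini via the
# Chat Completions API with the official `openai` SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in your environment; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```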

5. Extended Context Window:

With a context window of 128K tokens and support for up to 16K output tokens per request, GPT-4o mini is well-suited for tasks that involve extensive data input.

6. Enhanced Non-English Text Handling:

Thanks to an improved tokenizer shared with GPT-4o, the model is now more cost-effective in handling non-English text.

7. Advanced Safety Measures:

  • Built-in Safety: The model includes robust safety features from the pre-training phase through to post-training alignments, using reinforcement learning with human feedback (RLHF) to ensure reliable and accurate responses.
  • New Safety Techniques: GPT-4o mini is the first model to apply OpenAI's instruction hierarchy method, which enhances the model's defense against jailbreaks, prompt injections, and system prompt extractions, making it safer for large-scale applications.

8. Proven Practical Applications:

Trusted partners such as Ramp and Superhuman have already tested and found that GPT-4o mini significantly outperforms GPT-3.5 Turbo in practical tasks like extracting structured data and generating high-quality email responses.

9. Immediate Availability:

GPT-4o mini is available through the Assistants API, Chat Completions API, and Batch API. It is accessible to Free, Plus, and Team users on ChatGPT starting today, with Enterprise users gaining access next week.

10. Reduction in AI Costs:

Remarkably, the cost per token for GPT-4o mini has dropped by 99% since the introduction of text-davinci-003 in 2022, underscoring OpenAI's commitment to driving down costs while enhancing model capabilities.

Where can the GPT-4o mini API be accessed?

The GPT-4o mini API can be accessed through several OpenAI API endpoints:

  1. Assistants API
  2. Chat Completions API
  3. Batch API

Additionally, GPT-4o mini can be used within ChatGPT, where it is accessible to Free, Plus, and Team users starting on launch day (July 18th, 2024), with Enterprise users gaining access the following week.
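
To make these entry points concrete, here is a minimal sketch of a plain-text call to GPT-4o mini through the Chat Completions API using the official openai Python SDK. It assumes an OPENAI_API_KEY environment variable; the prompts and max_tokens value are placeholders:

```python
# Minimal sketch: a plain-text request to GPT-4o mini via the Chat Completions API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of GPT-4o mini in one sentence."},
    ],
    max_tokens=256,  # GPT-4o mini supports up to 16K output tokens per request
)

print(response.choices[0].message.content)
# Token counts reported by the API, useful for cost estimates:
print(response.usage.prompt_tokens, response.usage.completion_tokens)
```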

GPT-4o vs GPT-4o mini: What Are the Differences?

OpenAI has introduced two remarkable models, GPT-4o and GPT-4o mini, as part of their ongoing mission to make advanced artificial intelligence more accessible and versatile. While both models are natively multimodal, designed to process a combination of text, audio, and video inputs, and generate text, audio, and image outputs, they serve different purposes and audiences:

1. Model Size and Cost

  • GPT-4o: This is a full-sized, powerful model designed to handle extensive multimodal tasks. Naturally, it comes with higher computational requirements and costs.
  • GPT-4o mini: A lightweight version that is significantly more cost-efficient. It offers similar capabilities at a fraction of the cost, making it accessible to a broader audience.

2. Performance and Speed

  • GPT-4o: With its larger architecture, GPT-4o excels in handling intricate, resource-intensive tasks with superior performance. It is the go-to model for tasks that demand maximum AI power.
  • GPT-4o mini: While being smaller and cheaper, GPT-4o mini still outperforms GPT-3.5 Turbo in accuracy. It's designed to offer fast performance, making it suitable for real-time applications.

3. Current API Capabilities

  • Both Models: Currently, the API supports text and image inputs with text outputs.
  • Future Support: Additional modalities, including audio inputs and outputs, are planned for GPT-4o mini, ensuring both models remain at the cutting edge of AI capabilities.

4. Application Versatility

  • GPT-4o: Best suited for comprehensive AI applications that require seamless processing of multimodal data. It's ideal for high-stakes environments where every detail matters.
  • GPT-4o mini: Perfect for a wide range of applications, especially where cost efficiency and quick deployment are crucial. It's a great choice for scaling AI-driven solutions across various sectors.

5. Practical Use Cases

  • GPT-4o: Due to its extensive capabilities, GPT-4o is designed for use cases that involve heavy data processing, complex reasoning, and multi-faceted interactions.
  • GPT-4o mini: While it supports similar functions, GPT-4o mini shines in scenarios where affordability and speed are prioritized, such as real-time customer support and streamlined data analysis.

GPT-4o mini Pricing

GPT-4o mini is designed to be a cost-efficient AI model, making advanced artificial intelligence accessible to a broad range of users. Here are the pricing details for GPT-4o mini:

  • Input Tokens: 15 cents ($0.15) per million input tokens.
  • Output Tokens: 60 cents ($0.60) per million output tokens.

This pricing structure makes GPT-4o mini significantly more affordable than previous models. For instance, it is over 60% cheaper than GPT-3.5 Turbo, and an order of magnitude more cost-effective than other frontier models.

To put this in perspective:

  • Input tokens represent the text you send to the model for processing.
  • Output tokens represent the text the model generates as a response.

Click GPT-4o mini Pricing to get more information.

Cost Comparison

  • GPT-3.5 Turbo: GPT-4o mini is priced more than 60% lower than GPT-3.5 Turbo.
  • Other Frontier Models: GPT-4o mini offers an order of magnitude savings compared to other high-end AI models.

Practical Example

For a typical application, the cost could be calculated as follows:

  • Example Query: If you send a query with 1,000 words (approximately 1,500 tokens) and receive a response with 500 words (approximately 750 tokens), the cost would be:
    • Input: 1,500 tokens × $0.15 per 1,000,000 tokens = $0.000225
    • Output: 750 tokens × $0.60 per 1,000,000 tokens = $0.00045
    • Total Cost for Query: $0.000225 + $0.00045 = $0.000675

This minimal cost demonstrates how GPT-4o mini allows for the efficient processing of large amounts of data at a fraction of the cost of previous models, making it highly scalable for various applications.
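
As a quick sanity check of the arithmetic above, here is a small Python sketch (illustrative only) that estimates the cost of a request from its token counts at the published GPT-4o mini rates:

```python
# Minimal sketch: estimating a GPT-4o mini request cost from token counts,
# using the published rates of $0.15 / 1M input tokens and $0.60 / 1M output tokens.

INPUT_PRICE_PER_TOKEN = 0.15 / 1_000_000   # USD per input token
OUTPUT_PRICE_PER_TOKEN = 0.60 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# The example from above: ~1,500 input tokens and ~750 output tokens
print(f"${estimate_cost(1_500, 750):.6f}")  # -> $0.000675
```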

Deploy the GPT-4o mini API Faster with Apidog

Managing and testing APIs is a critical part of working with the GPT-4o mini API. Apidog, a leading API management and development tool, streamlines this process, making it more convenient and efficient.

What is Apidog?

Apidog is a comprehensive, all-in-one platform built for API design, documentation, debugging, mocking, and testing. To enhance the user experience, Apidog features an API Hub that aggregates all popular APIs (for example, Twitter, Instagram, GitHub, Notion, Google, and of course, OpenAI), streamlining the discovery, management, and integration process.

Discover all the APIs you need for your projects at Apidog's API Hub, including the Twitter API, Instagram API, GitHub REST API, Notion API, Google API, and more.

This centralized repository allows developers to find, access, and manage multiple APIs with ease, significantly simplifying their workflow and improving efficiency.

API Hub developed by Apidog

To implement the GPT-4o mini API faster, find the OpenAI API documentation in Apidog's API Hub and begin testing and deployment immediately.

OpenAI API documentation created by Apidog

Prerequisite: Obtain an OpenAI API Key

To use the GPT-4o mini API, an OpenAI API key is required. Follow these steps to acquire your API key:

Step 1. Sign Up for an OpenAI Account:

  • Create an OpenAI account at https://platform.openai.com if you don't already have one.

Step 2. Generate Your OpenAI API Key:

  • Access the API Keys page on the OpenAI platform, log in, and click "Create new secret key" to generate a new API key. Record and securely store it, as you won't be able to view it again.
Create a new OpenAI API key on the OpenAI platform

Testing & Managing the GPT-4o mini API with Apidog

Apidog simplifies the use of OpenAI APIs by providing a comprehensive OpenAI API project that includes all available endpoints. Currently, GPT-4o mini can be accessed through the Chat Completions API, Assistants API, and Batch API. Follow these steps to start using the GPT-4o mini API:

Step 1: Access the OpenAI API Project on Apidog:

Click "Run in Apidog" to import OpenAI project to the Apidog desktop
  • Once the project is imported into Apidog, select the Chat Completions API endpoint from the left menu.
Select "Create chat completion" at Apidog to test the endpoint
  • On the new request screen, enter the HTTP method and endpoint URL as per the OpenAI API specification.
  • In the "Body" tab, write your message to ChatGPT in JSON format. Make sure to specify the model as "GPT-4O mini" by including "model": "gpt-4o-mini".
Configure the OpenAI model for endpoint testing
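
For reference, a minimal request body for the Create chat completion endpoint could look like the following; the messages here are placeholders you would replace with your own:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello! What can GPT-4o mini do?"}
  ]
}
```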

Step 2: Authenticate and Send the Request:

  • In the "Header" tab, add the Authorization parameter.
Add the Authorization parameter in the "Header" tab at Apidog
  • Enter your OpenAI API key as the value (in the form Bearer YOUR_API_KEY) and click the "Send" button. This way, you will get the API response and can validate whether the endpoint is working.
Send a GPT-4o mini API request with Apidog

Pro tip: Apidog allows you to store your OpenAI API key as an environment variable. This enables you to reference the API key easily in future requests without re-entering it.

Store the API key as an environment variable for later use

By leveraging Apidog, you can efficiently manage, test, and use the GPT-4o mini API, enabling more seamless and effective API integration for your projects.


Summary

The introduction of GPT-4o mini by OpenAI marks a significant milestone in the field of artificial intelligence. By offering advanced AI capabilities at a substantially reduced cost, GPT-4o mini makes it possible for a wider audience to leverage its powerful features. Its strong performance, versatility, and affordability make it an ideal solution for various applications, from real-time customer support to complex data analysis. Tools like Apidog further simplify the management, testing, and deployment of the GPT-4o mini API, ensuring seamless integration and an efficient workflow in AI-driven projects.

