
OpenAI API Pricing | Automated Cost Calculation

The OpenAI API is currently one of the most widely used APIs in the AI field. However, its services are paid. In this article, we break down the pricing for each OpenAI model and introduce a method to automatically calculate the number of tokens (and their cost) consumed when using the OpenAI API.

The OpenAI API is the Application Programming Interface for services under the OpenAI brand, such as ChatGPT and DALL·E 3. Backed by such powerful AI models, it has become one of the most heavily used APIs in its field. However, it is not free to use.

💡
If you need to use or test OpenAI APIs, Apidog, an easy-to-use API development tool, is an efficient way to access them.

By following the guide in this article, you can call the OpenAI API with Apidog and automatically calculate the number of tokens and the cost it consumes at the same time. So download Apidog for free by clicking the button below! 👇 👇 👇

This article provides a breakdown of the pricing for each of OpenAI's API models, as well as an automated method to calculate the number of tokens consumed, and their cost, when using the OpenAI API.

What is OpenAI API?

The OpenAI API is the Application Programming Interface provided by OpenAI. Through it, developers can access AI model services such as the GPT and DALL·E 3 APIs programmatically.

With the OpenAI API, developers can build apps on top of OpenAI's AI models such as ChatGPT and DALL·E 3, or use these models in their own products, all without having to go through the web interface to reach that functionality.
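For example, here is a minimal Node.js sketch of a Chat Completions request; this is an illustration only, and it assumes Node.js 18+ (for the global fetch) and an API key stored in the OPENAI_API_KEY environment variable:

// Minimal sketch of calling the OpenAI Chat Completions endpoint (Node.js 18+).
// Assumes your API key is available in the OPENAI_API_KEY environment variable.
(async () => {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: "Hello!" }],
    }),
  });

  const data = await response.json();
  console.log(data.choices[0].message.content); // the model's reply
})();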

Getting Into Details: OpenAI API Pricing

At the time of writing, the AI model families that provide API services under OpenAI include:

  • GPT-4o
  • GPT-4 Turbo
  • GPT-4
  • GPT-3.5 Turbo
  • Assistants API
  • Fine-tuning models
  • Embedding models and base models
  • Image generation models (DALL·E 3)
  • Audio models, and more.

GPT-4o

GPT-4o (“o” for “omni”) is the latest model as of May 13, 2024. It is also OpenAI's fastest and most affordable flagship model, and it offers the best vision capabilities and the strongest performance across non-English languages of any OpenAI model. GPT-4o is available in the OpenAI API to paying customers.

GPT-4 Turbo Pricing


GPT-4 Turbo comes in three models: gpt-4-0125-preview, gpt-4-1106-preview, and gpt-4-1106-vision-preview. All are priced at $10.00 per 1M input tokens and $30.00 per 1M output tokens.

Although GPT-4 Turbo is a high-performance natural language processing model, its usage fee is correspondingly higher, reflecting that better performance.

GPT-4 Pricing


There are two pricing options for the GPT-4 language model.

  1. gpt-4: the standard version of GPT-4, priced at $30.00 per 1M input tokens and $60.00 per 1M output tokens.
  2. gpt-4-32k: an extended version of GPT-4 with a longer context length, priced at $60.00 per 1M input tokens and $120.00 per 1M output tokens, double the price of the standard gpt-4.

GPT-4's broad general and domain knowledge, and its ability to follow complex natural-language instructions to solve difficult problems accurately, are worth noting. However, to get the larger-context gpt-4-32k, you will have to pay double the price of the standard version.

GPT-3.5 Turbo Pricing


The GPT-3.5 Turbo family has two members. gpt-3.5-turbo-0125 is the flagship model: it supports a 16K context window and is optimized for dialog, at $0.50 per 1M input tokens and $1.50 per 1M output tokens. gpt-3.5-turbo-instruct is an instruct model that only supports a 4K context window, at $1.50 per 1M input tokens and $2.00 per 1M output tokens.

Assistants API Pricing


Developers can use the Assistants API and its tools to build their own AI assistant applications. Usage is billed at the token rates of the selected language model, and the Retrieval (search) feature additionally incurs a file storage fee for each assistant. The two tools, Code Interpreter and Retrieval, are priced at $0.03 per session and $0.20 per GB per assistant per day, respectively.

Fine-tuning Model Pricing


When using fine-tuned models, you are charged for training the model and then for the tokens you use in requests to it. For gpt-3.5-turbo, davinci-002, and babbage-002, the respective fees are $8.00, $6.00, and $0.40 per 1M training tokens; $3.00, $12.00, and $1.60 per 1M input tokens; and $6.00, $12.00, and $1.60 per 1M output tokens.

Embedding and Base Models Pricing


The embedding models are quite affordable: text-embedding-3-small costs $0.02 per 1M tokens, text-embedding-3-large $0.13 per 1M tokens, and ada v2 $0.10 per 1M tokens.

The base model fee is $2.00 per 1M tokens for davinci-002 and $0.40 per 1M tokens for babbage-002.

Image Generation Model (DALL·E 3) Pricing

DALL-E 3's standard quality 1024x1024 resolution costs $0.04 per image, and the same resolution in HD costs $0.08. DALL-E 2 is cheaper at lower resolutions: 1024x1024 at $0.02, 512x512 at $0.018, and 256x256 at $0.016.

Tabulated Summary of OpenAI API Model Pricing

MODEL NAME                          FEE PER 1M INPUT TOKENS        FEE PER 1M OUTPUT TOKENS
GPT-4 Turbo                         $10.00                         $30.00
GPT-4
  - gpt-4                           $30.00                         $60.00
  - gpt-4-32k                       $60.00                         $120.00
GPT-3.5 Turbo
  - gpt-3.5-turbo-0125              $0.50                          $1.50
  - gpt-3.5-turbo-instruct          $1.50                          $2.00
Assistants API
  - Code Interpreter                $0.03 / session
  - Retrieval                       $0.20 / GB / assistant / day
Fine-tuning models (training / input / output, per 1M tokens)
  - gpt-3.5-turbo                   $8.00 / $3.00 / $6.00
  - davinci-002                     $6.00 / $12.00 / $12.00
  - babbage-002                     $0.40 / $1.60 / $1.60
Embedding models (per 1M tokens)
  - text-embedding-3-small          $0.02
  - text-embedding-3-large          $0.13
  - ada v2                          $0.10
Base models (per 1M tokens)
  - davinci-002                     $2.00
  - babbage-002                     $0.40
Image generation models (per image)
  - DALL-E 3, standard 1024x1024    $0.04
  - DALL-E 3, HD 1024x1024          $0.08
  - DALL-E 2, 1024x1024             $0.02
  - DALL-E 2, 512x512               $0.018
  - DALL-E 2, 256x256               $0.016
Audio models
  - Whisper                         $0.006 / minute (rounded up to the nearest second)
  - TTS                             $15.00 / 1M characters
  - TTS HD                          $30.00 / 1M characters

If you want to know the up-to-date usage fees for all models, visit OpenAI's official pricing page and check the API price list.
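To make these numbers concrete, here is a small Node.js sketch that estimates the USD cost of a single request from the per-1M-token prices in the table above; the token counts used in the example are made-up illustrative values:

// Estimate the USD cost of one request from per-1M-token prices.
function estimateCostUSD(inputTokens, outputTokens, inputPricePer1M, outputPricePer1M) {
  return (inputTokens / 1000000) * inputPricePer1M
       + (outputTokens / 1000000) * outputPricePer1M;
}

// Example: a GPT-4 Turbo call with 1,200 input tokens and 800 output tokens,
// using the $10.00 / $30.00 per 1M token prices from the table.
const cost = estimateCostUSD(1200, 800, 10.00, 30.00);
console.log("Estimated cost: $" + cost.toFixed(4)); // Estimated cost: $0.0360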

Automated Cost Calculation Prerequisites With Apidog

To work with APIs more efficiently, we strongly recommend using Apidog.

Apidog is an all-in-one API development platform that supports API developers across an API's entire lifecycle. This means Apidog has you covered from design all the way through testing and documentation.


To create an automatic calculator for the cost of running the OpenAI API, we need a third-party conversion library to accurately convert inputs and outputs into token counts.

We will also be able to convert the result into any currency; let's take JPY (Japanese yen) as an example.

Tokens Count Conversion Library

This method uses the openai-gpt-token-counter library to convert input and output data into token counts during the API debugging process.

Example Node.js code:

const openaiTokenCounter = require('openai-gpt-token-counter');

const text = process.argv[2]; // Get the text to count from the command-line arguments
const model = "gpt-4"; // Replace with the OpenAI model you want to use

const tokenCount = openaiTokenCounter.text(text, model);

// Print only the token count so the calling script can read it back
console.log(`${tokenCount}`);

You should then save this Node.js script as gpt-tokens-counter.js and place it in Apidog's external programs directory so that it can be called.

Next, you will need to install OpenAI GPT Token Counter on your computer. To do so, you can use the following command in your terminal:

npm install openai-gpt-token-counter
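Once the package is installed, you can sanity-check the script from the terminal by passing it a test string (the exact count depends on the model's tokenizer, so treat the printed number as illustrative):

node gpt-tokens-counter.js "This is a test prompt."

The script prints a single integer token count, which is exactly what the Apidog scripts below read back.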

Real-Time Exchange Rate API

After obtaining the token counts for the input and output, it is necessary to estimate the cost in JPY using a real-time exchange rate API. This article calls the Currencylayer API to get the live rate; sign up for an account and obtain an API key.
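As a quick check, the following Node.js sketch (Node.js 18+ with global fetch; replace YOUR-API-KEY with your own Currencylayer key) shows the call and the quotes.USDJPY field that the scripts below rely on:

// Fetch the live USD -> JPY exchange rate from Currencylayer.
// Replace YOUR-API-KEY with the API key from your Currencylayer account.
(async () => {
  const url = "http://apilayer.net/api/live?access_key=YOUR-API-KEY&currencies=JPY&source=USD&format=1";
  const res = await fetch(url);
  const data = await res.json();
  // The response body contains a "quotes" object, e.g. { "USDJPY": 151.23 }
  console.log("USD/JPY rate:", data.quotes.USDJPY);
})();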

Converting Input Values into Tokens Using Apidog

Input values can be understood as the questions and prompts that the user provides when querying the AI application. To capture them, add a custom script in the Pre-Processors section that extracts the query content from the request body and converts it into a token count.


This is the sample code for adding the token value conversion script in the Pre-Processors section:

try {
  var jsonData = JSON.parse(pm.request.body.raw);
  var content = jsonData.messages[0].content; // obtains the content of messages
  var result_input_tokens_js = pm.execute('./gpt-tokens/gpt-tokens-counter.js',[content])
  console.log(content);
  pm.environment.set("RESULT_INPUT_TOKENS", result_input_tokens_js);
  console.log("Input Tokens count: " + pm.environment.get("RESULT_INPUT_TOKENS"));
} catch (e) {
    console.log(e);
}

After pressing Send, the calculated input token count should be visible in Apidog's console.


Convert Input Tokens into JPY Cost

After obtaining the number of tokens consumed by the input, it is necessary to request a real-time exchange rate API to obtain the USD/JPY conversion factor. The token count is then multiplied by the per-1K-token USD price and this exchange rate to calculate the actual cost in JPY. Add the following script to the Pre-Processors section:

pm.sendRequest("http://apilayer.net/api/live?access_key=YOUR-API-KEY&currencies=JPY&source=USD&format=1", (err, res) => {
  if (err) {
    console.log(err);
  } else {
    const quotes = res.json().quotes;
    const rate = parseFloat(quotes.USDJPY).toFixed(3);
    pm.environment.set("USDJPY_RATE", rate); 
    var USDJPY_RATE = pm.environment.get("USDJPY_RATE");
    // Retrieve the RESULT_INPUT_TOKENS variable from the previous script
    var RESULT_INPUT_TOKENS = pm.environment.get("RESULT_INPUT_TOKENS");

    // Calculate the tokens exchange rate value
    const tokensExchangeRate = 0.03; // Price of 1000 tokens in USD (with GPT-4-8k context input pricing as reference)

    // Calculate the estimated price in JPY
    const JPYPrice = ((RESULT_INPUT_TOKENS / 1000) * tokensExchangeRate * USDJPY_RATE).toFixed(2);

    pm.environment.set("INPUT_PRICE", JPYPrice); 

    console.log("Estimated cost: " + "¥" + JPYPrice);
  }
});

Extracting Output Values Using Apidog

Apidog automatically parses the returned data as SSE (Server-Sent Events) when the Content-Type of the API response contains text/event-stream.
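For context, the streamed response body that the script below parses is a series of "data:" lines. An abridged, illustrative example of an OpenAI chat completion stream looks like this (field values shortened):

data: {"id":"chatcmpl-...","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"}}]}

data: {"id":"chatcmpl-...","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" world"}}]}

data: [DONE]

Each "data:" line carries a JSON chunk whose choices[0].delta.content holds the next fragment of text, and the final "data: [DONE]" line marks the end of the stream.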

Begin by going to the Post-Processors section of the API definition and adding a custom script that extracts the response content and concatenates the completion.

// Get the response text
const text = pm.response.text()
// Split the text into lines
var lines = text.split('\n');
// Create an empty array to store the "content" parameter
var contents = [];
// Iterate through each line
for (var i = 0; i < lines.length; i++) {
    const line = lines[i];
    // Skip lines that do not start with "data:"
    if (!line.startsWith('data:')) {
        continue;
    }
    // Try to parse the JSON data
    try {
        var data = JSON.parse(line.substring(5).trim());  // Remove the leading "data: "
        // Get the "content" parameter from the "choices" array and add it to the array
        contents.push(data.choices[0].delta.content);
    } catch (e) {
        // Ignore the current line if it is not valid JSON data
    }
}
// Join the "content" parameters using the join() method
var result = contents.join('');
// Display the result in the "Visualize" tab of the body
pm.visualizer.set(result);
// Print the result to the console
console.log(result);

After creating the request, you can retrieve the complete response content in the console!

Converting Output Values into Tokens Using Apidog

Once you have received the response content, it needs to be converted into a token count. This is made possible with the third-party library introduced earlier.

Add a custom script in the Post-Processors section so Apidog can call the external gpt-tokens-counter.js script to obtain the token count.

The token counting is handled by the openai-gpt-token-counter package introduced earlier, which counts the number of OpenAI tokens in a string and supports all OpenAI text models (text-davinci-003, gpt-3.5-turbo, gpt-4); install it with npm i openai-gpt-token-counter if you have not already.

The complete Post-Processors script (the response-parsing code from the previous step, plus the token count calculation at the end) looks like this; the numbers it prints to the console are what you will use to estimate the cost:

// Get the response text
const text = pm.response.text()
// Split the text into lines
var lines = text.split('\n');
// Create an empty array to store the "content" parameter
var contents = [];
// Iterate through each line
for (var i = 0; i < lines.length; i++) {
    const line = lines[i];
    // Skip lines that do not start with "data:"
    if (!line.startsWith('data:')) {
        continue;
    }
    // Try to parse the JSON data
    try {
        var data = JSON.parse(line.substring(5).trim());  // Remove the leading "data: "
        // Get the "content" parameter from the "choices" array and add it to the array
        contents.push(data.choices[0].delta.content);
    } catch (e) {
        // Ignore the current line if it is not valid JSON data
    }
}
// Join the "content" parameters using the join() method
var result = contents.join('');
// Display the result in the "Visualize" tab of the body
pm.visualizer.set(result);
// Print the result to the console
console.log(result);

// Calculate the number of output tokens.
var RESULT_OUTPUT_TOKENS = pm.execute('./gpt-tokens/gpt-tokens-counter.js', [result])
pm.environment.set("RESULT_OUTPUT_TOKENS", RESULT_OUTPUT_TOKENS);

console.log("Output Tokens count: " + pm.environment.get("RESULT_OUTPUT_TOKENS")); 

Convert Output Tokens into JPY Cost

Similar to the input cost calculation in the previous section, the actual cost (JPY) is obtained by multiplying the token count by the per-1K-token USD price and the exchange rate.

Add the following script in the post-processing operation:

pm.sendRequest("http://apilayer.net/api/live?access_key=YOUR-API-KEY&currencies=JPY&source=USD&format=1", (err, res) => {
  if (err) {
    console.log(err);
  } else {
    const quotes = res.json().quotes;
    const rate = parseFloat(quotes.USDJPY).toFixed(3);
    pm.environment.set("USDJPY_RATE", rate); 
    var USDJPY_RATE = pm.environment.get("USDJPY_RATE");
    // Get the RESULT_OUTPUT_TOKENS variable from the previous postman script
    var RESULT_OUTPUT_TOKENS = pm.environment.get("RESULT_OUTPUT_TOKENS");

    // Calculate tokens exchange rate
    const tokensExchangeRate = 0.06; // USD price per 1000 tokens (based on GPT-4-8k context output pricing)

    // Calculate estimated price in JPY
    const JPYPrice = ((RESULT_OUTPUT_TOKENS / 1000) * tokensExchangeRate * USDJPY_RATE).toFixed(2);

    pm.environment.set("OUTPUT_PRICE", JPYPrice); 

    console.log("Output cost (JPY): " + JPYPrice + "円");
  }
});

Calculate the Total Cost in JPY

Finally, add a custom script in the post-processing phase that automatically calculates the total cost of inputs and outputs.

// Summing up input and output costs

const INPUTPrice = Number(pm.environment.get("INPUT_PRICE"));
// Get the input price variable and convert it to a number

const OUTPUTPrice = Number(pm.environment.get("OUTPUT_PRICE"));
// Get the output price variable and convert it to a number

console.log("Total cost: " + "¥" + (INPUTPrice + OUTPUTPrice));
// Print the total cost: the sum of the input price and output price.

This allows you to estimate the approximate cost of the current request while debugging the API.


Work On OpenAI APIs With Apidog

As mentioned before, Apidog is a comprehensive API tool that provides API design, documentation, testing, and debugging all within a single application.

Beyond OpenAI's, you can now find and access countless other third-party API projects through Apidog's API Hub service.


To access the OpenAI API project on API Hub, click the link below. You will gain access to all the APIs provided by OpenAI!

Open AI (ChatGPT) API project: https://apidog.com/apidoc/project-370474

How to Call and Test OpenAI API Online

To test OpenAI's API, follow these instructions:

Step 1: Once you access the OpenAI API project page, select the API you want to use from the menu on the left and click the "Try it out" button on the right panel.

Step 2: Using the API requires access privileges to OpenAI, so obtain an API key and enter it as the OpenAI API_KEY here.


With Apidog, sensitive information such as API keys is never stored in the cloud; it is stored locally instead, so you can trust Apidog with your credentials.


Step 3: Press the Send button to send the request to OpenAI's server and receive a response.

If you would like to experiment further and customize API requests, you can press Run in Apidog to open the project in the Apidog app.


Conclusion

If you want to implement the OpenAI API in your application, you first need to understand what you aim to achieve with it. Then familiarize yourself with OpenAI's various APIs by reading their documentation. The available OpenAI models are not all the same; they offer different specifications, performance levels, and prices.

You can also easily access OpenAI APIs using Apidog, an easy-to-use API management tool. If you need to use or test OpenAI APIs, Apidog is a strong choice, especially with the OpenAI project on API Hub ready for users to learn from and implement. By following the guide in this article, you can call the OpenAI API with Apidog and automatically calculate the number of tokens and the cost it consumes at the same time!
