OpenAI GPT-3.5 Turbo and GPT-4 (Lower Pricing & New Model)

OpenAI announced a range of updates, including improved function calling capabilities, extended context windows, and lower prices.

David Demir

Updated on November 29, 2024

On June 13, 2023, OpenAI announced a range of updates to its suite of language models, including enhanced steerability, improved function calling capabilities, extended context windows, and lower prices. ChatGPT, a sibling model to InstructGPT trained to follow instructions and provide detailed responses, has gained worldwide popularity in just over six months since its launch on November 30, 2022. This update covers six major areas, and we're excited to dive into the details with you.

  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models

Function calling

The latest updates to GPT-4-0613 and GPT-3.5-turbo-0613 allow developers to describe functions to the models, which can then output a JSON object containing arguments to call those functions. This provides a new way for GPT models to connect with external tools and APIs to generate structured data output.

The models have been fine-tuned to detect when a function needs to be called and can respond with JSON that adheres to the function signature. This allows developers to create chatbots that answer questions by calling external tools, convert natural language into API calls or database queries, and extract structured data from text. The new API parameters support calling specific functions, and developers can refer to the developer documentation to add evals to improve function calling.
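Alongside the functions list, the request accepts a function_call field that controls whether and which function gets called. A brief sketch of the values it takes, based on the API reference for these models (the field would be merged into the request body shown in the example below):

# Sketch of the function_call request field. Treat this as illustrative;
# the full get_current_weather schema appears in the example below.
force_specific = {"function_call": {"name": "get_current_weather"}}  # always call this function
let_model_decide = {"function_call": "auto"}                         # default: the model decides
plain_text_only = {"function_call": "none"}                          # never call a function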

Function calling example

Step 1·OpenAI API

Call the model with functions and the user’s input

Sample request code:

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'

You can use Apidog to send this cURL request. Apidog is a powerful API development and testing tool that, like Postman, lets you import and send cURL commands directly, which is handy if you already work with cURL in a terminal or command line interface. With its user-friendly interface, you can send cURL requests alongside other request types, test APIs efficiently, and quickly identify and debug issues in your API development workflow.

Whether you are an experienced developer or just starting with API testing, Apidog's cURL request functionality is an essential tool to have at your disposal.

You need to replace $OPENAI_API_KEY with your actual API key when pasting the cURL command.

After clicking the "Send" button, you will receive the model's response.

Complete response:

{
  "id": "chatcmpl-123",
  ...
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "function_call": {
        "name": "get_current_weather",
        "arguments": "{ \"location\": \"Boston, MA\"}"
      }
    },
    "finish_reason": "function_call"
  }]
}
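Before moving on to Step 2, you need to pull the function name and arguments out of this response. A minimal sketch in Python, using only the fields shown above (note that the arguments field arrives as a JSON-encoded string and needs a second decode):

import json

# The Step 1 response body shown above, abbreviated to the fields used here.
completion = {
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": None,
            "function_call": {
                "name": "get_current_weather",
                "arguments": "{ \"location\": \"Boston, MA\"}",
            },
        },
        "finish_reason": "function_call",
    }]
}

message = completion["choices"][0]["message"]
if message.get("function_call"):
    name = message["function_call"]["name"]                   # "get_current_weather"
    args = json.loads(message["function_call"]["arguments"])  # {"location": "Boston, MA"}
    print(name, args)
else:
    print(message["content"])  # the model answered directly instead of calling a function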

Step 2·Third party API

Use the function name and arguments from the model's response to call your own API, for example:

curl https://weatherapi.com/...
{ "temperature": 22, "unit": "celsius", "description": "Sunny" }

Step 3·OpenAI API

Send the response back to the model to summarize.

Sample request code:

curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"},
    {"role": "assistant", "content": null, "function_call": {"name": "get_current_weather", "arguments": "{ \"location\": \"Boston, MA\"}"}},
    {"role": "function", "name": "get_current_weather", "content": "{\"temperature\": "22", \"unit\": \"celsius\", \"description\": \"Sunny\"}"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'

You can also use Apidog to send this request.

After clicking "send" button, you will get successful response.

Complete response:

{
    "id": "chatcmpl-******",
    "object": "chat.completion",
    "created": *****,
    "model": "gpt-3.5-turbo-0613",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius."
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 127,
        "completion_tokens": 17,
        "total_tokens": 144
    }
}

The model's final answer: "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius."
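Putting the three steps together, here is a rough end-to-end sketch in Python using the requests library. The get_current_weather implementation is a hypothetical local stand-in for the third-party weather call, and OPENAI_API_KEY is assumed to be set in the environment:

import json
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

FUNCTIONS = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

def get_current_weather(location, unit="celsius"):
    # Hypothetical stand-in for the real weather API call.
    return {"temperature": 22, "unit": unit, "description": "Sunny"}

messages = [{"role": "user", "content": "What is the weather like in Boston?"}]

# Step 1: let the model decide whether to call the function.
first = requests.post(API_URL, headers=HEADERS, json={
    "model": "gpt-3.5-turbo-0613", "messages": messages, "functions": FUNCTIONS,
}).json()
message = first["choices"][0]["message"]

if message.get("function_call"):
    # Step 2: run the function with the arguments the model produced.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)

    # Step 3: send the result back so the model can answer in natural language.
    messages += [message, {
        "role": "function",
        "name": message["function_call"]["name"],
        "content": json.dumps(result),
    }]
    second = requests.post(API_URL, headers=HEADERS, json={
        "model": "gpt-3.5-turbo-0613", "messages": messages, "functions": FUNCTIONS,
    }).json()
    print(second["choices"][0]["message"]["content"])
else:
    print(message["content"])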

Bigger Context Window

With the latest updates, GPT-4-32k-0613 and GPT-3.5-turbo-16k can handle much larger texts, which makes them more effective for applications that process substantial amounts of text data. GPT-3.5-turbo-16k supports roughly 20 pages of text in a single request, four times the context length of the standard 4k model. In other words, users can now send around 20 pages of text in one request, a significant boost for working with larger documents.
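As a rough sketch of what the longer window enables, the request below sends a long document to gpt-3.5-turbo-16k for summarization (Python with the requests library; the file name and prompt are placeholders, and OPENAI_API_KEY is assumed to be set in the environment):

import os
import requests

# Read a long document (placeholder file name) that would not fit in the
# standard 4k context window.
with open("quarterly_report.txt", "r", encoding="utf-8") as f:
    long_text = f.read()

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo-16k",
        "messages": [
            {"role": "system", "content": "Summarize the user's document in five bullet points."},
            {"role": "user", "content": long_text},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])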

New Models

OpenAI recently announced updates to their GPT-4 and GPT-3.5 Turbo models. GPT-4-0613 features an updated and improved model with function-calling capabilities. Meanwhile, GPT-4-32k-0613 has the same function-calling improvements as GPT-4-0613, along with an extended context length for better comprehension of larger texts.

The updates are set to enable more people to try GPT-4, and OpenAI is inviting many more from the waitlist over the coming weeks, with the intent to remove the waitlist entirely with this model. Similarly, GPT-3.5-Turbo-0613 has added function calling and more reliable steerability with the system message.

GPT-3.5-Turbo-16k offers four times the context length of GPT-3.5-Turbo at twice the price: $0.003 per 1K input tokens and $0.004 per 1K output tokens. This update means the model can now support ~20 pages of text in a single request.

OpenAI will upgrade and deprecate the initial models of GPT-4 and GPT-3.5 Turbo that were announced in March. Applications using the stable model names will automatically be upgraded to the new models on June 27th. Developers can also use the older models until September 13th by specifying the appropriate model names in their API requests. OpenAI welcomes feedback from developers to ensure a smooth transition.

In view of these updates, OpenAI's models are becoming more powerful and user-friendly, providing developers with exciting new capabilities to explore. From the improved function calling to the larger context length, these updates push the limits of natural language processing and put cutting-edge technology in the hands of developers worldwide.

Lower Pricing

OpenAI has recently announced a significant price reduction for its popular embeddings model, which has seen a 75% reduction in costs, down to $0.0001 per 1K tokens. This update is part of OpenAI's continuing efforts to make their systems more efficient and pass those savings on to developers.

The popular chat model GPT-3.5 Turbo that powers ChatGPT for millions of users has also received a price reduction, with a 25% decrease in input token costs. Developers can now use this model for just $0.0015 per 1K input tokens and $0.002 per 1K output tokens. The 16k context version, GPT-3.5-Turbo-16k, is priced at $0.003 per 1K input tokens and $0.004 per 1K output tokens, enabling developers to leverage this powerful model for larger text processing.
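As a quick back-of-the-envelope check using these prices and the usage block from the function calling example above (127 prompt tokens, 17 completion tokens):

# Token prices announced for gpt-3.5-turbo, in dollars per 1K tokens.
PRICE_INPUT_PER_1K = 0.0015
PRICE_OUTPUT_PER_1K = 0.002

# Usage reported in the earlier response: 127 prompt + 17 completion tokens.
prompt_tokens, completion_tokens = 127, 17

cost = (prompt_tokens / 1000) * PRICE_INPUT_PER_1K + (completion_tokens / 1000) * PRICE_OUTPUT_PER_1K
print(f"${cost:.6f}")  # about $0.00022 for the whole request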

OpenAI values feedback from developers, and their suggestions are integral to the continued evolution of the platform. These latest updates are set to provide increased value and wider application opportunities for developers using OpenAI's models. With the lower pricing and other new features, OpenAI continues to be a leader in the natural language processing field.

GPT-4 vs GPT-3.5-turbo

GPT-3.5-turbo stands out with its lower cost, sufficient performance for general applications, and lower resource requirements, making it ideal for those with budget constraints or operating in resource-constrained environments. However, its limited context window and less powerful function calling might not be suitable for advanced applications.

On the other hand, GPT-4 offers enhanced function calling and a larger context window, which are beneficial for complex applications that need to retain extensive context. However, it comes at a higher price point and requires greater computational resources. Choosing the right model for your needs maximizes benefits while staying within budget.

Model deprecation

OpenAI has begun upgrading and deprecating the initial versions of gpt-4 and gpt-3.5-turbo announced in March. The stable model names, including gpt-3.5-turbo, gpt-4, and gpt-4-32k, will automatically be upgraded to the new models on June 27th. To compare model performance between versions, OpenAI's Evals library supports public and private evals that demonstrate how the model changes will affect your use cases.

If developers require more time to transition to the new models, they can continue using the older models by specifying gpt-3.5-turbo-0301, gpt-4-0314, or gpt-4-32k-0314 in their API requests. The older models will remain available until September 13th, after which requests for those model names will fail. To stay up to date on model deprecations, visit OpenAI's model deprecation page. As this is the first update to these models, OpenAI welcomes developer feedback to ensure a smooth transition.
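If you need to stay on the March snapshot during the transition, pinning the dated model name in the request is all it takes. A minimal sketch (Python with requests; OPENAI_API_KEY assumed to be set in the environment):

import os
import requests

# Pin the March snapshot explicitly; after September 13th this model name
# will stop working and the request should switch to "gpt-3.5-turbo".
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo-0301",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["model"])  # confirms which snapshot actually served the request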

Conclusion

OpenAI has released new models and features that will enable developers to build powerful applications. The reduced pricing is particularly appealing, as it allows for more experimentation with web apps while minimizing expenses. It will be interesting to see how other developers capitalize on these updates.


