
A Developer’s Guide to Testing LLM AI APIs with SSE

Discover how Apidog's upgraded SSE debugging enhances real-time interaction with AI models. Streamline testing, visualize AI’s thought process, and customize response handling for AI APIs, ensuring an efficient and transparent debugging process.

Oliver Kingsley

Updated on February 20, 2025

As AI technology continues to evolve, the ability to interact with large language models (LLMs) in real time has become essential for developers and teams working with AI-driven APIs. Models such as OpenAI's GPT series, Google's Gemini, and Anthropic's Claude support streaming output, allowing users to see AI responses as they're generated. This eliminates long wait times and allows for more dynamic, efficient interactions.

Streaming output typically uses the SSE (Server-Sent Events) format, which ensures continuous response delivery. This approach provides a more interactive way to engage with AI models in real time. Apidog, a leading API development tool, has been at the forefront of supporting SSE debugging. With the growing use of AI APIs, Apidog has enhanced its SSE debugging capabilities to better serve AI API endpoints, enabling developers to view AI responses as they’re generated. This advancement offers significant improvements, especially when working with complex models.
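For context, a streaming completion is delivered over a text/event-stream connection as a series of data: lines, one chunk of the reply per event, terminated by a sentinel such as [DONE]. The abridged example below follows the OpenAI-style chunk format that DeepSeek and many other providers reuse; real chunks carry additional fields such as id and model.

data: {"choices":[{"delta":{"content":"Hello"}}]}

data: {"choices":[{"delta":{"content":" world"}}]}

data: [DONE]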

In this article, we'll explore how Apidog's enhanced SSE debugging feature can revolutionize the way developers test and interact with AI APIs.

💡 Pro Tip: To experience the full potential of the Apidog SSE feature, make sure Apidog is updated to the latest version (≥ 2.6.49) before you start.

Three Steps to Test LLM APIs with Apidog SSE Debugging

Apidog’s upgraded SSE debugging feature allows developers to see AI model responses as they are streamed in real time. It also automatically merges fragmented data into clear, readable text, making it easier to understand the AI's thought process—especially when dealing with complex models like DeepSeek R1.

Debugging SSE using Apidog

Here’s how you can get started with this powerful feature:

Step 1: Create an HTTP Request

Ensure that your Apidog version is 2.6.49 or newer.

Start by opening Apidog and creating a new HTTP project. Add a new endpoint for the AI model you want to test, and configure your API key in the request settings.

Add a new endpoint using Apidog

For example, to interact with DeepSeek’s API, you can copy the following cURL request into the endpoint path field.

Note: The stream field must be set to true to enable SSE responses.

curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {{API_KEY}}" \
  -d '{
        "model": "deepseek-chat",
        "messages": [
          {"role": "user", "content": "Write a Python code to sum the numbers from 1 to 100."}
        ],
        "stream": true
      }'

Apidog will automatically populate the necessary settings.

Paste the cURL command into the endpoint path field to create a new endpoint
Related guide: How to Use the Deepseek API (R1 & V3): A Step-by-Step Guide with Screenshots. After logging into the Deepseek Open Platform, create an API key and save it in a secure location; by integrating the DeepSeek API with Apidog, you can quickly complete API debugging.

Step 2: Send the Request

Upon sending the request, Apidog automatically checks the response’s Content-Type. If it contains text/event-stream, Apidog will parse the response as SSE events and stream the output accordingly, allowing you to see the data unfold in real time.

Viewing SSE timeline at Apidog
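If you want to compare against the raw stream outside Apidog, a minimal Python sketch using the requests library looks roughly like the following. It assumes your key is stored in a DEEPSEEK_API_KEY environment variable and targets the same DeepSeek endpoint as above.

import os
import requests

resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",  # assumed environment variable
    },
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Write a Python code to sum the numbers from 1 to 100."}],
        "stream": True,
    },
    stream=True,  # read the body incrementally instead of waiting for the full response
)

print(resp.headers.get("Content-Type"))  # expect it to contain text/event-stream

for line in resp.iter_lines(decode_unicode=True):
    if line:             # blank lines separate SSE events
        print(line)      # each non-empty line is a raw "data: {...}" chunk

This is essentially the check Apidog performs for you: it inspects the Content-Type and switches to SSE parsing automatically.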

Step 3: View Real-Time Responses

Apidog’s Timeline view displays the streaming response content as it’s received. The system automatically merges fragmented data into readable text, presenting it in the response panel as the AI processes and generates it.

Merging SSE into a readable reply
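Conceptually, the merge step boils down to collecting each chunk's delta fragment and concatenating them in arrival order. The sketch below is a hand-rolled approximation for OpenAI-compatible chunks, not Apidog's actual implementation.

import json

chunks = [
    'data: {"choices":[{"delta":{"content":"Here"}}]}',
    'data: {"choices":[{"delta":{"content":" is the"}}]}',
    'data: {"choices":[{"delta":{"content":" Python code."}}]}',
    'data: [DONE]',
]

reply = ""
for chunk in chunks:
    body = chunk.removeprefix("data: ")
    if body == "[DONE]":                     # end-of-stream marker in OpenAI-style APIs
        break
    delta = json.loads(body)["choices"][0]["delta"]
    reply += delta.get("content") or ""      # some chunks carry no content (e.g. role-only deltas)

print(reply)  # -> Here is the Python code.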

Customizing SSE Debugging Rules in Apidog

In some cases, Apidog's built-in Auto-Merge feature may not work as expected, especially when handling custom AI models or non-standard response formats. To address this, Apidog lets you customize how responses are processed using JSONPath Extraction Rules or Post-Processor Scripts.

Configuring JSONPath Extraction Rules

When an SSE response is in JSON format but doesn’t follow the default recognition rules (like those for OpenAI, Claude, or Gemini), you can set up JSONPath to extract the data you need.

For example, if you receive the following raw SSE response:

data: {"choices":[{"index":0,"message":{"role":"assistant","content":"H"},"logprobs":null,"finish_reason":"stop"}]}

data: {"choices":[{"index":0,"message":{"role":"assistant","content":"i"},"logprobs":null,"finish_reason":"stop"}]}

To extract the value of the message.content field, you would configure the JSONPath like this:

$.choices[0].message.content

This rule pulls the fragments H and i from the two events, which Apidog merges into Hi. With JSONPath, you have full control over how Apidog extracts data from your responses.
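To see what the rule does, here is a minimal sketch that applies the same JSONPath to the two sample events outside Apidog, using the third-party jsonpath-ng library purely for illustration (inside Apidog you only configure the expression; the tool evaluates it for you).

import json
from jsonpath_ng import parse  # pip install jsonpath-ng

raw_events = [
    'data: {"choices":[{"index":0,"message":{"role":"assistant","content":"H"},"logprobs":null,"finish_reason":"stop"}]}',
    'data: {"choices":[{"index":0,"message":{"role":"assistant","content":"i"},"logprobs":null,"finish_reason":"stop"}]}',
]

rule = parse("$.choices[0].message.content")   # the same expression configured in Apidog

fragments = []
for event in raw_events:
    payload = json.loads(event.removeprefix("data: "))   # strip the SSE "data: " prefix
    fragments.extend(match.value for match in rule.find(payload))

print("".join(fragments))  # -> Hi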

Using Post-Processor Scripts for Non-JSON SSE

For responses that aren’t in JSON format, such as plain text or XML, Apidog gives you the option to write Post-Processor Scripts. These scripts let you process and extract data from SSE streams, giving you the flexibility to handle any data format that doesn’t match traditional JSON structures.
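As an illustration of the kind of logic such a script implements, the Python sketch below pulls plain-text payloads out of a hypothetical non-JSON stream. It only demonstrates the parsing idea; Apidog's post-processor scripts run in the tool's own scripting environment rather than in Python.

raw_stream = (
    "event: token\n"
    "data: Hello\n"
    "\n"
    "event: token\n"
    "data:  world\n"
    "\n"
)

fragments = []
for line in raw_stream.splitlines():
    if line.startswith("data:"):
        payload = line[len("data:"):].removeprefix(" ")  # drop the single space after the colon, per the SSE spec
        fragments.append(payload)

print("".join(fragments))  # -> Hello world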

If you're working with a response format that isn't supported, you can also contact Apidog’s technical support team to request built-in support for the specific format.

With these customization options, Apidog ensures that you can tailor the debugging experience to suit your unique API testing needs.

Key Benefits of Apidog’s SSE Debugging for AI Models

Apidog’s innovative SSE debugging functionality brings several advantages to developers working with AI APIs. Let’s explore some of the key benefits:

  • Real-Time Response Viewing: The ability to see responses unfold in real-time improves the efficiency of debugging, saving time spent waiting for full API responses.
  • Automatic Merging of Responses: Apidog automatically merges streaming fragments into readable text for compatible AI models, such as those following the OpenAI, Gemini, or Claude formats.
  • Thought Process Visualization: For reasoning models like DeepSeek R1, Apidog even displays the model's reasoning process in real time. This provides a more transparent view of how the model generates its responses, helping developers fine-tune and improve the interaction (see the abridged example after this list).
visualize the thought process of the AI model
  • Customizable Merging Rules: Apidog provides flexibility by allowing developers to define their own merging rules when the automatic merging feature fails. This ensures a more tailored solution, accommodating various response formats.
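As a concrete illustration of the thought-process view: when streaming from DeepSeek's reasoner model, the chain of thought arrives in a reasoning_content field of each delta before the final answer arrives in content. The abridged chunks below follow DeepSeek's published format; other providers may use different field names.

data: {"choices":[{"delta":{"reasoning_content":"The user wants the sum 1 + 2 + ... + 100, which equals 5050."}}]}

data: {"choices":[{"delta":{"content":"The sum of the numbers from 1 to 100 is 5050."}}]}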

Why Apidog’s SSE Debugging is a Game-Changer for AI Development

With the rise of AI-driven applications, developers need tools that can handle complex, real-time data interactions. Apidog’s SSE debugging feature is a game-changer because it:

  • Streamlines AI model testing: Real-time visualization of responses and thought processes makes it easier for developers to test and refine AI model interactions.
  • Boosts efficiency: Automatic merging of fragmented responses saves time, improving workflow and reducing the risk of errors.
  • Enhances transparency: Visualizing AI’s thought process in real-time offers valuable insights into the reasoning behind responses, which is crucial for debugging and optimization.
  • Provides flexibility: With custom merging rules and scripts, Apidog ensures that developers can work with any AI model and response format seamlessly.

Embrace Real-Time AI API Debugging with Apidog

Apidog’s SSE debugging feature is not just an enhancement; it’s a powerful tool that empowers developers to debug and interact with AI models more efficiently and transparently. By offering real-time merging of streaming responses and displaying the AI’s reasoning process, Apidog significantly simplifies the testing and development process for AI APIs.
