As AI technology continues to evolve, the ability to interact with large language models (LLMs) in real time has become essential for developers and teams working with AI-driven APIs. Models from providers such as OpenAI, Google (Gemini), and Anthropic (Claude) support streaming output, allowing users to see responses as they’re generated. This eliminates long wait times and makes interactions more dynamic and efficient.
Streaming output typically uses the SSE (Server-Sent Events) format, which delivers the response incrementally as a stream of small events over a single HTTP connection. This approach provides a more interactive way to engage with AI models in real time. Apidog, a leading API development tool, has been at the forefront of supporting SSE debugging. With the growing use of AI APIs, Apidog has enhanced its SSE debugging capabilities to better serve AI API endpoints, enabling developers to view AI responses as they’re generated. This advancement offers significant improvements, especially when working with complex models.
In this article, we'll explore how Apidog's enhanced SSE debugging feature can revolutionize the way developers test and interact with AI APIs.
Three Steps to Test LLM APIs with Apidog SSE Debugging
Apidog’s upgraded SSE debugging feature allows developers to see AI model responses as they are streamed in real time. It also automatically merges fragmented data into clear, readable text, making it easier to understand the AI's thought process—especially when dealing with complex models like DeepSeek R1.
Here’s how you can get started with this powerful feature:
Step 1: Create an HTTP Request
Ensure that your Apidog version is 2.6.49 or newer.
Start by opening Apidog and creating a new HTTP project. Add a new endpoint for any AI models you want to test, and configure the API keys within the request settings.

For example, to interact with DeepSeek’s API, you can copy the following cURL request into the endpoint path field. Note: the stream field must be set to true to enable SSE responses.
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {{API_KEY}}" \
  -d '{
    "model": "deepseek-chat",
    "messages": [
      {"role": "user", "content": "Write a Python code to sum the numbers from 1 to 100."}
    ],
    "stream": true
  }'

Apidog will automatically populate the necessary settings.


Step 2: Send the Request
Upon sending the request, Apidog automatically checks the response’s Content-Type. If it contains text/event-stream, Apidog parses the response as SSE events and streams the output accordingly, allowing you to see the data unfold in real time.
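The payload of each data: event is whatever the provider sends. For DeepSeek’s OpenAI-compatible API, the stream looks roughly like this (abridged): text fragments arrive in choices[0].delta.content, and a final data: [DONE] line closes the stream.

data: {"choices":[{"index":0,"delta":{"role":"assistant","content":"Hel"}}]}

data: {"choices":[{"index":0,"delta":{"content":"lo"}}]}

data: [DONE]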

Step 3: View Real-Time Responses
Apidog’s Timeline view displays the streaming response content as it’s received. The system automatically merges fragmented data into readable text, presenting it in the response panel as the AI processes and generates it.
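To see what this merging amounts to outside Apidog, here is a minimal Python sketch that sends the Step 1 request and concatenates the streamed fragments itself. It assumes the third-party requests library, an API key stored in a DEEPSEEK_API_KEY environment variable, and DeepSeek’s OpenAI-compatible streaming format:

import json
import os

import requests  # third-party HTTP client: pip install requests

API_KEY = os.environ["DEEPSEEK_API_KEY"]  # assumption: key kept in an environment variable

payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "Write a Python code to sum the numbers from 1 to 100."}
    ],
    "stream": True,  # required for SSE responses
}

with requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,  # read the body incrementally instead of waiting for it to finish
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        # SSE events arrive as lines prefixed with "data: "; blank lines separate events
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":  # OpenAI-style end-of-stream sentinel
            break
        chunk = json.loads(data)
        fragment = chunk["choices"][0].get("delta", {}).get("content") or ""
        print(fragment, end="", flush=True)  # fragments concatenate into the full answer

Apidog performs this parsing and merging for you, so the response panel shows the assembled text rather than raw data: lines.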

Customizing SSE Debugging Rules in Apidog
In some cases, Apidog’s built-in Auto-Merge feature may not work as expected, especially when handling custom AI models or non-standard response formats. To address this, Apidog allows you to customize how responses are processed using JSONPath Extraction Rules or Post-Processor Scripts.
Configuring JSONPath Extraction Rules
When an SSE response is in JSON format but doesn’t follow the default recognition rules (like those for OpenAI, Claude, or Gemini), you can set up JSONPath to extract the data you need.
For example, if you receive the following raw SSE response:
data: {"choices":[{"index":0,"message":{"role":"assistant","content":"H"},"logprobs":null,"finish_reason":"stop"}]}
data: {"choices":[{"index":0,"message":{"role":"assistant","content":"i"},"logprobs":null,"finish_reason":"stop"}]}
To extract the message.content field from each event, you would configure JSONPath like this:
$.choices[0].message.content
Applied to each event, this pulls out the fragments "H" and "i", which are merged into the content Hi. With JSONPath, you have complete control over how Apidog handles and extracts data from your responses.
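If you want to sanity-check an expression before adding it to Apidog, the same extraction can be reproduced with any JSONPath library. A small Python sketch using the third-party jsonpath-ng package, fed the two events above:

import json

from jsonpath_ng import parse  # pip install jsonpath-ng

events = [
    '{"choices":[{"index":0,"message":{"role":"assistant","content":"H"},"logprobs":null,"finish_reason":"stop"}]}',
    '{"choices":[{"index":0,"message":{"role":"assistant","content":"i"},"logprobs":null,"finish_reason":"stop"}]}',
]

expr = parse("$.choices[0].message.content")

# Apply the JSONPath to each SSE event and concatenate the extracted fragments
merged = "".join(match.value for event in events for match in expr.find(json.loads(event)))
print(merged)  # -> Hi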
Using Post-Processor Scripts for Non-JSON SSE
For responses that aren’t in JSON format, such as plain text or XML, Apidog gives you the option to write Post-Processor Scripts. These scripts let you process and extract data from SSE streams, giving you the flexibility to handle any data format that doesn’t match traditional JSON structures.
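Apidog’s own script syntax is beyond the scope of this article, but the transformation such a script performs is easy to picture. The Python sketch below (illustrative only, not Apidog’s script API) extracts the text fragment from a hypothetical plain-text SSE stream and merges the pieces:

# Illustrative only: shows the kind of extraction a post-processor script performs,
# assuming each event is a plain-text line of the form "data: <fragment>".
def extract_fragment(raw_event: str) -> str:
    prefix = "data: "
    return raw_event[len(prefix):] if raw_event.startswith(prefix) else raw_event


fragments = ["data: Hello", "data: , ", "data: world"]
print("".join(extract_fragment(f) for f in fragments))  # -> Hello, world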
If you're working with a response format that isn't supported, you can also contact Apidog’s technical support team to request built-in support for the specific format.
With these customization options, Apidog ensures that you can tailor the debugging experience to suit your unique API testing needs.
Key Benefits of Apidog’s SSE Debugging for AI Models
Apidog’s innovative SSE debugging functionality brings several advantages to developers working with AI APIs. Let’s explore some of the key benefits:
- Real-Time Response Viewing: The ability to see responses unfold in real time improves debugging efficiency and cuts the time spent waiting for full API responses.
- Automatic Merging of Responses: Apidog automatically merges streaming fragments into readable text for compatible AI models, such as those following the OpenAI, Gemini, or Claude formats.
- Thought Process Visualization: For reasoning models like DeepSeek R1, Apidog even displays the model’s reasoning process in real time. This provides a more transparent view of how the model arrives at its responses, helping developers fine-tune and improve the interaction.

- Customizable Merging Rules: Apidog provides flexibility by allowing developers to define their own merging rules when the automatic merging feature fails. This ensures a more tailored solution, accommodating various response formats.

Why Apidog’s SSE Debugging is a Game-Changer for AI Development
With the rise of AI-driven applications, developers need tools that can handle complex, real-time data interactions. Apidog’s SSE debugging feature is a game-changer because it:
- Streamlines AI model testing: Real-time visualization of responses and thought processes makes it easier for developers to test and refine AI model interactions.
- Boosts efficiency: Automatic merging of fragmented responses saves time, improving workflow and reducing the risk of errors.
- Enhances transparency: Visualizing the AI’s thought process in real time offers valuable insights into the reasoning behind responses, which is crucial for debugging and optimization.
- Provides flexibility: With custom merging rules and scripts, Apidog ensures that developers can work with any AI model and response format seamlessly.
Embrace Real-Time AI API Debugging with Apidog
Apidog’s SSE debugging feature is not just an enhancement; it’s a powerful tool that empowers developers to debug and interact with AI models more efficiently and transparently. By offering real-time merging of streaming responses and displaying the AI’s reasoning process, Apidog significantly simplifies the testing and development process for AI APIs.