Executing Performance Tests Manually

Performance testing sends large-scale service requests to an API to detect performance bottlenecks and stability issues, expose potential risks under pressure, and verify that the API runs stably and responds to requests under high load.

info

Currently, the performance testing feature is in the Beta stage, and there may be unexpected issues. If you encounter any problems, please join the Technical Support Discord and provide feedback.

Set Configuration Items

Before running a performance test, you need to specify the runtime environment and, optionally, the test data for the test scenario, and configure the performance test settings.

  • Runtime Environment

    The runtime environment in the test scenario inherits from the current project's environment.

  • Test Data

    After associating test data, virtual users will use the variables defined in the test data to execute requests. You can choose to run in "Random Match" or "Sequential Match" mode.

    • Random Match: Each virtual user randomly selects one row of test data and executes the performance test with it. In this mode, every virtual user receives a data row, so all of them run the test.
    • Sequential Match: Each virtual user selects a row of data from the test data in order. Note: If Number of Virtual Users > Number of Test Data Rows, virtual users exceeding the number of test data rows will not start the performance test.
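The two matching modes above can be sketched as follows. This is a hypothetical illustration of the assignment behavior described, not the product's actual implementation; the function name `assign_rows` and its return convention (a `None` entry means that virtual user does not start) are assumptions for the sketch.

```python
import random

def assign_rows(num_users, rows, mode="sequential"):
    """Assign one test-data row to each virtual user.

    Returns a list of length num_users. In sequential mode, users beyond
    the number of rows get None, meaning they do not start the test.
    """
    if mode == "random":
        # Every user picks some row independently; rows may repeat.
        return [random.choice(rows) for _ in range(num_users)]
    # Sequential: user i gets row i; excess users get no data.
    return [rows[i] if i < len(rows) else None for i in range(num_users)]
```

For example, with 3 virtual users but only 2 data rows, sequential matching leaves the third user without data, so it never starts the run.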

The performance testing module provides the following three configurable test items:

  • Virtual Users (Concurrent Users)

    Currently supports up to 100 virtual users. Within the specified test duration, the tool simulates concurrent online users repeatedly running the test scenario in parallel.

  • Test Duration

    The total run time of the performance test. During this period, each virtual user will continuously loop through all APIs in the test scenario.

  • Ramp-up Duration

    Typically, a large number of users do not access the service instantly, but rather the requests increase over time. To simulate this process, the ramp-up time means that within the first X minutes (X is a preset value), the number of parallel users is linearly added until reaching the preset number of virtual users (concurrent users). If X is set to 0, it means all virtual users are added instantly at the start of the test.
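The linear ramp-up described above can be modeled with a small sketch. This is an assumed model of the behavior, not the product's internal logic; the function name `active_users` and the use of `ceil` for rounding are illustrative choices.

```python
import math

def active_users(elapsed_s, ramp_up_s, target_users):
    """Virtual users active `elapsed_s` seconds into the test,
    assuming users are added linearly over the ramp-up period."""
    if ramp_up_s <= 0:
        # Ramp-up of 0: all virtual users start instantly.
        return target_users
    return min(target_users, math.ceil(target_users * elapsed_s / ramp_up_s))
```

For instance, with 100 target users and a 60-second ramp-up, roughly half the users are active at the 30-second mark, and the full count is reached at 60 seconds.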

Run Performance Test

After triggering the performance test, an intuitive visualization panel displays key metrics for each API, such as Total Requests, Avg Throughput, Avg Response Time, Maximum/Minimum Response Time, and Errors.
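The panel metrics listed above follow from simple aggregations over the collected requests. The sketch below shows one plausible way to derive them from per-request latencies; the function name `summarize` and the input shape are assumptions, since the product computes these internally.

```python
def summarize(latencies_ms, duration_s, error_count):
    """Derive the headline panel metrics from a list of per-request
    latencies (ms), the test duration (s), and the failed-request count."""
    total = len(latencies_ms)
    return {
        "total_requests": total,
        "avg_throughput_rps": total / duration_s,   # requests per second
        "avg_response_ms": sum(latencies_ms) / total,
        "max_response_ms": max(latencies_ms),
        "min_response_ms": min(latencies_ms),
        "errors": error_count,
    }
```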

Only one performance test can run per project at a time. If a higher-priority test needs to run, click the "Terminate" button in the top right corner to stop the current test.

View Test Process

During the performance test, you can hover the mouse over the test chart to view the test details for each time period in real time.

Click "Error" to check failed requests for the API and analyze possible error causes. You can also filter API requests in the filter bar.

Because a performance test sends a large number of API requests, only failed requests are categorized and displayed as statistics; detailed error information and request details for each API are not recorded. If you encounter unexpected errors, first run a "Functional Test" and resolve all issues before running the "Performance Test".

View Test Report

Click the "Test Reports" tab to view historical test reports for the current test scenario. Read here to learn more about how to analyze performance test reports.