
Executing Test Scenario Manually

This section introduces parameters and settings for running test scenarios manually.

Functional testing refers to executing the selected steps of the current test scenario in the specified order. By running a functional test on the test scenario, you can assess the actual performance of each API within the business flow.

Set Test Case Configuration

General Configuration

You can adjust the following settings in the tab on the right side of the test scenario:

  • Runtime Environment

    Determines the request URL (prefix URL) for each API in the test steps; see Managing Runtime Environments for details.

  • Test Data

    The test scenario supports importing external test data sets. When the test scenario runs, the system loops through all data sets in the data file and assigns the data in each set to the corresponding variables; see Data-Driven Testing for details.

  • Loop Count

    The loop count refers to the number of complete executions of the entire test scenario.

  • Thread Count

    The thread count refers to the number of threads running the test scenario concurrently, with each thread executing all selected steps in order. Note that this feature is in the Beta stage and may have unexpected issues. (The sketch after this list shows how thread count, loop count, and test data interact.)

  • Share

    Ticking "Share" next to the "Action" area left side will share any test reports automatically with other members within the project when each test scenarios finish running.

    Report-sharing records can be found in "Test Report" > "Share". For details on test reports, please refer to Test Reports.
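
A rough sketch of how these general settings combine is shown below. All names (RunConfig, Step, runScenario) are hypothetical, not the tool's actual API, and the nesting of the data-set loop inside the scenario loop is an assumption based on the descriptions above.

```typescript
// Hypothetical model of the general run configuration; names are illustrative.
interface RunConfig {
  loopCount: number;                  // complete executions of the whole scenario
  threadCount: number;                // concurrent threads (Beta feature)
  dataSets: Record<string, string>[]; // imported test data sets, may be empty
}

type Step = (vars: Record<string, string>) => Promise<void>;

async function runScenario(steps: Step[], cfg: RunConfig): Promise<void> {
  // Each thread executes all selected steps in order.
  const thread = async (): Promise<void> => {
    for (let i = 0; i < cfg.loopCount; i++) {
      // Assumed nesting: the data-set loop runs inside each scenario loop,
      // assigning each data set's values to the step variables.
      const sets = cfg.dataSets.length > 0 ? cfg.dataSets : [{}];
      for (const vars of sets) {
        for (const step of steps) {
          await step(vars);
        }
      }
    }
  };
  // Thread count: run the scenario on several threads concurrently.
  await Promise.all(Array.from({ length: cfg.threadCount }, () => thread()));
}
```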

If the current test scenario involves endpoints imported from other projects, you can refer to Manage the Runtime Environment of APIs for Other Projects.

Advanced Settings

In the advanced settings, you can also adjust:

  • Error Handling Policy

    When a test step encounters an error, such as an assertion error, data format validation error, or server error, the system handles it according to the preset policy. The following three policies are available (sketched after this list):

    • Ignore: Skip the current exceptional step and continue executing subsequent steps.
    • Continue: End the current loop test and jump to the next loop test.
    • End execution: Stop the test and mark it as failed.
  • Interval Pause

    After the previous test step completes, pause for a set period of time before running the next step.

  • Save Request/Response Details

    When enabled, the actual request and the response headers and bodies of each API are saved; note that saving large amounts of data may affect performance. You can choose to save "All Requests" or "Only Failed Requests".

  • Save Variable Changes

    After the test scenario run is completed, save the changed environment/global variable values.

  • Use Global Cookie

    • If using the global cookie, all API requests in the test scenario will include the global cookie.
    • If not using the global cookie, each API request in the test scenario will include its own cookie.
  • Save Cookie to Global

    After the test scenario run is completed, save the changed cookie values to the global cookie in the current project.
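
The sketch below models the three error-handling policies in code. The type and function names are hypothetical, chosen only to make the control flow concrete; they are not the tool's actual API.

```typescript
// Hypothetical sketch of the three error-handling policies described above.
type ErrorPolicy = "ignore" | "continue" | "end";

async function runOneLoop(
  steps: Array<() => Promise<void>>,
  policy: ErrorPolicy,
): Promise<"passed" | "failed"> {
  for (const step of steps) {
    try {
      // A step may fail on an assertion error, a data format
      // validation error, or a server error.
      await step();
    } catch (err) {
      switch (policy) {
        case "ignore":
          continue;        // skip the failing step, run the remaining steps
        case "continue":
          return "failed"; // end this loop; the caller starts the next loop
        case "end":
          throw err;       // stop the whole test and mark it as failed
      }
    }
  }
  return "passed";
}
```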

After confirming the steps you want to execute and the runtime environment, click the "Run" button to start testing.

Modify the Execution Configuration in Design Mode

If you are in design mode, the relevant runtime configurations are collapsed to the right side of the "Run" button. Hover the mouse over this settings button to see the detailed runtime configurations for this test scenario.

Executing Functional Test

After starting the functional test, you will enter the test scenario run page. A pie chart at the top shows an overview of the run results and updates in real time while the test scenario runs; below the pie chart are the test steps being executed, showing the execution status of each step.

After the functional test run is completed, you can click "More" in the API step to view various metrics and statuses of the API during the test, including the API name, request method, request URL, response status code, response time, response content, data validation, and assertion status. For detailed explanations, please refer to Viewing Test Reports.
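
As a rough illustration, a single API step result could be modeled as the structure below. The field names are hypothetical and simply mirror the metrics listed above; the actual report format may differ.

```typescript
// Hypothetical shape of a single API step result shown under "More".
interface ApiStepResult {
  apiName: string;
  requestMethod: string;      // e.g. "GET", "POST"
  requestUrl: string;
  responseStatusCode: number; // e.g. 200
  responseTimeMs: number;     // response time in milliseconds
  responseContent: string;    // saved only if request/response saving is enabled
  dataValidation: "passed" | "failed" | "skipped";
  assertionStatus: "passed" | "failed";
}
```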

Executing Test Scenarios with APIs from Other Projects

When running a test scenario that includes APIs imported from other projects, the request URLs in the runtime environments for those APIs will be obtained from the preset prefix URLs in the "Environment Association".

For example, if a test scenario specifies the use of the "Develop Environment", when a test step runs an endpoint imported from the "Claude API" project, it sends a request to the development environment URL specified in the environment association: https://api.anthropic.com/v1.

Other endpoints will send requests to the URLs preset in the "Production Environment".
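
The following sketch models how a prefix URL might be resolved per project and environment under an environment association. The mapping shape and the current project's URLs are assumptions; only the Claude API URL comes from the example above.

```typescript
// Illustrative model of environment association: each project's selected
// environment maps to a preset prefix URL. Names and URLs are assumed,
// except the Claude API URL taken from the example above.
const environmentAssociation: Record<string, Record<string, string>> = {
  "Current Project": {
    "Develop Environment": "https://dev.example.com",    // assumed URL
    "Production Environment": "https://api.example.com", // assumed URL
  },
  "Claude API": {
    "Develop Environment": "https://api.anthropic.com/v1",
  },
};

// A test step uses the prefix URL preset for its API's home project.
function resolvePrefixUrl(project: string, environment: string): string | undefined {
  return environmentAssociation[project]?.[environment];
}

// resolvePrefixUrl("Claude API", "Develop Environment")
//   => "https://api.anthropic.com/v1"
```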