Manually Triggering Test Scenario Instances

This section introduces parameters and settings for running test scenarios manually.

Functional testing refers to executing the selected steps in the current test scenario in the specified order. By running functional tests on a test scenario, you can determine the actual performance of each API in the business process.

Setting the Test Case Configuration

General Configuration

You can adjust the following settings in the tab on the right side of the test scenario:

  • Runtime Environment

    The runtime environment determines the request URL (prefix URL) for each API in the test steps; see Managing Runtime Environments for details.

  • Test Data

    Test scenarios support importing external test data sets. When the test scenario runs, the system loops through all data sets in the data file and assigns the data in each set to the corresponding variables; see Data-Driven Testing for details.

  • Loop Count

    The loop count refers to the number of complete executions of the entire test scenario.

  • Thread Count

    The thread count refers to the number of threads running the test scenario concurrently, with each thread executing all selected steps in order; the sketch after this list illustrates how test data, loop count, and thread count interact. Note that this feature is in the Beta stage and may have unexpected issues.

  • Share

    After you click the "Share" option on the right side of the advanced settings, the test report from each test scenario run is automatically shared with other members of the project. You can view all test reports shared within the team in the "Share" tab of "Test Reports". For details on test reports, please refer to Viewing Test Reports.
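To make the interaction between test data, loop count, and thread count concrete, here is a minimal Python sketch of the execution model. The function names are hypothetical, and the assumption that each loop iterates over every data set is ours, not a confirmed detail of the tool's implementation.

```python
import threading

def run_scenario(steps, data_sets, loop_count):
    """Illustrative model only: one loop is one complete pass
    over every data set in the imported data file."""
    for _ in range(loop_count):          # Loop Count: full executions of the scenario
        for data_set in data_sets:       # Test Data: iterate every data set
            variables = dict(data_set)   # assign data-set values to scenario variables
            for step in steps:           # run the selected steps in the set order
                step(variables)

def run_concurrently(steps, data_sets, loop_count, thread_count):
    """Thread Count: several threads run the same scenario at once,
    each executing all selected steps in order (Beta behavior)."""
    threads = [
        threading.Thread(target=run_scenario, args=(steps, data_sets, loop_count))
        for _ in range(thread_count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```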

If the current test scenario steps include APIs imported from other projects, you can refer to Managing External Project API Runtime Environments.

Advanced Settings

In the advanced settings, you can also adjust:

  • Error Handling Policy

    When a test step encounters an error, such as an assertion failure, a data format validation error, or a server error, the system handles it according to the preset policy. The following three policies are available (a sketch after this list illustrates the difference):

    • Ignore: Skip the failing step and continue executing subsequent steps.
    • Jump to Next Loop: End the current loop iteration and start the next one.
    • Stop Running: Stop the test and mark it as failed.
  • Interval Pause

    After the previous test step completes, pause for a set period before running the next step.

  • Save Request/Response Details

    When enabled, the actual request and the response headers and bodies of each API call are saved; saving too much data may affect performance. You can choose to save "All Requests" or "Failed Requests Only".

  • Save Variable Changes

    After the test scenario run completes, environment/global variable values that changed during the test are saved back to the project's environment/global variables.

  • Use Global Cookie

    • If using the global cookie, all API requests in the test scenario will include the global cookie.
    • If not using the global cookie, each API request in the test scenario will include its own cookie.
  • Save Cookie to Global

    After the test scenario run is completed, save the changed cookie values to the global cookie in the current project.
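To illustrate how the three error handling policies differ, and where the interval pause fits in, here is a hedged Python sketch. The `StepError` type and the function name are hypothetical; the control flow is a simplified model of the behavior described above, not the tool's actual code.

```python
import time

IGNORE, NEXT_LOOP, STOP = "Ignore", "Jump to Next Loop", "Stop Running"

class StepError(Exception):
    """Stands in for assertion failures, validation errors, or server errors."""

def run_with_policy(steps, loop_count, policy, interval_pause=0.0):
    for _ in range(loop_count):
        for step in steps:
            try:
                step()
            except StepError:
                if policy == STOP:
                    raise        # stop the test and mark it as failed
                if policy == NEXT_LOOP:
                    break        # end this loop iteration, start the next one
                # IGNORE: skip the failing step and continue with the next step
            time.sleep(interval_pause)  # Interval Pause before the next step
```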

After confirming the steps you want to execute and the runtime environment, click the "Run" button to start testing.

Executing a Functional Test

After running the functional test, you will enter the test scenario run page. A pie chart shows an overview of the run results and updates in real time while the test scenario runs; below the pie chart are the test steps being executed, with the execution status of each step.

After the functional test run completes, you can click "More" on an API step to view the API's metrics and statuses during the test, including the API name, request method, request URL, response status code, response time, response content, data validation, and assertion status. For detailed explanations, please refer to Viewing Test Reports.

Executing Test Scenarios with APIs from Other Projects

When running a test scenario that includes APIs imported from other projects, the request URLs for those APIs are obtained from the prefix URLs preset in the "Environment Association".

For example, if the test scenario specifies the "Production Environment", then when a test step runs an API imported from the "Medical Q&A Information System" project, the request is sent to the production URL specified in the environment association: https://api.anthropic.com/v1. Other APIs send requests to the URLs preset in the "Production Environment", as sketched below.
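The lookup can be pictured as a two-level mapping from (environment, source project) to a prefix URL, sketched below in Python. The dictionary keys and the current project's URL are illustrative assumptions; only the external project's production URL comes from the example above.

```python
# Hypothetical model of run-time URL resolution. Imported APIs take their
# prefix URL from the Environment Association; other APIs use the prefix
# preset in the selected environment.
ENVIRONMENT_ASSOCIATION = {
    ("Production Environment", "Current Project"): "https://api.example.com",  # illustrative
    ("Production Environment", "Medical Q&A Information System"): "https://api.anthropic.com/v1",
}

def resolve_request_url(environment: str, source_project: str, api_path: str) -> str:
    prefix = ENVIRONMENT_ASSOCIATION[(environment, source_project)]
    return prefix + api_path

# An API imported from the "Medical Q&A Information System" project:
print(resolve_request_url("Production Environment",
                          "Medical Q&A Information System",
                          "/answers"))
# -> https://api.anthropic.com/v1/answers
```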