Apidog is an API collaboration and development platform that helps backend, frontend, and QA teams work more efficiently. For QA engineers, Apidog offers a range of testing features, including unit testing, integration testing, data-driven testing, performance testing, regression testing, CI/CD, and scheduled monitoring.
Let's walk through the complete workflow for QA engineers using Apidog, starting with the basics.
Initial Preparation
The first step in API testing is to obtain detailed API documentation. Apidog makes this easy by allowing you to import existing API documentation directly, which eliminates the need to enter each endpoint manually. Simply go to Project Settings -> Import Data and choose the appropriate data format to import.
Unit Testing
Unit testing is a crucial part of the testing process. It ensures that each API works as expected under different conditions. QA engineers write detailed test cases to cover various scenarios, such as normal, exceptional, and edge cases. This ensures the API handles all possible inputs.
For example, the endpoint "Query Pet Details" requires a pet ID as a parameter to retrieve the pet's details. QA engineers can enter petId=123 on Apidog’s "Run" page and send the request. Then, they can check if the correct pet details are returned.
To automate the test, assertions can be added in the post-processors. These assertions check whether the correct pet ID appears under the data field of the response. By using the JSONPath expression $.data.id and setting the assertion to Exists, the pet ID can be extracted and validated automatically. You can also generate the assertion directly from a returned response.
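Conceptually, the Exists assertion checks that the JSONPath resolves to a node in the response body. The sketch below illustrates that idea with a sample response and a minimal dot-path evaluator; both are stand-ins for Apidog's built-in JSONPath engine, not its actual implementation:

```python
import json

def jsonpath_exists(document: dict, path: str) -> bool:
    """Minimal evaluator for simple dot paths like '$.data.id'.

    A stand-in for a full JSONPath engine such as the one Apidog uses.
    """
    node = document
    for key in path.lstrip("$").strip(".").split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True

# Sample response body for the "Query Pet Details" request (illustrative).
response_body = json.loads('{"code": 0, "data": {"id": 123, "name": "Rex"}}')

# The assertion a post-processor would run: $.data.id must exist.
assert jsonpath_exists(response_body, "$.data.id")
print("assertion passed: $.data.id exists")
```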
Once these steps are complete, the test case can be saved. Additional normal and abnormal test cases can be created as needed, such as "Pets Available for Sale", "Pets Sold", "Record Not Found", and "Incorrect ID Format". These saved test cases can be quickly and easily run in future regression testing to verify the stability of core functionalities.
Integration Testing
Testing individual APIs is important, but real applications often require multiple APIs to work together. Integration testing ensures that these APIs interact correctly. It simulates real user actions and tests the data exchange and workflow between APIs.
For example, in a pet purchase process, users might browse pet lists, add pets to the cart, place an order, make a payment, and view the order details. QA engineers can create a test scenario in Apidog, adding test cases for each endpoint involved in the process.
Data transfer between endpoints is crucial for ensuring the testing process is complete. Take the pet purchase flow as an example. QA engineers can pass the order ID between steps in two ways:
- Method 1: After running the "Create Order" endpoint, save the generated order ID as a variable and use it in the subsequent payment and order query endpoints.
- Method 2: Directly use the return value from the "Create Order" endpoint in the payment and order query endpoints (this method is easier and recommended).
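Both methods boil down to the same extract-and-reuse pattern: capture the order ID from one response and feed it into the next requests. A rough sketch with stubbed endpoints (the function names and response shapes here are illustrative, not Apidog's API):

```python
# Stubbed endpoints standing in for the real "Create Order", "Pay Order",
# and "Query Order" APIs; names and response shapes are illustrative.
def create_order(pet_id: int) -> dict:
    return {"orderId": f"ORD-{pet_id}", "status": "created"}

def pay_order(order_id: str) -> dict:
    return {"orderId": order_id, "status": "paid"}

def query_order(order_id: str) -> dict:
    return {"orderId": order_id, "status": "paid"}

# Step 1: create the order and save the generated order ID as a variable.
order_id = create_order(123)["orderId"]

# Steps 2-3: reuse the saved order ID in the subsequent endpoints.
assert pay_order(order_id)["status"] == "paid"
assert query_order(order_id)["orderId"] == order_id
print("order flow chained on", order_id)
```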
For batch operations, like adding multiple pets to the cart, a ForEach loop can be added. Set the loop array to the pet list, and the pet ID will be inserted automatically, making bulk operations easier.
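A ForEach loop amounts to running the same endpoint once per element of the loop array, with the current element injected into each request. A minimal sketch (add_to_cart stands in for the real "Add Pet to Cart" endpoint):

```python
# Stand-in for the "Add Pet to Cart" endpoint (illustrative, no network).
def add_to_cart(cart: list, pet_id: int) -> None:
    cart.append(pet_id)

pet_list = [101, 102, 103]  # the loop array: pets to add in bulk
cart: list = []

# ForEach: the current pet ID is injected into each request automatically.
for pet_id in pet_list:
    add_to_cart(cart, pet_id)

assert cart == pet_list
print(f"added {len(cart)} pets to the cart")
```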
After setting everything up, run the test scenario. This will generate a detailed test report. QA engineers can then quickly find and fix any issues.
Data-Driven Testing
In some cases, the same endpoint needs to be tested with multiple sets of data. Apidog’s data-driven testing feature helps with this. QA engineers can import a CSV file containing different data sets. These sets can then be used in the test scenario to run tests automatically.
Here is how it works: QA engineers create a new test scenario, add the "Create Pet Information" endpoint, and import the CSV file under Test Data. Each row in the CSV file (with the first row providing the variable names) represents a set of data that can be used in the API request.
Reference these variables in the endpoint request's JSON body to map the data from the CSV file.
Select the appropriate test data and environment, and then run the test. Apidog will automatically execute the endpoint cases for each data set, generating execution status and reports for every round.
By automating bulk testing in this way, Apidog significantly enhances both the efficiency and accuracy of the testing process.
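The mechanics of data-driven testing are straightforward: the header row names the variables, and each subsequent row is substituted into the request body for one test round. The sketch below mimics that process with Python's csv module; the sample data and the {{variable}} placeholder syntax are illustrative stand-ins for Apidog's actual templating:

```python
import csv
import io
import json

# Illustrative test data: the first row holds the variable names, and each
# later row is one round of test data.
csv_text = """petName,petStatus
Rex,available
Milo,sold
"""

# JSON body template referencing the CSV variables via placeholders.
body_template = '{"name": "{{petName}}", "status": "{{petStatus}}"}'

results = []
for row in csv.DictReader(io.StringIO(csv_text)):
    body = body_template
    for name, value in row.items():
        body = body.replace("{{" + name + "}}", value)
    results.append(json.loads(body))  # one request body per data set

assert results[0] == {"name": "Rex", "status": "available"}
print(f"ran {len(results)} data-driven rounds")
```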
Performance Testing
Once the basic functionality of an endpoint is validated, the next step is performance testing. This checks how the system performs under heavy traffic. Apidog provides performance testing tools that simulate multiple virtual users to test the system’s response.
For example, in an ordering scenario, QA engineers can set the number of virtual users (e.g., 10), test duration, and ramp-up time (e.g., 1 minute). During the test, Apidog generates real-time charts showing key metrics like requests per second, server response time, and error rates. This helps QA engineers identify performance bottlenecks and optimize accordingly.
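At its core, a load test spins up N concurrent virtual users and measures throughput over the run. The toy sketch below shows that shape using threads against a stubbed endpoint; Apidog's load engine does the same thing for real HTTP traffic and at far larger scale:

```python
import threading
import time

def order_endpoint() -> None:
    """Stand-in for the real ordering endpoint (no network involved)."""
    time.sleep(0.001)  # simulated server latency

def virtual_user(n_requests: int, counter: list, lock: threading.Lock) -> None:
    # Each virtual user fires a fixed number of requests.
    for _ in range(n_requests):
        order_endpoint()
        with lock:
            counter[0] += 1

counter, lock = [0], threading.Lock()
users = [threading.Thread(target=virtual_user, args=(20, counter, lock))
         for _ in range(10)]  # 10 virtual users, 20 requests each

start = time.perf_counter()
for u in users:
    u.start()
for u in users:
    u.join()
elapsed = time.perf_counter() - start

print(f"{counter[0]} requests in {elapsed:.2f}s "
      f"({counter[0] / elapsed:.0f} req/s)")
```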
Regression Testing
As systems evolve, new features may affect existing functionality. Regression testing ensures that new updates don’t break core features. In Apidog, QA engineers can create a regression test folder and add key test scenarios. Before each release, they can run these tests in bulk to ensure that everything works as expected.
CI/CD Integration
In modern development workflows, Continuous Integration (CI) and Continuous Deployment (CD) are crucial for maintaining fast release cycles and high-quality code. Apidog integrates seamlessly with CI/CD tools like Jenkins, allowing automated tests to run as part of the build pipeline.
QA engineers can go to the CI/CD page in a specific test scenario, select the appropriate environment and test data, and enable notifications for test results (various notification methods are supported, such as Email, Slack, Webhook, and Jenkins). Next, select the corresponding CI/CD tool, copy the generated command, and configure it in Jenkins or another build tool.
It is important to generate and configure the Access Token during the integration process to ensure smooth authentication and communication with Jenkins. Once configured, Apidog will automatically run the tests each time a build is triggered and send the test results to the team via the selected notification method, improving collaboration efficiency.
Scheduled Tasks for API Testing
Sometimes, it’s necessary to run tests on a regular basis to ensure the system remains stable. Apidog’s scheduled tasks feature allows QA engineers to automate this process. They can schedule tests to run at specific times and get notifications about the results.
Before using this feature, QA engineers need to install Apidog's Runner on the server. This ensures that the scheduled tasks can run independently on the server without depending on the local computer being on. After installation, QA engineers can create a new scheduled task, select the test scenario, set run mode and server, and enable notifications.
After the task runs, Apidog records the results and sends them to the team, helping to quickly spot and fix any issues.
Final Takeaways
Apidog provides a comprehensive suite of features that significantly streamline the API testing lifecycle, from initial preparation to continuous monitoring. With robust capabilities in unit, integration, data-driven, performance, regression, and CI/CD testing, as well as scheduled tasks, Apidog ensures efficient, reliable, and scalable API testing. By integrating Apidog into your testing workflows, teams can save time, reduce errors, and deliver high-quality APIs faster, leading to improved system stability and performance.