You've built your API, and it passes all functional tests. But what happens when 100 users try to register at the exact same second? Will it slow down? Will it crash?

Performance Testing is the art of simulating high-traffic scenarios to identify bottlenecks before your users do. Apidog makes this incredibly easy by repurposing your existing test scenarios as performance tests.

Unlike functional testing (does it work?), performance testing asks: how well does it work under load?

Key metrics we care about (the sketch after this list shows how each is derived from raw results):

Concurrency: How many virtual users (VUs) are active at once.
Response Time (Latency): How long a request takes (e.g., Average, P99).
Throughput (TPS/RPS): Transactions/Requests Per Second.
Error Rate: Percentage of failed requests.
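To make these definitions concrete, here is a minimal Python sketch, assuming a hypothetical list of raw request results (latency in seconds plus a success flag) collected over a 10-second window. It only illustrates how the metrics are computed, not how Apidog computes them.

```python
# Hypothetical raw results: (latency_seconds, succeeded) per request,
# collected over an assumed 10-second window. Numbers are illustrative.
import math

results = [(0.120, True), (0.095, True), (0.480, True), (0.210, False),
           (0.105, True), (2.300, True), (0.130, True), (0.099, True)]
window_seconds = 10

latencies = sorted(lat for lat, _ in results)
avg_latency = sum(latencies) / len(latencies)

# P99 (nearest-rank): the value that 99% of requests were faster than.
p99_rank = min(len(latencies), math.ceil(0.99 * len(latencies)))
p99_latency = latencies[p99_rank - 1]

throughput_rps = len(results) / window_seconds          # requests per second
error_rate = sum(1 for _, ok in results if not ok) / len(results)

print(f"avg={avg_latency * 1000:.0f}ms  p99={p99_latency * 1000:.0f}ms  "
      f"rps={throughput_rps:.1f}  errors={error_rate:.1%}")
```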
In Apidog, you don't need to write new scripts. You simply take a "Test Scenario" and run it in Performance Mode.

Steps#
1. Open your Test Scenario (e.g., "User Lifecycle").
2. Run the scenario, but this time switch the tab to Performance Test instead of Functional Test (or click the "Performance" icon).
3. Configure the Load Parameters (a plain-code sketch of what these settings mean follows this list):
Virtual Users (Concurrency): How many simultaneous threads? Start small (e.g., 20).
Duration: How long to run (e.g., 1 minute).
Ramp-up Period: How fast to start users (e.g., 0 to 20 users over 10 seconds).
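Apidog drives the load for you, so you never write this code; the hypothetical Python sketch below exists only to give the three parameters a concrete meaning (the endpoint URL, the thread-based workers, and the timing constants are all assumptions).

```python
# Minimal load-generator sketch: ramp 0 -> 20 virtual users over 10 s,
# then keep them all running until the 60 s duration ends.
# Illustration only; Apidog handles all of this internally.
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/users"   # hypothetical endpoint under test
VIRTUAL_USERS = 20
RAMP_UP_SECONDS = 10
DURATION_SECONDS = 60

stop_at = time.time() + DURATION_SECONDS

def virtual_user():
    # Each virtual user loops, firing requests until the test duration is over.
    while time.time() < stop_at:
        try:
            urllib.request.urlopen(TARGET_URL, timeout=5).read()
        except Exception:
            pass  # a real tool would record this as a failed request

threads = []
for _ in range(VIRTUAL_USERS):
    t = threading.Thread(target=virtual_user, daemon=True)
    t.start()
    threads.append(t)
    time.sleep(RAMP_UP_SECONDS / VIRTUAL_USERS)  # staggered starts = the ramp-up

for t in threads:
    t.join()
```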
Running and Monitoring#
Click Run. Apidog will spawn the virtual users and start hammering your API according to your scenario logic.

You will see a real-time dashboard:

TPS Chart: Is the throughput stable or fluctuating?
Response Time Chart: Is latency increasing over time? (A sign of memory leaks or database locks; a quick way to check for drift yourself is sketched after this list.)
Error Rate: Are we seeing 500 errors?
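If you copy the raw latency samples out of a run, a rough way to confirm an upward trend is to compare the first and second half of the samples. This is a hypothetical helper, not an Apidog feature.

```python
# Compare early vs late latency to spot upward drift during a run
# (a possible sign of leaks or lock contention). Hypothetical helper.
def latency_drift(samples_ms):
    half = len(samples_ms) // 2
    early = sum(samples_ms[:half]) / half
    late = sum(samples_ms[half:]) / (len(samples_ms) - half)
    return late / early  # > 1.0 means latency grew as the test progressed

print(latency_drift([110, 120, 115, 130, 180, 240, 310, 450]))  # ~2.5x: investigate
```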
Analyzing the Report#
Once finished, you get a detailed report.

Key Indicators#
1. Avg Response Time: If this is higher than 500ms, your API might feel "laggy".
2. 99th Percentile (P99): This is crucial. It means "99% of requests were faster than this". If the average is 200ms but the P99 is 5s, you have a stability problem.
3. Transactions Per Second (TPS): This is your system's capacity "speed limit" (a quick sanity check relating it to concurrency and latency follows this list).
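These indicators are linked: with a fixed pool of virtual users, throughput is roughly concurrency divided by average response time (Little's Law). The back-of-the-envelope check below is an illustration under assumed numbers, not part of the Apidog report, and it happens to match the 50-VU, ~500-TPS case in the next section.

```python
# Back-of-the-envelope capacity check (Little's Law):
# with a closed pool of virtual users, TPS is roughly concurrency / avg response time.
virtual_users = 50
avg_response_time_s = 0.100   # assumed 100 ms per request
expected_tps = virtual_users / avg_response_time_s
print(expected_tps)           # 500.0; a report far below this suggests queuing somewhere
```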
Real Case: User Registration + Login#
Imagine we want to test the entire onboarding flow under load.

Concurrency: 50 Virtual Users.
TPS: 500 (Good capacity).
Error Rate: 0.5% (Acceptable? Maybe not).
Bottleneck Found: We notice that POST /login is 3x slower than POST /users.
Fix: We discover a missing database index on the email column used during login. After adding it, latency drops by 90%.
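As a rough, self-contained illustration of why that index helps, the sketch below times the same login-style lookup before and after creating it. It uses an in-memory SQLite database with an assumed table shape and row count; the principle carries over to any relational database.

```python
# Illustrative only: effect of an index on the email column used by login lookups.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, password_hash TEXT)")
conn.executemany(
    "INSERT INTO users (email, password_hash) VALUES (?, ?)",
    ((f"user{i}@example.com", "hash") for i in range(200_000)),
)

def time_logins():
    start = time.perf_counter()
    for i in range(0, 200_000, 1000):   # 200 representative login lookups
        conn.execute("SELECT id FROM users WHERE email = ?",
                     (f"user{i}@example.com",)).fetchone()
    return time.perf_counter() - start

without_index = time_logins()                                  # full table scan per lookup
conn.execute("CREATE INDEX idx_users_email ON users (email)")  # the actual fix
with_index = time_logins()                                     # index seek per lookup
print(f"without index: {without_index:.3f}s  with index: {with_index:.3f}s")
```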
Best Practices#
1. Reuse Scenarios: Don't build separate performance scripts. Use your integration tests.
2. Start Low: Start with 1 user, then 10, then 50. Don't jump to 1000 immediately.
3. Isolate the Environment: NEVER run performance tests against Production unless you know exactly what you are doing. Use a Staging environment.
4. Watch Dependencies: Often the bottleneck isn't your code, but the database or a 3rd-party API (like Stripe or Twilio) you are calling.
Key Takeaways#
Understand performance metrics (RPS, Latency)
Configure load tests with virtual users
Analyze reports to find bottlenecks
What's Next#
We've tested manually, automatically, and under load. The final piece of the puzzle is analyzing the results over time and sharing them with the team.

In the next chapter, we'll look at Test Reports and Analysis.