Test Runs
Run the right tests at the right time to catch issues early and keep your releases on track.
Grouping tests effectively
How you group tests into runs affects both the usefulness of the results and how quickly you get feedback. Rather than running your entire test suite every time, consider creating focused runs that target specific areas.
Run by feature or flow
If you just deployed changes to the checkout process, create a run that includes only checkout-related tests. This gives you fast, targeted feedback on exactly what changed. The folder structure in your Tests tab makes it easy to select a group of related tests.
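If your tests are also addressable outside the UI, the same folder-based selection is easy to sketch in code. A minimal sketch, assuming a test inventory where each test's identifier mirrors its folder path (the folder and test names here are hypothetical):

```python
# Hypothetical test inventory: identifiers mirror the folder structure in the Tests tab.
TESTS = [
    "checkout/guest_checkout",
    "checkout/saved_card_payment",
    "checkout/promo_code",
    "search/basic_search",
    "account/password_reset",
]

def select_by_folder(tests, folder):
    """Return only the tests under the given folder, for a focused run."""
    prefix = folder.rstrip("/") + "/"
    return [t for t in tests if t.startswith(prefix)]

# After deploying checkout changes, build a run from just the checkout folder.
checkout_run = select_by_folder(TESTS, "checkout")
```

Selecting by a shared prefix keeps the run definition in sync with how the tests are already organized, so a new checkout test is picked up automatically.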
Create a core smoke test set
Identify the most critical paths in your application (sign in, core features, key transactions) and keep them in a dedicated folder. Use this set as a quick validation run after every deployment. It should be small enough to complete quickly but comprehensive enough to catch major issues.
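The "small enough but comprehensive enough" trade-off can be made explicit with a quick check. A sketch, assuming you track an estimated duration and the critical flow each smoke test covers (all names and numbers are hypothetical):

```python
# Hypothetical smoke set: (test name, critical flow covered, estimated minutes).
SMOKE_SET = [
    ("sign_in_valid_user", "sign_in", 2),
    ("create_order", "key_transaction", 4),
    ("view_dashboard", "core_feature", 1),
]

# The flows a smoke run must always touch.
REQUIRED_FLOWS = {"sign_in", "core_feature", "key_transaction"}

def smoke_set_ok(tests, max_minutes=15):
    """True when the set covers every critical flow and still finishes quickly."""
    covered = {flow for _, flow, _ in tests}
    total_minutes = sum(minutes for _, _, minutes in tests)
    return REQUIRED_FLOWS <= covered and total_minutes <= max_minutes
```

Running a check like this whenever the smoke folder changes catches both failure modes: a set that has quietly grown too slow, and one that no longer covers a critical path.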
Balance coverage and speed
Larger runs give you more coverage but take longer to complete. For time-sensitive situations like verifying a production hotfix, a focused run of 10 to 15 critical tests is more useful than a full suite of 200 tests that takes an hour to finish.
Environment strategy
Matching tests to the environment they run in helps you catch issues at the appropriate stage:
- Development and test environments: run tests frequently as features are being built. These runs help catch issues early when they are cheapest to fix.
- Staging: run comprehensive test suites before releasing to production. This is your last line of defense and should cover as much as practical.
- Production: run smoke tests after deployments to verify the release went smoothly. Keep production runs focused on verification rather than exploratory testing.
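The environment-to-suite mapping above can be captured as a small piece of configuration so everyone on the team selects runs the same way. A sketch with hypothetical suite names:

```python
# Which kind of run belongs in which environment (suite names are hypothetical).
ENVIRONMENT_SUITES = {
    "development": "feature_runs",   # frequent, targeted runs while building
    "staging": "full_regression",    # comprehensive coverage before release
    "production": "smoke",           # post-deploy verification only
}

def suite_for(environment):
    """Look up the suite for an environment, failing loudly on an unknown one."""
    try:
        return ENVIRONMENT_SUITES[environment]
    except KeyError:
        raise ValueError(f"No suite configured for environment: {environment}")
```

Failing loudly on an unknown environment is deliberate: silently falling back to a default suite is exactly how exploratory tests end up running against production.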
Monitoring progress
While a test run is in progress, the detail view shows live status updates for each test. Here is what to watch for:
- Early failures: if several tests fail right at the start, it may indicate an environment issue (the application is down, wrong URL configured) rather than actual bugs. Check the environment configuration before investigating individual failures.
- Stuck executions: if a test stays in "running" status for an unusually long time, it might be waiting on something that is not responding. You can cancel individual test executions without affecting the rest of the run.
- Pass rate: as tests complete, the pass rate gives you a quick sense of overall health. A sudden drop compared to previous runs is worth investigating immediately.
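The pass-rate signal and the "sudden drop" check are simple to compute if you pull counts from the run. A sketch (the 10-point drop threshold is illustrative, not a product default):

```python
def pass_rate(passed, completed):
    """Pass rate as a fraction of completed tests; 0.0 before any complete."""
    return passed / completed if completed else 0.0

def sudden_drop(current_rate, previous_rate, threshold=0.10):
    """Flag a drop versus the previous run that is worth investigating now."""
    return previous_rate - current_rate > threshold
```

Comparing against the previous run rather than an absolute target keeps the check meaningful for suites that are never fully green.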
After the run
Once a test run completes, take a few minutes to review the results before moving on:
- Look at failed tests first. Are they real bugs or environment issues?
- Check any tests marked as "error." These usually indicate a problem with the test itself or the test environment, not a bug in the application.
- If the results are clean, close the run to keep your list organized.
- If you need to share the results, download the PDF report before closing.
For more on interpreting test results, see Best Practices: Test Results.