Test Management
A well-organized test suite is easier to maintain and faster to work with, and it gives you better visibility into coverage.
Organizing tests into folders
The folder structure in the Tests tab is your primary tool for keeping tests organized. A thoughtful structure pays off as your test suite grows, making it easy to find tests, select groups for execution, and understand what areas of your application are covered.
Group by feature or area
The most common and effective approach is to organize tests by the feature or area of the application they cover. For example, you might have top-level folders for "Authentication," "Checkout," "User Profile," and "Admin Panel." Within each, you can nest subfolders for specific flows if needed.
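For example (all names here are illustrative, not a prescribed layout), a feature-based hierarchy might look like:

```text
Tests
├── Authentication
│   ├── Login
│   └── Password Reset Flow
├── Checkout
│   ├── Cart
│   └── Payment
├── User Profile
└── Admin Panel
```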
Keep the structure shallow
Deeply nested folder structures become hard to navigate. Try to keep your hierarchy to two or three levels at most. If you find yourself needing deeper nesting, it may be a sign that a folder should be split into separate top-level folders instead.
Use clear, descriptive names
Folder names should immediately tell you what tests are inside. Avoid generic names like "Tests Group 1" or "Misc." Instead, use names that describe the feature or flow, such as "Password Reset Flow" or "Product Search and Filtering."
The review workflow
Reviewing tests is one of the most important parts of maintaining a healthy test suite. When the AI generates or updates tests, they need human review before becoming part of your active suite.
Review promptly
Tests in "pending review" status are waiting for your attention. Letting them accumulate makes the review process feel overwhelming and delays the value you get from generated tests. Try to review tests shortly after they are generated.
What to look for during review
When reviewing a test, consider these questions:
- Does the test name clearly describe what it verifies?
- Are the steps logical and in the right order?
- Are the expected results specific enough to catch real issues?
- Does the test cover the scenario it claims to, or is it testing something slightly different?
- Are preconditions accurately stated?
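Some of these checks are mechanical enough to automate before a human ever looks at the test. As a sketch only, assuming a simple dictionary representation of a test (the field names `name`, `steps`, `expected`, and `preconditions` are assumptions, not the tool's actual schema), a pre-review lint might look like this:

```python
# Hypothetical pre-review lint that flags the machine-checkable items
# from the review checklist. Field names are illustrative.

GENERIC_NAMES = {"test", "misc", "new test", "untitled"}

def review_warnings(test: dict) -> list[str]:
    """Return warnings for obviously incomplete or generic test definitions."""
    warnings = []
    name = test.get("name", "").strip()
    if not name or name.lower() in GENERIC_NAMES:
        warnings.append("name is missing or too generic")
    if not test.get("steps"):
        warnings.append("no steps defined")
    for i, step in enumerate(test.get("steps") or [], start=1):
        if not step.get("expected"):
            warnings.append(f"step {i} has no expected result")
    if "preconditions" not in test:
        warnings.append("preconditions not stated")
    return warnings

example = {
    "name": "Password reset with expired link",
    "preconditions": "User requested a reset link more than 24 hours ago",
    "steps": [
        {"action": "Open the expired reset link",
         "expected": "A 'link has expired' message is shown"},
        {"action": "Request a new link", "expected": ""},
    ],
}
print(review_warnings(example))  # → ['step 2 has no expected result']
```

A lint like this only catches mechanical gaps; judging whether a test actually covers the scenario it claims to still requires a human reviewer.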
When to approve vs. reject
Approve a test when it accurately captures a meaningful scenario with clear steps and expectations, even if it is not perfect. Minor wording improvements can be made by editing the test directly.
Reject a test when the approach is fundamentally wrong, when it duplicates an existing test, or when the scenario it covers is not relevant. Rejection signals that the test should be rethought rather than just tweaked.
Maintaining test quality over time
A test suite is not something you build once and forget about. Applications change, and tests need to change with them.
Prune outdated tests
When a feature is removed or significantly changed, review the tests that cover it. Delete tests that are no longer relevant rather than leaving them in the suite to fail repeatedly. Failing tests that nobody acts on erode trust in the test results.
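The first pass of a pruning review can be sketched in a few lines, assuming each test records which feature it covers (a hypothetical `feature` field, not the tool's real schema):

```python
# Hypothetical pruning helper: given the set of features that still
# exist, list tests that are candidates for deletion. Field names
# are illustrative.

def prune_candidates(tests: list[dict], active_features: set[str]) -> list[str]:
    """Names of tests whose covered feature no longer exists."""
    return [t["name"] for t in tests if t["feature"] not in active_features]

tests = [
    {"name": "Guest checkout", "feature": "checkout"},
    {"name": "Legacy wishlist sharing", "feature": "wishlist"},
]
print(prune_candidates(tests, active_features={"checkout", "search"}))
# → ['Legacy wishlist sharing']
```

Candidates still deserve a quick human look before deletion: a test may cover a renamed feature rather than a removed one.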
Keep descriptions current
If a test's steps no longer match how the application works, update them. The versioning system helps here: after execution, the AI may suggest updated steps based on what it observed. Review these suggestions and approve them if they accurately reflect the current behavior.
Use priority levels meaningfully
Assign priority levels based on the business impact of what the test covers, not how complex the test is. Critical priority should be reserved for tests that verify core business functionality, such as the ability to sign up, make a purchase, or access essential features.
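As an illustration of impact-based assignment, the mapping below keys priority to what the feature is worth to the business, not to test complexity. The tiers and feature names are placeholders for a product decision, not a built-in classification:

```python
# Illustrative mapping from business impact to test priority.
# Which features count as critical is a product decision; these
# sets are hypothetical examples.

IMPACT_TIERS = {
    "critical": {"signup", "login", "checkout"},   # core access and revenue flows
    "high": {"product search", "user profile"},
}

def priority_for(feature: str) -> str:
    """Priority follows the feature's business impact, not test complexity."""
    for tier, features in IMPACT_TIERS.items():
        if feature in features:
            return tier
    return "medium"  # default for everything else

print(priority_for("checkout"))       # → critical
print(priority_for("admin theming"))  # → medium
```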