Test Runs and Reporting

Execute groups of tests together, track progress in real time, and share results with your team.

What is a test run?

A test run is a batch execution of one or more tests. Instead of running tests individually, you group them into a run and execute them together. This gives you a single view of how a set of tests performed, along with aggregate statistics and a downloadable report.

Test runs are managed from the Test Runs tab in your project. Each run shows its overall status, progress, and the results of every test included in it.

Creating a test run

There are two ways to create a test run:

  • From the Tests tab: select the tests you want to include using the checkboxes in the tree view, then click the "Run" button in the bulk actions bar. This creates a new test run with your selected tests.
  • From a task: when you create a task of type "Execute tests," the system creates a test run automatically and includes the relevant tests.

When creating a run, you select the environment it should execute against. You can also give the run a custom name and description to help identify it later.

The test run view

Once a test run is created, its detail view shows everything you need to monitor and understand the results:

  • Summary card: shows the run name, environment, status, and who triggered it. A progress bar indicates how far along the execution is.
  • Stats grid: displays the total number of tests, how many are running, passed, failed, and the overall pass rate.
  • Test executions table: lists every test in the run with its individual status, assignee, start and completion times, and links to the execution task.
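The values in the stats grid follow directly from the statuses of the individual executions. As a minimal sketch of that aggregation (the status names and function below are hypothetical illustrations, not qtrl's actual API):

```python
from collections import Counter

def run_stats(statuses):
    """Aggregate individual execution statuses into stats-grid values.

    `statuses` is a list of strings such as "passed", "failed", "running".
    Hypothetical sketch -- not qtrl's actual API.
    """
    counts = Counter(statuses)
    total = len(statuses)
    passed = counts["passed"]
    # Pass rate is conventionally passed / total; treat an empty run as 0%.
    pass_rate = (passed / total * 100) if total else 0.0
    return {
        "total": total,
        "running": counts["running"],
        "passed": passed,
        "failed": counts["failed"],
        "pass_rate": pass_rate,
    }

stats = run_stats(["passed", "passed", "failed", "running"])
# e.g. a run of 4 tests with 2 passed yields a 50.0% pass rate
```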

Execution statuses

Each test execution within a run goes through its own status lifecycle:

  • Pending: the test is queued and waiting to be executed.
  • Running: the AI agent is actively executing this test in the browser.
  • Passed: the test completed successfully and all checks met their expected outcomes.
  • Failed: the test completed but one or more checks did not match the expected result.
  • Error: the execution encountered a technical problem that prevented it from completing.
  • Skipped: the test was intentionally skipped during this run.
  • Cancelled: the execution was stopped before it could finish.
  • Rerun: the test has been re-executed after a previous attempt.
  • Blocked: the test could not run because a dependency or precondition was not met.
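A useful way to think about these statuses is which ones are terminal (the execution will not progress further on its own) versus still in flight. The grouping below is an illustrative assumption based on the descriptions above, not a documented qtrl contract (in particular, treating Rerun as non-terminal is a guess, since a rerun produces a fresh attempt):

```python
from enum import Enum

class ExecutionStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    PASSED = "passed"
    FAILED = "failed"
    ERROR = "error"
    SKIPPED = "skipped"
    CANCELLED = "cancelled"
    RERUN = "rerun"
    BLOCKED = "blocked"

# Assumed terminal statuses: the execution has stopped and will not
# advance without a new attempt. Rerun is treated as non-terminal here
# because it implies a fresh attempt is (or was) underway.
TERMINAL = {
    ExecutionStatus.PASSED,
    ExecutionStatus.FAILED,
    ExecutionStatus.ERROR,
    ExecutionStatus.SKIPPED,
    ExecutionStatus.CANCELLED,
    ExecutionStatus.BLOCKED,
}

def is_finished(status: ExecutionStatus) -> bool:
    return status in TERMINAL
```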

Test run lifecycle

A test run moves through the following statuses:

  • Pending: the run has been created and tests are queued for execution.
  • Running: one or more tests in the run are actively being executed.
  • Completed: all tests have finished (regardless of whether they passed or failed). The completion timestamp is recorded automatically.
  • Cancelled: the run was stopped before all tests finished. The cancellation timestamp is recorded.
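The run-level status can be read as a function of the execution statuses plus an explicit cancellation flag. The logic below is a sketch consistent with the descriptions above (the fallback for runs with queued-but-not-running tests is an assumption); it is not qtrl's actual implementation:

```python
ACTIVE = {"pending", "running"}

def run_status(execution_statuses, cancelled=False):
    """Derive an overall run status from execution statuses (hypothetical).

    Cancellation is modeled as an explicit flag since a run can be
    stopped before all tests finish.
    """
    if cancelled:
        return "cancelled"
    if "running" in execution_statuses:
        return "running"
    # All tests finished, whatever their individual outcomes.
    if execution_statuses and all(s not in ACTIVE for s in execution_statuses):
        return "completed"
    # Assumption: a run with only queued (and possibly finished) tests,
    # none currently executing, is still considered pending.
    return "pending"
```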

A completed or cancelled run can also be archived. Archived runs are hidden from the default list and cannot be modified until they are unarchived. Archiving protects historical results from accidental changes while keeping the data available for reference and reporting.

Manual execution

Not every test needs to be run by the AI. The test executions table includes an assignee column where you can assign individual tests to team members for manual execution.

When a team member opens a test assigned to them, they see a manual execution interface. For structured tests, this shows each step with its expected result, and the tester marks each step as passed, failed, or skipped. For freeform tests, the instructions are displayed and the tester records the overall result.

Testers can also add notes during manual execution to capture observations or context about the results. The final result is submitted and recorded alongside AI-executed results in the same test run.

Execution history

When a test is re-executed within a test run (for example, after a failure is fixed), the previous attempt is preserved in the execution history. Each re-execution increments the attempt number and snapshots the previous result, including its status, progress, timestamps, and any notes.
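The snapshot-and-increment behavior described above can be sketched as follows. The class and field names are hypothetical illustrations of the mechanism, not qtrl's data model:

```python
from dataclasses import dataclass, field

@dataclass
class Execution:
    """Hypothetical model of one test execution within a run."""
    status: str
    attempt: int = 1
    notes: str = ""
    history: list = field(default_factory=list)

    def rerun(self):
        """Snapshot the current result, then start a fresh attempt."""
        self.history.append({
            "attempt": self.attempt,
            "status": self.status,
            "notes": self.notes,
        })
        self.attempt += 1
        self.status = "pending"
        self.notes = ""

ex = Execution(status="failed", notes="checkout button missing")
ex.rerun()
# The failed first attempt is preserved in ex.history;
# the execution itself is back to attempt 2, status "pending".
```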

You can view previous attempts by expanding a test execution in the test run detail view. This gives you a complete timeline of how a test performed across multiple attempts, which is useful for understanding intermittent failures or tracking improvement after fixes.

Each execution is linked to the task that performed it. You can navigate from an execution to its task to see the full step-by-step logs, screenshots, and video recordings captured during the AI agent's browser session.

Reports

Each test run has a "Download Report" button that generates a comprehensive PDF report. The report is generated asynchronously in the background, so you can continue working while it is being prepared. Once ready, you receive a download link that remains valid for one hour.

The PDF report includes:

  • A title page with the run name, environment, status, dates, and overall statistics.
  • A summary section with total tests, pass/fail counts, and the overall pass rate.
  • An execution summary table listing each test with its status, assignee, task reference, and timing.
  • Detailed sections for each test execution showing the test steps, expected and actual results, execution timeline with step-by-step logs, and any screenshots captured during execution.

Reports are useful for sharing results with stakeholders who do not use qtrl directly, for compliance documentation, or for archiving test results as evidence that testing was performed.

Archiving runs

Once you are done with a test run, you can archive it. Archived runs are hidden from the default list view but can be shown using the filter toggle. This keeps your active runs list clean while preserving historical data.

Archived runs are read-only: you cannot modify test executions, change the status, or delete the run until it is unarchived. This protects completed results from accidental changes.

The test run audit trail tracks all changes, including when runs were archived or unarchived, status transitions, and which user performed each action. This provides full accountability for the lifecycle of every test run.

For tips on getting the most out of test runs, see Best Practices: Test Runs.