Tasks
Tasks are how you communicate with qtrl's AI agents. You describe what you want, and the agent executes it.
What is a task
A task is a high-level instruction you give to the AI agent. Rather than writing code or scripting browser interactions, you describe your intent in natural language. The agent interprets your instruction, plans the steps, and carries them out in a real browser.
Tasks are created from the Tasks tab within your project. You provide a description, choose a task type, select an environment, and submit. The agent picks up the task and begins working on it immediately.
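The creation flow above can be sketched as a simple data model. This is a minimal illustrative sketch, not qtrl's actual API: the field names (`description`, `task_type`, `environment`) and the `TaskType` values are assumptions based on the task types described in this document.

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    # Illustrative identifiers for the task types covered below;
    # the real product may name them differently.
    GENERATE_TEST_CASES = "generate_test_cases"
    EXECUTE_INSTRUCTIONS = "execute_instructions"
    EXECUTE_TESTS = "execute_tests"
    EXPLORE = "explore"
    REVIEW_TEST = "review_test"

@dataclass
class TaskRequest:
    description: str      # natural-language instruction for the agent
    task_type: TaskType   # what kind of work is needed
    environment: str      # which environment the agent should target

# Example: the three inputs you provide when submitting a task
request = TaskRequest(
    description="Generate test cases for the checkout flow",
    task_type=TaskType.GENERATE_TEST_CASES,
    environment="staging",
)
```

Once submitted, a request like this is what the agent picks up and begins working on.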
Task types
When creating a task, you choose a type that tells the agent what kind of work you need done. Each type serves a different purpose:
Generate test cases
This is the most common starting point. You describe a feature, a user flow, or an area of your application, and the agent explores it in the browser and creates structured test cases based on what it finds. The generated tests appear in your Tests tab as drafts, ready for your review.
For example, you might write: "Generate test cases for the checkout flow, including adding items to cart, applying discount codes, and completing payment."
Execute instructions
Use this when you want the agent to perform a one-time action without creating a persistent test. This is useful for quick checks, exploratory testing, or verifying something specific after a deployment.
If the execution turns out to be something you want to repeat, you can save the result as a test. Alternatively, when creating the task you can mark it as a one-time execution, in which case the result is not stored.
Execute tests
This type runs a batch of existing approved tests. It is typically triggered through test runs rather than created manually. When you start a test run, the system automatically creates execution tasks for each test in the run. Each execution task is linked to its corresponding test execution, so you can navigate between them to see the full picture: the test run shows aggregate results, while the task shows the detailed step-by-step logs and screenshots from the AI agent's browser session.
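The fan-out described above can be sketched as follows. This is a hypothetical illustration of the relationship, with assumed field names (`id`, `tests`, `run_id`, `test_id`); it is not qtrl's internal implementation.

```python
def create_execution_tasks(test_run):
    """Sketch: starting a test run creates one execution task per test,
    each linked back to its test and to the aggregate run."""
    tasks = []
    for test in test_run["tests"]:
        tasks.append({
            "type": "execute_tests",
            "test_id": test["id"],     # link to the detailed test execution
            "run_id": test_run["id"],  # link to the aggregate test run
            "status": "pending",
        })
    return tasks

run = {"id": "run-1", "tests": [{"id": "t-1"}, {"id": "t-2"}]}
tasks = create_execution_tasks(run)
# One task per test; each carries both links, so you can navigate
# from aggregate results to the step-by-step agent logs and back.
```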
Explore
Exploration tasks tell the agent to navigate your application and learn about it. The agent browses through pages, interacts with elements, and builds up its understanding of how your application works. This knowledge is stored in Memory and helps the agent perform better on future tasks.
Exploration is especially valuable when you first set up a project or when significant changes have been made to the application.
Review test
The agent reviews an existing test case for quality, completeness, and clarity. It can suggest improvements to test steps, identify missing scenarios, or flag potential issues with the test structure.
Task statuses
As a task progresses through its lifecycle, its status updates to reflect where it stands:
- Pending: the task has been created and is waiting to be picked up by an agent.
- Running: the agent is actively working on the task.
- Completed: the task finished successfully.
- Warning: the task completed, but with issues that may need your attention.
- Failed: the task could not be completed due to an error in the tested application or an unexpected situation.
- Error: the task encountered a system-level error during execution.
- Cancelled: the task was stopped before completion, either manually or by the system.
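The statuses above can be grouped into in-progress and terminal states, which is useful to keep in mind when monitoring tasks. The enum below is a sketch; the string values are assumptions, and the terminal/non-terminal split simply restates the lifecycle described in the list.

```python
from enum import Enum

class TaskStatus(Enum):
    PENDING = "pending"      # waiting to be picked up by an agent
    RUNNING = "running"      # agent actively working
    COMPLETED = "completed"  # finished successfully
    WARNING = "warning"      # finished, but with issues to review
    FAILED = "failed"        # application error or unexpected situation
    ERROR = "error"          # system-level error during execution
    CANCELLED = "cancelled"  # stopped manually or by the system

# Every status except PENDING and RUNNING means the task is done.
TERMINAL = {TaskStatus.COMPLETED, TaskStatus.WARNING, TaskStatus.FAILED,
            TaskStatus.ERROR, TaskStatus.CANCELLED}

def is_finished(status: TaskStatus) -> bool:
    return status in TERMINAL
```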
Credit consumption
Each task consumes credits from your organization's subscription. The number of credits used depends on the complexity and duration of the task. After a task completes, its credit consumption is displayed as a badge next to the task name.
For task groups (see below), the group header shows the total credits consumed across all subtasks, and each subtask also shows its individual consumption.
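The group-header figure is simply the sum of the subtask figures. A minimal sketch, assuming each subtask record exposes a `credits` field (a hypothetical name):

```python
def group_credits(subtasks):
    """Total shown on a task group header: the sum of what
    each subtask consumed individually."""
    return sum(t["credits"] for t in subtasks)

subtasks = [
    {"name": "checkout flow", "credits": 12},
    {"name": "login flow", "credits": 5},
]
# Each subtask still displays its own badge (12 and 5);
# the group header displays the total.
```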
Task groups
Some tasks are complex enough that the system breaks them into subtasks. For example, a "Generate test cases" task might create several subtasks, each responsible for generating tests for a specific part of the application.
When a task has subtasks, you can expand it in the Tasks tab to see each subtask individually, along with its own status and logs. The parent task's status reflects the overall progress of all its subtasks.
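One way to picture the parent status reflecting its subtasks is a roll-up like the one below. The document does not specify the exact rules, so this precedence order is an assumption, shown only to illustrate the idea:

```python
def parent_status(subtask_statuses):
    """Illustrative roll-up policy (assumed, not qtrl's actual rules):
    work in flight wins, then failures, then warnings, then success."""
    if any(s == "running" for s in subtask_statuses):
        return "running"
    if any(s == "pending" for s in subtask_statuses):
        return "pending"
    if any(s in ("failed", "error") for s in subtask_statuses):
        return "failed"
    if any(s == "warning" for s in subtask_statuses):
        return "warning"
    return "completed"
```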
Monitoring tasks
The Tasks tab shows all tasks for your project, with the most recent at the top. You can filter tasks by environment, type, or status to find what you are looking for.
Each task can be expanded to see its detailed execution logs. These logs show what the agent did step by step, which is useful for understanding the results or diagnosing issues when something does not go as expected.
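The list view described above, most recent first with optional filters, can be sketched like this. The record fields (`environment`, `type`, `status`, `created_at`) are assumed names for illustration:

```python
def filter_tasks(tasks, environment=None, task_type=None, status=None):
    """Return tasks most recent first, narrowed by any combination
    of environment, type, and status filters."""
    result = [
        t for t in tasks
        if (environment is None or t["environment"] == environment)
        and (task_type is None or t["type"] == task_type)
        and (status is None or t["status"] == status)
    ]
    return sorted(result, key=lambda t: t["created_at"], reverse=True)

tasks = [
    {"type": "explore", "environment": "staging", "status": "completed", "created_at": 1},
    {"type": "execute_tests", "environment": "staging", "status": "failed", "created_at": 2},
    {"type": "explore", "environment": "production", "status": "running", "created_at": 3},
]
staging = filter_tasks(tasks, environment="staging")
# Both staging tasks, newest (created_at=2) first
```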
You can cancel a running task at any time, and a failed task can be re-triggered with the same configuration.
For guidance on writing task descriptions that produce the best results, see Best Practices: Creating Tasks.