
How to get started with test automation in 2026

By qtrl Team · Engineering

If you're starting test automation in 2026, the first question isn't which tool to pick. It's which approach fits your team.

For years, the answer was simple. You picked a framework (Selenium for a long time, then Cypress, then Playwright), hired someone who knew it, gave them a few weeks, and let them build out your first suite. The tools were stable, the approach was understood, and your main worry was whether the tests would still be passing in six months.

That playbook still exists. It still works for some teams. But it's no longer the default, and for most growing companies it's the wrong starting point.

Path 1: DIY with a framework

This is the classic approach. Pick a framework (Playwright has mostly taken over, with around 33 million weekly npm downloads and still climbing), hire or repurpose someone who can write code, and build your suite from scratch. You own everything. Your tests, your infrastructure, your CI setup, your reporting, your flaky-test triage playbook.

The upside is total control. You can test anything the framework can reach, integrate with whatever tools you already have, and build exactly the abstractions your team wants. For teams with dedicated SDETs and a strong engineering culture, it's still a respectable choice, and we covered how to think about the framework decision in Playwright vs Cypress in 2026.

The downside is that it's a lot of work. You're not just writing tests. You're building and maintaining a test platform. Setting up parallel execution. Managing browsers. Running the grid. Writing page objects. Dealing with flakiness. Keeping reports readable as your suite grows. Then doing all of that again when someone quits and the knowledge walks out the door. We laid out the real numbers in the real cost of test automation.
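To make "dealing with flakiness" concrete: one of the first pieces of platform code DIY teams end up writing is a homegrown retry helper. The sketch below is a hypothetical, minimal version in TypeScript (the names and the simulated flaky step are invented for illustration; real frameworks like Playwright ship their own retry support, which you still have to configure and tune):

```typescript
// Hypothetical sketch of the kind of helper DIY suites accumulate:
// retry a flaky step a few times before declaring it a real failure.
async function withRetries<T>(
  step: () => Promise<T>,
  attempts = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await step();
    } catch (err) {
      lastError = err; // remember the failure and try again
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // all attempts exhausted: treat it as a real failure
}

// Simulated flaky step: fails twice, then succeeds on the third call.
let calls = 0;
const flakyStep = async () => {
  calls++;
  if (calls < 3) throw new Error("element not ready");
  return "ok";
};

withRetries(flakyStep).then((result) => console.log(result, calls)); // prints "ok 3"
```

Helpers like this are easy to write and hard to own: the retry counts, the delays, and the question of which failures deserve retries at all become policy decisions someone on the team has to keep making.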

Pick this path if your team has dedicated SDETs, the engineering bandwidth to own a test platform, and the stomach to maintain it as a long-term investment.

Path 2: Record-and-replay tools

The earlier no-code generation: Selenium IDE, Katalon, and the first wave of record-and-replay SaaS. You click through your app, and the tool saves the steps and replays them later.

These tools have a genuine strength: low entry cost. A QA analyst without coding experience can get something running in an afternoon. They pioneered the idea that test authoring shouldn't require a software engineer, and credit is due for that.

The limits show up later. Brittle selectors break every time the UI changes. The tests are hard to reason about because what you recorded isn't always what you meant. And most of these tools were designed before modern single-page apps, dynamic content, and shadow DOM became the norm. For a simple public-facing page they can still work. For a complex product, they usually struggle. Fine for a small team with a simple app that needs something cheap and fast. Probably not the answer if you're building anything serious.
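Why recorded selectors are so brittle can be shown with a toy model: no real DOM, just plain TypeScript, with every name invented for the example. Recording captures the position a control happened to have; a redesign shifts positions and the recorded test quietly targets the wrong thing.

```typescript
// Toy model: a page's buttons as an ordered list, the way
// positional (nth-child-style) selectors see them.
type Button = { label: string };

const pageV1: Button[] = [{ label: "Help" }, { label: "Submit" }];
// A redesign inserts a new button before "Submit".
const pageV2: Button[] = [{ label: "Help" }, { label: "Cancel" }, { label: "Submit" }];

// Recorded-style lookup: the position captured at record time.
const byPosition = (page: Button[], index: number) => page[index]?.label;

// Intent-based lookup: find the control by what it says to the user.
const byLabel = (page: Button[], label: string) =>
  page.find((b) => b.label === label)?.label;

console.log(byPosition(pageV1, 1)); // prints "Submit" — correct when recorded
console.log(byPosition(pageV2, 1)); // prints "Cancel" — silently targets the wrong button
console.log(byLabel(pageV2, "Submit")); // prints "Submit" — survives the redesign
```

The second lookup is the direction modern tools (and agentic testing) take: anchor on intent, not on whatever DOM shape existed the day you recorded.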

Path 3: No-code AI-native platforms

The new generation. You describe what you want tested in plain language ("sign up, create a project, invite a teammate, check they can log in"), and an AI agent runs that instruction in a real browser. You review the results. You approve what's good. You refine what isn't.

This is the category that didn't exist a few years ago and is now becoming the default for teams that want to ship quality without building a test platform. The value is that the hardest parts of DIY automation (infrastructure, flakiness, selector maintenance, reporting) become somebody else's problem. You focus on what to test, not how to test it.

It's also the only category that takes advantage of what agentic testing actually enables. More on that in a moment.

The trade-off is that you're trusting the platform's judgement in some areas. That's fine if the platform is transparent about what it's doing and lets you set the guardrails. It's a problem if the platform expects you to trust it blindly. More on how to tell the difference below. This is the path most growing teams should be looking at first: automation yesterday, no infrastructure to build, and a way to keep pace with a product that won't stop shipping.

Path 4: Hybrid

Some teams land on a mix. A small framework-based layer for one or two specialty cases (performance testing, API contract tests, a weird internal tool that needs custom mocking), and a no-code agentic platform for everything else. This is increasingly common at mid-size companies that already had some framework investment and don't want to throw it away.

Hybrid works when you can name the specialty cases up front. It breaks down when the "small" framework layer quietly eats more surface area because the team keeps reaching for code out of habit. If you're going hybrid, be honest about what lives where and why, and set a rule that nothing new goes into the framework layer without a concrete reason.

Why agentic testing is changing the calculus

Here's the piece that makes 2026 different from 2021. Agentic testing isn't just a new tool category. It's a new way of writing and running tests.

In scripted automation, you describe how to do something: click this selector, wait for this element, assert this text. In agentic testing, you describe what you want to happen: "sign up as a new user, create a project, invite a teammate." The agent figures out the steps. If the UI changes next week, the agent adapts without you rewriting anything. If a step fails, the agent tries an alternative path before giving up.
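The "tries an alternative path before giving up" behavior can be sketched in miniature. This is a hypothetical illustration in plain TypeScript, not how any particular agent is implemented: the goal stays fixed, and the agent works down a list of routes to it.

```typescript
// Hypothetical sketch of "try an alternative path before giving up":
// several routes to the same goal, attempted in order.
type Step<T> = () => Promise<T>;

async function firstThatWorks<T>(paths: Step<T>[]): Promise<T> {
  const errors: unknown[] = [];
  for (const path of paths) {
    try {
      return await path(); // one route succeeding is enough
    } catch (err) {
      errors.push(err); // note why this route failed, try the next
    }
  }
  throw new Error(`all ${errors.length} paths failed`);
}

// Toy goal: "open the invite dialog". Two of three routes are broken today.
const viaKeyboardShortcut = async (): Promise<string> => { throw new Error("shortcut disabled"); };
const viaMenuItem = async (): Promise<string> => { throw new Error("menu moved"); };
const viaSettingsPage = async (): Promise<string> => "invite dialog open";

firstThatWorks([viaKeyboardShortcut, viaMenuItem, viaSettingsPage])
  .then((result) => console.log(result)); // prints "invite dialog open"
```

A scripted test encodes exactly one of those routes; when it breaks, the test fails even though the goal is still reachable. An agent holding the goal, not the route, keeps going.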

That shift changes the calculus in three places.

Speed is the obvious one. You can express a test flow in minutes instead of hours. A non-engineer can do it. Your PM can do it. The barrier to writing good tests drops, and the people closest to the product can finally own their own coverage.

Maintenance is the less obvious one, and it's actually the bigger deal. Traditional suites rot. Every UI change breaks selectors. Every refactor generates a wall of false failures. Agentic tests absorb most of that churn because they're not hard-wired to a specific DOM structure. They know the intent and work out the details fresh on each run. This matters more than usual right now, because AI coding tools are already breaking most traditional test suites.

And then there's exploration. Scripted tests only cover what you thought to script. An agent can explore your app the way a user would, find the paths you didn't anticipate, and surface problems you wouldn't have written a test for. That's new. And it only works when the agent has enough structure underneath to know what "correct" looks like.

All three of these benefits belong to the no-code AI-native category. They don't come for free with DIY frameworks, even the best ones. You can bolt some of them on with enough engineering work, but at that point you've built a product instead of a test suite.

What to look for in a modern test automation platform

If you're going the no-code agentic route, there are a few places where serious platforms separate themselves from the demos.

Start with authoring. Your team shouldn't need to learn a framework, a DSL, or any special syntax to write a test. Plain language should be enough to create one, and plain language should be enough to review one. Anyone on the team who understands what the product is supposed to do should be able to write a test. If the "no-code" tool quietly makes you drop into code for common cases, it isn't really no-code.

Infrastructure is the next one, and it's the one teams underestimate. Running tests means running browsers. Running a lot of tests means running a lot of browsers, in parallel, across environments, with secrets, retries, and reporting. Do that yourself and you've just given someone on the team a full-time second job. A modern platform gives you all of that as a managed service: no grids to babysit, no CI runners to pay for, no one quietly becoming "the CI person."
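To gauge what owning that looks like, here's an illustrative slice of a Playwright configuration for the DIY version of the same capabilities. The specific worker count, file paths, and browser mix are placeholders; the point is that every line is a decision you configure, operate, and revisit.

```typescript
// playwright.config.ts — an illustrative slice of the platform work DIY implies.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  workers: 4,                       // parallelism: you pick and pay for the capacity
  retries: 2,                       // retry flaky tests before reporting a failure
  reporter: [['html'], ['junit', { outputFile: 'results/junit.xml' }]],
  use: { trace: 'on-first-retry' }, // capture traces so failures are debuggable
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox',  use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit',   use: { ...devices['Desktop Safari'] } },
  ],
});
```

And this file is only the configuration: the CI runners that execute it, the secrets it needs, and the machines the browsers run on are all separate problems that sit on top.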

Then there's test management. Most legacy automation tools treat test cases, runs, plans, and reporting as separate problems. You end up with tests in one place, plans in a spreadsheet, and results scattered between a CI dashboard and a Slack channel. A modern platform brings cases, plans, runs, and reports into one system with full traceability from requirement to result. This sounds boring. It's the difference between "we think we tested that" and "here's the audit trail." We've written more about why it still matters in test management is not dead.

Compliance is getting urgent. If you're in the EU or sell to anyone who is, the EU AI Act comes into full force on August 2, 2026. It requires traceability, documentation, and proof that your AI systems are tested and monitored. If your platform doesn't produce audit trails, doesn't version its decisions, and doesn't let you export compliance evidence, you'll be doing that work by hand. Same goes for SOC 2, ISO 27001, and anything in a regulated industry.

Last one, and it matters more than the rest: governance. The best no-code agentic platforms don't ask you to trust them blindly. They let you review what the agent did before anything is official. They let you set autonomy levels per project or per team. They show you why the agent made each decision, not just the end result. If a platform's answer to "how do I know this test is right?" is "trust the AI," walk away.

How to choose

Short version. Ask yourself four questions.

  1. How much engineering bandwidth can you dedicate to the test platform itself? If the answer is "close to zero," don't pick DIY. You'll start, stall, and give up. Pick a no-code AI-native platform instead.
  2. How fast does your product change? If you ship every week or more, a traditional framework suite will fight you constantly. Agentic testing that absorbs UI churn is the only thing that'll keep up.
  3. Do you have compliance requirements? If yes, test management and audit trails are non-negotiable. Tools that only automate execution without managing the surrounding artifacts won't clear the bar.
  4. Does the person writing the tests understand the product better than they understand code? If yes, no-code is a strict upgrade. Let product and QA write tests. Let engineering focus on engineering.

Most growing teams will answer "low bandwidth, fast product, some compliance, yes" to these. For those teams, the answer is clear: no-code AI-native, embedded in a test management system, with real governance.

Start where your team actually is

The best test automation approach is the one your team will still be using in six months, not the one that sounded most impressive in the kickoff meeting. Teams that start small with a tool matching their reality end up with more tests, better coverage, and less burnout than teams that start big with a platform that needs dedicated care and feeding.

If you're just starting, pick the approach that gets your first test running this week, your first full suite running this quarter, and still works when you've shipped twenty more features. That's almost never a DIY framework build. It's usually the tool that lets you write tests in plain language, run them in real browsers, manage them in the same place you plan releases, and hand your auditors a clean report when they ask.


If that sounds like what you're looking for, it's the shape qtrl is built around: no-code test authoring, managed infrastructure, test management in the same system, and governance controls for when you're letting AI agents do real work on your app. Start free when you're ready.

Have more questions about AI testing and QA? Check out our FAQ