Two Approaches for AI Test Automation in Jira, and Why You Probably Need Both


We keep talking about “AI for test automation” as if it were one thing. It’s not. Two fundamentally different approaches are available and ready for production: agentic AI test execution and AI-assisted scripting.

Every day I see QA teams struggling to choose between them, when the real question isn't which one to adopt, but how to combine them intelligently in your Jira and Xray workflows.

With AI test case generators, AI test script generators, and agentic AI test execution now integrated into Jira, understanding these two approaches matters more than ever. This article explains what they are and how they complement each other, so your team can quickly reap the benefits of AI test automation.

The Two Approaches of AI Test Automation: A Quick Definition

Agentic AI Test Execution

No scripts. No locators. No code.

You provide a manual test case written in natural language (or in Gherkin), and a visual AI test agent takes over. Just like a human QA tester, it looks at the screen, reads what is displayed, clicks, types, scrolls, navigates, and checks the results. It plans its actions step by step and produces a detailed execution report with screenshots and a verdict for each step.
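To make that concrete, here is the kind of input such an agent works from: an ordinary Gherkin test case with nothing behind the steps — no selectors, no step definitions. (The login flow and wording below are invented for illustration.)

```gherkin
Feature: User login
  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter "jane.doe@example.com" in the email field
    And I enter a valid password
    And I click the "Log in" button
    Then I should see the account dashboard
    And the header should display a welcome message
```

The agent interprets each step visually at run time, which is exactly why no locators or code need to exist for this test.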

The output is a test result synced back to Xray, not code. The primary users are functional QA engineers and test analysts who design and own the test cases and, until now, had to either wait for an automation engineer to script their tests or run them manually.

AI-Assisted Scripting

Think Copilot-style code generation applied to test automation frameworks like Playwright, Cypress, or Selenium. The AI suggests locators, generates step definitions from your Gherkin scenarios, auto-heals broken selectors after UI changes, and accelerates script maintenance.

At its core, this is still scripting, but with a powerful co-pilot in the passenger seat. The output is code: maintainable, version-controlled, CI/CD-ready test scripts. The primary users are automation engineers who know how to write and maintain test code and want to do it faster.
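The "auto-healing" idea is worth pausing on, because it is the part of AI-assisted scripting that saves the most maintenance time. The sketch below shows the underlying mechanism in plain TypeScript: keep a ranked list of candidate selectors for one logical element and fall back down the list when the UI changes. The `Page` interface and the ranking are assumptions for illustration, not any specific vendor's API.

```typescript
// Illustrative sketch of the "self-healing locator" idea behind AI-assisted
// scripting. The Page interface below is a stand-in, not a real framework API.

interface Page {
  // Returns true if the selector matches an element on the current page.
  exists(selector: string): boolean;
}

// Candidate selectors for one logical element, ordered from most to least
// preferred. An AI assistant would generate and re-rank these after UI changes.
const checkoutButtonCandidates = [
  "[data-testid='checkout-button']", // stable test id (preferred)
  "#checkout-btn",                   // legacy id, may disappear
  "button:has-text('Checkout')",     // text-based fallback
];

// Try each candidate in order and return the first that still matches,
// so a renamed id no longer breaks the whole test run.
function healLocator(page: Page, candidates: string[]): string {
  for (const selector of candidates) {
    if (page.exists(selector)) return selector;
  }
  throw new Error(`No candidate selector matched: ${candidates.join(", ")}`);
}

// Fake page where a UI refactor removed both the test id and the legacy id.
const pageAfterRefactor: Page = {
  exists: (s) => s === "button:has-text('Checkout')",
};

console.log(healLocator(pageAfterRefactor, checkoutButtonCandidates));
// → button:has-text('Checkout')
```

Real tools do this with far richer signals (DOM structure, accessibility attributes, visual similarity), but the contract is the same: the script survives a selector change without a human editing it.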

What makes them fundamentally different

These two approaches solve different problems, serve different user profiles, and fit different moments in the test lifecycle.


The key insight: the boundary between the two approaches is less about the nature of the tests than about their role in the testing cycle:

  • In-sprint tests that are frequently updated because the UI or feature is still evolving? Those belong with agentic AI test execution.
  • Stable regression suites that run at every build and need to be fast in CI/CD? Those belong with AI-assisted scripting.

In Practice: A Natural Sequence in Jira/Xray

In a recent article in this group, I described how Rovo, Xray, and Lynqa can be chained end-to-end within Jira to cover a full AI-powered testing workflow from User Story to automated execution. Now, I want to go one level deeper and show how the two approaches of AI test automation (described above) flow naturally into each other over a development cycle.

Here is a concrete example using an e-commerce application.

Phase 0: AI-Powered Test Design

A QA team is building an e-commerce application. At the start of the sprint, the functional QA uses Xray’s AI Test Case Generation to draft Gherkin scenarios from the user stories covering the checkout flow: add to cart, enter address, payment, and confirmation. The AI suggests test titles and steps; the QA reviews, adjusts, and publishes. The test repository is populated in minutes rather than hours.
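One of those drafted scenarios might look like the following (illustrative only — product names and step wording are invented, not actual generator output):

```gherkin
Feature: Checkout
  Scenario: Guest completes a purchase with card payment
    Given my cart contains 1 "Wireless Mouse"
    When I proceed to checkout
    And I enter a valid shipping address
    And I pay with a valid credit card
    Then I should see the order confirmation page
    And I should receive an order number
```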

Phase 1: In-sprint Agentic AI Test Execution

Rather than waiting for an automation engineer to script those scenarios, Lynqa executes them directly from Xray with no modification required. Lynqa produces a step-by-step execution report with screenshots and a pass/fail verdict for each step, ready for the sprint review. 

This use of an AI test agent on the GUI speeds up test cycles, helps improve test scenarios through AI feedback, and relieves QA testers of repetitive execution during the sprint, allowing them to focus on higher-value tasks. The agent is always available and codeless, and it gives functional QA full ownership of their test execution. With this agentic approach, the entire test suite can also run in parallel.

Phase 2: Regression Suite Automation with AI-Assisted Script Generation

After several sprints, the checkout flow has stabilized: the initial scenarios are consistently validated across Lynqa execution cycles and need only minimal adjustments. Crucially, the functional QA team must curate the set of tests: not every test used for in-sprint validation (Phase 1) belongs in the regression suite. To keep test volume manageable, only specific, high-value tests are selected for automation.

This phase is the ideal time to leverage Xray's AI Automated Script Generation capability. The automation engineer uses this feature within Jira to convert the battle-tested Xray scenarios directly into Playwright scripts. Once added to the regression suite, these scripts run automatically with every build, eliminating the need for further manual execution by the team in regression test cycles.
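For intuition about what such a conversion produces, the toy transformation below maps a scenario's steps onto a Playwright test skeleton. This is a plain string transformation written for illustration — it is not Xray's actual generator, and a real tool would also fill in the locators and assertions rather than leaving TODOs.

```typescript
// Toy sketch: turn the steps of a stable Gherkin scenario into a Playwright
// test skeleton. Scenario data and output shape are illustrative assumptions.

const scenario = {
  name: "Guest completes a purchase with card payment",
  steps: [
    'Given my cart contains 1 "Wireless Mouse"',
    "When I proceed to checkout",
    "Then I should see the order confirmation page",
  ],
};

function toPlaywrightSkeleton(s: { name: string; steps: string[] }): string {
  // Each Gherkin step becomes a commented slot for page actions/assertions.
  const body = s.steps
    .map((step) => `  // ${step}\n  // TODO: implement with page.* actions and expect() assertions`)
    .join("\n");
  return [
    "import { test, expect } from '@playwright/test';",
    "",
    `test('${s.name}', async ({ page }) => {`,
    body,
    "});",
  ].join("\n");
}

console.log(toPlaywrightSkeleton(scenario));
```

The point of the sketch is the division of labor: the Gherkin scenario remains the source of truth in Xray, while the generated script is what actually runs in CI/CD at every build.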

What this means for your team

For functional QAs and test analysts, an AI execution agent like Lynqa allows direct automated execution of manual test suites. They can run their own Xray test cases automatically, without needing to understand Playwright or maintain locators. Their test design work becomes directly executable.

For automation engineers, AI-assisted scripting removes the most frustrating part of the job: maintenance. Less time fixing broken selectors after every UI change, more time on architecture and the quality of the regression suite itself.

A few questions to help you think about where each approach fits in your team:

  • Which tests get updated most frequently? These are the natural candidates for an AI execution agent.
  • Which features are stable and need systematic regression at every build? These are candidates for AI-assisted scripting.
  • Does your team have dedicated automation engineers with the capacity to maintain scripts? If not, an AI execution agent may cover the bulk of your execution needs and unblock your QA cycles considerably.

Conclusion

The question isn't "Agentic AI test execution or AI-assisted scripting?" It's about how to combine them so each one serves its purpose at the right moment in the test lifecycle.

What makes this particularly relevant right now is that Xray is building the design and scripting layers natively: AI Test Case Generation is already available, and AI Automated Script Generation is on its way. The execution layer in between is where Lynqa fits: no scripts, no locators, your existing Xray test cases running as-is against the live UI, with full evidence synced back to Xray.

Have you experimented with chaining AI tools inside Jira for your testing workflow? I'd love to hear how your teams are approaching this.
