Agentic AI Test Execution in Xray

How does the agentic AI execution of manual tests and Gherkin-style scenarios speed up test cycles in Xray?

On the surface, "automatic execution of manual tests" sounds like an oxymoron. The industry has always operated on a binary: manual means a human tester; automatic means a script, a framework, and an automation engineer. For decades, "manual" has been synonymous with human intervention.

But agentic AI is turning this contradiction into a practical capability. At the heart of this shift is Lynqa for Xray, an AI agent designed to run manual tests directly on the GUI. It understands test procedures, navigates the application interface, verifies results, and delivers a detailed execution report without a single line of automation code.

Disclosure: I am co-founder of Smartesting, the company behind Lynqa.

[Image: Lynqa agentic AI test execution]

If you are wondering how agentic AI test execution compares to AI-assisted scripting, and how to decide which one your team actually needs, I explored both approaches in detail in Two Approaches for AI Test Automation in Jira, and Why You Probably Need Both. The short answer: they solve different problems and work best when combined.

The Sprint Bottleneck: Why "Manual" Needs an AI Boost

Every QA team using Xray knows the pattern: as the sprint nears completion, the open test execution fills up with untouched manual tests, testers are stretched thin, and feedback to developers slows to a crawl. This is the "Manual Testing Debt."

The usual responses, hiring more testers or rushing to automate new features, rarely work within a single sprint. Traditional test automation simply takes too long to develop alongside the feature it's meant to validate.

This is where GUI AI agents change the game. 

What Are GUI AI Agents?

GUI AI agents interact with software by visually interpreting screen interfaces and operating a virtual mouse and keyboard. Instead of relying on backend APIs, these AI agents execute complex digital workflows exactly as a human user would:

  1. GUI-Based Actions: The agent interacts with the graphical interface exactly as a human would, opening browsers, scrolling through menus, filling in fields, and navigating complex clients such as ERP systems.
  2. Visual Perception: The agent monitors progress by analyzing the screen at each step, identifying the areas affected by an action or verification.

The Feedback Loop: Thought and Action

[Image: the GUI AI agent feedback loop]

The agent operates in a continuous feedback loop that combines reasoning with action:

  • Interpretation: It reads the test steps as written in your Xray test case and builds a logical action plan.
  • Perception: It analyzes the current screen to decide which interaction (click, type, drag) to perform.
  • Monitoring: After every action, it checks the UI state to verify the expected result.
  • Communication: If the agent encounters an ambiguity or a high-risk step, it pauses to ask the user for clarification before proceeding.

This means the agent doesn't blindly follow a script; it reacts to the application's actual behavior.
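The loop above can be sketched in a few lines of Python. This is purely illustrative: Lynqa's internals are not public, and every name here (`perceive`, `act`, `verify`, `StepResult`) is a made-up stand-in for the interpretation, perception, monitoring, and communication stages described above.

```python
# Illustrative sketch of a perceive-reason-act loop for a GUI test agent.
# All function and field names are hypothetical, not Lynqa's actual API.
from dataclasses import dataclass

@dataclass
class StepResult:
    step: str
    verdict: str      # "PASS", "FAIL", or "NEEDS_CLARIFICATION"
    rationale: str

def run_manual_test(steps, perceive, act, verify):
    """Execute natural-language test steps against a live UI.

    perceive() -> description of the current screen state
    act(step, screen) -> performs the click/type/drag for this step
    verify(step, screen) -> (bool, rationale) for the expected result
    """
    report = []
    for step in steps:
        screen = perceive()                    # Perception: read the current screen
        if "?" in step:                        # Communication: ambiguous step, pause
            report.append(StepResult(step, "NEEDS_CLARIFICATION",
                                     "Step is ambiguous; asking the tester."))
            break
        act(step, screen)                      # GUI-based action
        screen = perceive()                    # Monitoring: re-check the UI state
        ok, why = verify(step, screen)
        report.append(StepResult(step, "PASS" if ok else "FAIL", why))
        if not ok:
            break                              # Stop on first failure, as a tester would
    return report
```

The key property this sketch captures is that the next action is chosen from the *observed* screen, not from a pre-recorded locator, which is what lets the loop react to the application's actual behavior.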

Lynqa: Agentic AI Inside Your Xray Test Management Tool

While general-purpose AI agents are appearing in consumer chatbots, Lynqa is purpose-built for the testing ecosystem. By integrating directly into Xray, it transforms how teams handle manual workloads within a sprint.


How it speeds up the cycle:

  • Zero Scripting Overhead: If you have a written test case or a clearly defined User Story, the agent can execute it immediately, without waiting for an automation engineer. This enables “Day 1” testing of new features.
  • Adaptive Execution: When a button moves, a CSS class is renamed, or a color shifts, traditional automation breaks. The AI agent sees the “Submit” button regardless of underlying code changes, removing the maintenance tax that halts progress.
  • Analysis-Ready Reporting: The agent doesn't just return “Pass” or “Fail.” Each step verdict, screenshot, and failure rationale is written back directly into the Xray test execution record in Jira, so the evidence lives alongside the test case with no manual copy-paste required.
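To make the reporting point concrete, here is a sketch of what assembling step-level verdicts into an importable result could look like. The payload shape loosely follows Xray's JSON result-import format, but field names and endpoints differ between Xray Cloud and Data Center, so treat this as an assumption to verify against the Xray documentation rather than a definitive integration.

```python
# Hypothetical sketch: packaging per-step agent verdicts into an
# Xray-style JSON result payload. Check the Xray docs for the exact
# schema of your deployment before relying on these field names.
import json

def build_xray_result(test_key, step_results):
    """step_results: list of (status, actual_result) tuples, one per step."""
    overall = "PASSED" if all(s == "PASSED" for s, _ in step_results) else "FAILED"
    return {
        "tests": [{
            "testKey": test_key,
            "status": overall,
            "steps": [
                {"status": status, "actualResult": actual}
                for status, actual in step_results
            ],
        }]
    }

payload = build_xray_result("DEMO-42", [
    ("PASSED", "Login form displayed"),
    ("FAILED", "Submit button disabled; expected enabled"),
])
print(json.dumps(payload, indent=2))
```

The point is that each step carries its own verdict and rationale, so a failed execution arrives in Jira already explained instead of as a bare red cross.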

To see this in action end-to-end, from writing a User Story all the way through to automated execution via Lynqa, I walked through a complete example connecting Rovo, Xray, and Lynqa inside a single Jira environment in From User Story to Automated Test Execution: A Virtuous AI Cycle in Jira.

What This Means for QA Teams

For QA teams still running manual test cycles, agentic AI does not change what you test. It changes who, or rather what, executes your tests. The practical benefits are immediate and do not require any prior investment in automation infrastructure.

You can start running automated executions on day one. If your test case is written in natural language or Gherkin, Lynqa can execute it against the live application immediately: no scripts, no locators, no code changes, and no waiting for an automation engineer to implement the test before it can run. This is particularly valuable during a sprint, when new features are being validated in parallel with development and the test suite is still evolving.
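For instance, a Gherkin-style scenario like the following (an invented e-commerce example, not from the article) is already executable input for the agent, with no step definitions or glue code behind it:

```gherkin
Feature: Checkout
  Scenario: Apply a discount code at checkout
    Given I am logged in and have one item in my cart
    When I open the cart page
    And I enter the discount code "WELCOME10"
    And I click "Apply"
    Then the order total is reduced by 10%
```

In a traditional framework, every `Given`/`When`/`Then` line would need a matching step implementation; here the scenario text itself is the test.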

The agent handles UI changes so you don’t have to. Because the agent perceives the interface visually rather than relying on the underlying element identifiers that traditional automation scripts depend on, it adapts naturally when a button is repositioned, a label is renamed, or a page layout shifts. This removes the maintenance burden that makes traditional automation so expensive to sustain alongside an actively developed product.

Execution results are explainable, not just binary. Rather than returning a simple pass or fail, the agent produces a step-by-step execution report with screenshots, a verdict for each step, and a written rationale explaining why a step failed or was only partially validated. With transparent execution logs and a detailed analysis behind each verdict, the QA tester stays in control.

Taken together, these properties reframe what is possible for a functional QA team. The sprint no longer ends with a backlog of manual checks that testers race through before the release window closes. The entire test suite can run in parallel, in Xray, using the test cases the team already wrote, and the results feed directly back into the same tool. The testers are not replaced; they are freed from repetitive execution to focus on test design, exploratory work, and the higher-value judgment calls that only a human can make.

Have you tried running your Xray test cases with an AI execution agent? I would love to hear how your team is approaching this in the comments.
