How does the agentic AI execution of manual tests and Gherkin-style scenarios speed up test cycles in Xray?
On the surface, "automatic execution of manual tests" sounds like an oxymoron. The industry has always operated on a binary: manual means a human tester; automatic means a script, a framework, and an automation engineer. For decades, "manual" has been synonymous with human intervention.
But agentic AI is turning this contradiction into a practical capability. At the heart of this shift is Lynqa for Xray, an AI agent designed to run manual tests directly on the GUI. It understands test procedures, navigates the application interface, verifies results, and delivers a detailed execution report without a single line of automation code.
Disclosure: I am co-founder of Smartesting, the company behind Lynqa.
If you are wondering how agentic AI test execution compares to AI-assisted scripting, and how to decide which one your team actually needs, I explored both approaches in detail in Two Approaches for AI Test Automation in Jira, and Why You Probably Need Both. The short answer: they solve different problems and work best when combined.
Every QA team using Xray knows the pattern: as the sprint nears completion, the open test execution fills up with untouched manual tests, testers are stretched thin, and feedback to developers slows to a crawl. This is the "Manual Testing Debt."
The usual responses (hiring more testers or rushing to automate new features) rarely work within a single sprint. Traditional test automation simply takes too long to develop alongside the feature it's meant to validate.
This is where GUI AI agents change the game.
GUI AI agents interact with software by visually interpreting screen interfaces and operating a virtual mouse and keyboard. Instead of relying on backend APIs, these AI agents execute complex digital workflows exactly as a human user would.
The agent operates in a continuous feedback loop that combines reasoning with action: it observes the current state of the screen, decides on the next action, performs it, and checks the outcome before moving on.
This means the agent doesn't blindly follow a script; it reacts to the application's actual behavior.
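The loop described above can be sketched in a few lines. This is a minimal illustration of the perceive-reason-act pattern behind GUI agents in general; the names `StepResult`, `observe`, `decide`, and `act` are hypothetical and do not represent Lynqa's actual API.

```python
# Minimal sketch of a perceive-reason-act loop for a GUI test agent.
# All names here (StepResult, observe, decide, act) are illustrative,
# not Lynqa's actual API.
from dataclasses import dataclass

@dataclass
class StepResult:
    action: str   # what the agent decided to do next
    verdict: str  # "pass", "fail", or "continue"

def run_step(instruction: str, observe, decide, act, max_iterations=10):
    """Execute one natural-language test step as a feedback loop."""
    for _ in range(max_iterations):
        screen = observe()                      # perceive: capture the current UI state
        decision = decide(instruction, screen)  # reason: choose an action or a verdict
        if decision.verdict in ("pass", "fail"):
            return decision                     # the step is resolved
        act(decision.action)                    # act: click, type, scroll, ...
    return StepResult(action="none", verdict="fail")  # give up after too many tries
```

The key point is that the verdict comes from observing the application's actual behavior after each action, not from replaying a pre-recorded sequence.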
While general-purpose AI agents are appearing in consumer chatbots, Lynqa is purpose-built for the testing ecosystem. By integrating directly into Xray, it transforms how teams handle manual workloads within a sprint.
To see this in action end-to-end, from writing a User Story all the way through to automated execution via Lynqa, I walked through a complete example connecting Rovo, Xray, and Lynqa inside a single Jira environment in From User Story to Automated Test Execution: A Virtuous AI Cycle in Jira.

So how, concretely, does this speed up the test cycle?
For QA teams still running manual test cycles, agentic AI does not change what you test. It changes who, or rather what, executes the tests. The practical benefits are immediate and require no prior investment in automation infrastructure.
You can start running automated executions on day one. If your test case is written in natural language or Gherkin, Lynqa can execute it against the live application immediately: no scripts, no locators, no code changes, and no waiting for an automation engineer to implement the test before it can run. This is particularly valuable during a sprint, when new features are being validated in parallel with development and the test suite is still evolving.
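To make this concrete, here is the kind of Gherkin-style scenario a team might already keep in Xray, together with a small helper that splits it into the individual steps an agent would execute. The scenario content and the `extract_steps` helper are illustrative examples, not part of Lynqa's actual interface.

```python
# A Gherkin-style test case as a team might store it in Xray.
# The scenario and helper below are illustrative, not Lynqa's API.
scenario = """
Scenario: Customer applies a discount code at checkout
  Given I am on the checkout page with one item in the cart
  When I enter the discount code "WELCOME10"
  And I click "Apply"
  Then the order total is reduced by 10%
"""

def extract_steps(gherkin: str) -> list[str]:
    """Split a scenario into the individual steps an agent would run."""
    keywords = ("Given", "When", "Then", "And", "But")
    return [line.strip() for line in gherkin.splitlines()
            if line.strip().startswith(keywords)]
```

Because each step is plain natural language, nothing needs to be translated into locators or script code before the agent can attempt it against the live application.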
The agent handles UI changes so you don’t have to. Because the agent perceives the interface visually rather than relying on the underlying element identifiers that traditional automation scripts depend on, it adapts naturally when a button is repositioned, a label is renamed, or a page layout shifts. This removes the maintenance burden that makes traditional automation so expensive to sustain alongside an actively developed product.
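The contrast with locator-based automation can be shown in a toy example. A traditional script hard-codes an element id that breaks when developers refactor; an agent that targets elements by what a user actually sees keeps working. The `find_by_label` helper below is a simplified stand-in for that visual resolution, assuming elements are represented as dictionaries; it is not how Lynqa is implemented.

```python
# Simplified stand-in for visual target resolution: find an element by
# its visible label, not by a fixed id. Hypothetical, not Lynqa's code.
def find_by_label(elements: list[dict], label: str):
    """Resolve a target by its visible text, as a human (or visual agent)
    would, rather than by an internal id that breaks on refactoring."""
    for element in elements:
        if element.get("text", "").strip().lower() == label.lower():
            return element
    return None
```

A script bound to `id="btn-submit-v1"` fails the moment that id changes; resolving by the on-screen label "Log in" is unaffected by the same refactoring.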
Execution results are explainable, not just binary. Rather than returning a simple pass or fail, the agent produces a step-by-step execution report with screenshots, a verdict for each step, and a written rationale explaining why a step failed or was only partially validated. With transparent execution logs and detailed analysis behind each verdict, the QA tester stays in control.
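A report like that amounts to a richer data structure than a single boolean. The sketch below shows one way to model per-step verdicts with rationales and derive an overall result; the field names are illustrative assumptions, not Lynqa's actual report schema.

```python
# Sketch of an explainable execution report: each step carries its own
# verdict and rationale instead of a single pass/fail bit.
# Field names are illustrative, not Lynqa's actual report schema.
from dataclasses import dataclass, field

@dataclass
class StepReport:
    instruction: str
    verdict: str          # "pass", "fail", or "partial"
    rationale: str        # why the agent reached this verdict
    screenshot: str = ""  # path to the captured screen

@dataclass
class ExecutionReport:
    test_key: str                       # e.g. an Xray test issue key
    steps: list[StepReport] = field(default_factory=list)

    @property
    def verdict(self) -> str:
        """Overall result: any failing or partial step fails the test."""
        return "pass" if all(s.verdict == "pass" for s in self.steps) else "fail"
```

The overall verdict is conservative by design: a single non-passing step fails the test, but the per-step rationales let the tester judge whether a "partial" is a real regression or an acceptable deviation.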
Taken together, these properties reframe what is possible for a functional QA team. The sprint no longer ends with a backlog of manual checks that testers race through before the release window closes. The entire test suite can run in parallel, in Xray, using the test cases the team already wrote, and the results feed directly back into the same tool. The testers are not replaced; they are freed from repetitive execution to focus on test design, exploratory work, and the higher-value judgment calls that only a human can make.
Have you tried running your Xray test cases with an AI execution agent? I would love to hear how your team is approaching this in the comments.
Bruno Legeard, Lynqa