Most teams don’t suddenly “become slow.”
More often, delivery performance changes quietly. One week QA takes a little longer. Then a few stories miss the release window. Then the team starts hearing the classic sentence:
“It feels like everything is stuck in testing.”
“Feels like” is a dangerous metric.
It creates opinions, side quests, and occasionally a very emotional Jira board review.
A better question is:
Which week changed — and where exactly did the workflow slow down?
That’s where Trend Gadget helps. It shows how a selected time metric changes over time, so teams can track metrics such as Cycle Time, Code Review time, QA / Validation time, or any other configured time metric directly on a Jira dashboard.
Let’s turn it into a simple 15-minute bottleneck check for Engineering Managers.
Here’s a very normal delivery mystery.
The team finishes development on several stories. Code review looks okay. Nothing is officially blocked. But by the end of the week, too many work items are still sitting in QA.
Developers say:
“Everything was ready for testing.”
QA says:
“Everything arrived at once.”
Product says:
“So… are we shipping or not?”
And the Engineering Manager needs to understand whether QA really became slower, or whether the team created a sprint-end traffic jam and politely called it “validation.”
Trend Gadget is useful here because it helps teams see not only that something got worse, but also which work items caused the change.
Before opening the chart, make sure the team has the right metric to trend.
Trend Gadget does not magically guess where the bottleneck is. It visualizes a selected time metric over time. So first, define the metric that answers your delivery question.
“How much time do work items spend in QA / Validation?”
That gives you a clear metric to track in the trend.
Create or select a time metric such as:
QA / Validation time
This metric should measure how long work items stay in the QA part of the workflow.
For example:
Start status: QA
End status: Done
Or, depending on the workflow:
Start status: Ready for QA
End status: QA Passed / Done
The exact statuses depend on the team’s Jira workflow. The important thing is that the metric captures the time between the moment work becomes ready for testing and the moment validation is completed.
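To make the definition concrete, here is a minimal Python sketch (not the gadget's implementation) of what such a time metric measures. The transition records and their shape are invented for illustration; in practice this data comes from a Jira issue changelog.

```python
from datetime import datetime

# Hypothetical status-transition records for one work item
# (timestamp, from-status, to-status) — the shape is an assumption.
transitions = [
    ("2024-05-06T09:00", "To Do", "In Progress"),
    ("2024-05-08T14:00", "In Progress", "QA"),
    ("2024-05-10T17:00", "QA", "Done"),
]

def time_in_span(transitions, start_status, end_status):
    """Hours between entering `start_status` and entering `end_status`."""
    entered = finished = None
    for stamp, _from, to in transitions:
        when = datetime.fromisoformat(stamp)
        if to == start_status and entered is None:
            entered = when
        elif to == end_status and entered is not None:
            finished = when
            break
    if entered is None or finished is None:
        return None  # the item never completed this span
    return (finished - entered).total_seconds() / 3600

qa_hours = time_in_span(transitions, "QA", "Done")
print(qa_hours)  # 51.0 — from entering QA to reaching Done
```

Swapping in "Ready for QA" and "QA Passed" as the start and end statuses changes nothing else; the metric is just the elapsed time between two workflow events.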
Once the metric is ready, add it to Trend Gadget.
Use Trend Gadget with the QA / Validation time metric you just defined.
The goal is not to build the world’s most perfect dashboard.
The goal is to answer one question before the next meeting becomes a courtroom drama.
Once the gadget is configured, check the trend.
You’re looking for a visible change, such as a week where QA / Validation time jumps well above its usual level.
This is the first useful signal.
If QA time was stable for several weeks and then jumped in one specific week, the issue probably isn’t “QA is always slow.”
It is more likely:
“Something about that week changed.”
That is a much better problem to investigate.
It is smaller. It is kinder. It is also harder to argue with.
A trend line is good.
A trend line with the actual work items behind it is better.
Trend Gadget is designed to help teams understand bottlenecks more clearly and identify which work items contributed to the change.
So when one week looks unusually high, open the details behind that point.
Now the conversation moves from:
“QA felt overloaded.”
to:
“These specific work items spent more time in QA / Validation during that week.”
That shift matters.
Because a bottleneck is rarely solved by guessing harder.
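The drill-down behind a spike can be imitated in a few lines of Python. Assuming you already have per-item QA durations tagged with the week each item finished (the records below are invented), the spike week and its heaviest items fall out of a simple grouping:

```python
from collections import defaultdict

# Hypothetical (item, iso_week, qa_hours) records.
records = [
    ("PROJ-101", "2024-W19", 8),
    ("PROJ-102", "2024-W19", 6),
    ("PROJ-103", "2024-W20", 40),
    ("PROJ-104", "2024-W20", 35),
    ("PROJ-105", "2024-W20", 7),
]

by_week = defaultdict(list)
for item, week, hours in records:
    by_week[week].append((item, hours))

# The week with the highest average QA time is the point to open.
spike_week = max(by_week, key=lambda w: sum(h for _, h in by_week[w]) / len(by_week[w]))

# The work items behind that point, heaviest first.
culprits = sorted(by_week[spike_week], key=lambda ih: ih[1], reverse=True)
print(spike_week, culprits)
```

Here the average jumps in 2024-W20, and two items account for almost all of the increase — exactly the kind of answer that replaces "QA felt overloaded."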
When QA / Validation time increases, don’t immediately assume testing itself became slower.
Ask: did the testing work itself become harder, or did more work simply arrive at once?
For example, a week full of complex, risky stories will raise QA time even when testers work at their usual pace. A sprint-end batch of items landing in QA together will raise it too, for a completely different reason.
This distinction is the whole point.
If QA time increased because testing work was genuinely more complex, the team may need better planning or risk flags.
If QA time increased because everything arrived at once, the problem is batching.
And batching is the workflow equivalent of everyone trying to board the same elevator with a bicycle.
QA / Validation time is one part of the delivery journey.
The next question is:
“Did this QA slowdown affect overall delivery performance?”
If QA time increased and Cycle Time increased in the same week, you probably found a delivery bottleneck.
If QA time increased but Cycle Time stayed stable, the team may have absorbed the delay somewhere else.
That does not mean you ignore it. It means you understand the impact correctly.
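That cross-check can be sketched as a comparison of two weekly series. The numbers and the 1.5x "jump" threshold below are invented for illustration; the logic mirrors the reasoning above:

```python
# Hypothetical weekly averages, in hours.
qa_time =    {"W18": 10, "W19": 11, "W20": 27}
cycle_time = {"W18": 60, "W19": 62, "W20": 100}

def jumped(series, week, threshold=1.5):
    """True if `week` is at least `threshold`x the average of the other weeks."""
    others = [v for w, v in series.items() if w != week]
    return series[week] >= threshold * (sum(others) / len(others))

week = "W20"
if jumped(qa_time, week) and jumped(cycle_time, week):
    verdict = "likely a delivery bottleneck"
elif jumped(qa_time, week):
    verdict = "delay absorbed elsewhere; inspect where"
else:
    verdict = "no QA-driven change this week"
print(week, verdict)
```

In this toy data both metrics jump in W20, so the QA slowdown really did reach delivery.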
A useful bottleneck check does not just say:
“This metric is high.”
It says:
“This metric changed, and here is what it did to delivery.”
That is the difference between dashboard decoration and actual operational insight.
Use this quick diagnosis in retrospectives or weekly delivery reviews.
What you see: QA / Validation time spikes in the same week that many work items enter QA together.
Likely cause: the team pushed too much work into QA at once.
Fix ideas: spread handoffs to QA across the sprint instead of saving them for the end, and limit how much work can sit in the QA column at once.
This is not “QA is slow.”
This is “the workflow created a queue and then acted surprised.”
What you see: QA / Validation time rises because work items bounce back from QA into development and return later.
Likely cause: the team is discovering quality or requirement issues too late.
Fix ideas: agree on acceptance criteria before development starts, so the questions surface before the answer is built.
This is not a testing problem.
This is a “we found the question after building the answer” problem.
What you see: one week looks unusually high, and the details show a single work item behind most of the increase.
Likely cause: a large, complex, or risky work item skewed the weekly metric.
Fix ideas: split oversized stories earlier, or flag large and risky items during planning so the spike is expected rather than alarming.
Sometimes the workflow is not broken.
Sometimes one item just walked in wearing a backpack full of complexity.
The point of a 15-minute bottleneck check is not to redesign the entire workflow before lunch.
Pick one improvement. For example: spread handoffs to QA across the sprint instead of batching them at the end.
Then keep Trend Gadget on the Jira dashboard and check the same metric next week.
The question becomes simple:
“Did QA / Validation time improve after the change?”
That is how process improvement becomes measurable instead of motivational.
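Measurable can be as simple as a before/after comparison of the same weekly averages. A toy sketch with invented numbers:

```python
# Hypothetical weekly QA / Validation averages (hours),
# before and after the improvement was introduced.
before = [10, 11, 27]  # weeks before the change
after = [12, 9]        # weeks after the change

def mean(xs):
    return sum(xs) / len(xs)

improved = mean(after) < mean(before)
print(f"Average before: {mean(before):.1f}h, after: {mean(after):.1f}h, improved: {improved}")
```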
Trend Gadget is great for spotting process changes.
It is not great as a leaderboard of who “made the metric bad.”
If QA / Validation time increased, the healthy reaction is not:
“Who caused this?”
It is:
“What changed in the system?”
Metrics should work like a smoke detector.
If it goes off, you don’t blame the smoke detector. You find the fire.
Here is the full check:
1. Define the time metric that answers your delivery question (for example, QA / Validation time).
2. Add it to Trend Gadget and find the week where the trend changed.
3. Open the work items behind that point.
4. Decide whether complexity or batching drove the change.
5. Compare it with Cycle Time to understand the delivery impact.
6. Pick one improvement and check the same metric next week.
That’s it.
No spreadsheet archaeology.
No “I think maybe.”
No meeting where everyone debates feelings with Jira open in another tab.
Just a focused check that answers:
“Where did delivery performance change, and what should we inspect first?”
A delivery slowdown is much easier to fix when the team can point to the moment it changed.
With Trend Gadget, an Engineering Manager can quickly see whether QA / Validation time increased, identify the week where the trend shifted, and open the work items that explain the change.
The real value is not the chart itself.
The real value is the conversation it creates:
“QA slowed down this week because work arrived in one batch, two items bounced back, and one large story waited for clarification.”
That is specific enough to fix.
And in workflow improvement, specific beats dramatic every time.
Try Time Metrics Tracker by SaaSJet on the Atlassian Marketplace or book a demo to see how the Trend Gadget uncovers bottlenecks in your process.
Anastasiia Maliei, SaaSJet