Cycle Time is often the first metric teams check when delivery starts to slow down.
It answers an important question:
How long does it take work to move through our process?
But once you see that Cycle Time is growing, the next question is usually harder:
Where is that time actually spent?
Is work waiting in Review?
Is QA overloaded?
Are approvals slowing everything down?
Are support tickets spending too much time in Waiting for customer?
Or is one custom workflow status quietly consuming most of the time?
That is the difference between detecting a problem and diagnosing it.
Trend tells you when.
It helps you notice that your delivery metric has changed over time. If you want to explore this part in more detail, check out our previous article about the Trend Gadget.
Status Contribution tells you where.
It helps you understand which workflow status consumed most of the tracked time and where you should investigate first.
Cycle Time gives you a signal.
If your average Cycle Time increases from 4 days to 8 days, something has changed. The team is still completing work, but it now takes longer to move from start to finish.
That is useful for standups, retrospectives, delivery reviews, and leadership dashboards.
But Cycle Time alone does not explain the shape of the delay.
Two teams can have the same Cycle Time and completely different problems.
One team may spend most of its time in In Progress.
Another team may spend most of its time in Code Review.
A support team may spend most of its time in Waiting for customer.
A service team may lose time in Approval.
The number may be the same.
The action will not be.
Cycle Time becomes less actionable when it stays at the total level.
You can see that work is slow, but you still need to understand which parts of the workflow are adding that time.
Without that breakdown, teams often jump to broad conclusions:
“We need to move faster.”
“QA is probably the problem.”
“Review feels slow.”
“Approvals take too long.”
Sometimes these assumptions are right. Sometimes they are not.
A status contribution view helps replace guesses with evidence.
The Status Contribution Chart Gadget in Time Metrics Tracker helps you see how much each workflow status contributes to the selected time metric.
Instead of looking only at the final Cycle Time value, you can see a ranked chart of statuses and understand where the tracked time goes.
For example, you may discover that a single status, such as Review or Waiting for customer, accounts for the largest share of the tracked time.
This is where the conversation changes.
Instead of saying:
“Our Cycle Time is too high.”
You can say:
“Review is consuming the largest share of our tracked time. Let’s open the work items behind it and check what caused the delay.”
That is the path from detection to diagnosis.
Cycle Time is a common example, but it is not the only metric you can analyze.
With the Status Contribution Chart Gadget, you can choose any time metric created for your workflow.
That means you can analyze status contribution for Cycle Time or for any other time metric built from the statuses in your own workflow.
This matters because not every team defines delivery in the same way.
A development team may want to understand how much time goes into Dev, Review, and QA.
A support team may care about Investigation versus Waiting for customer.
A service team may need to compare Approval, Processing, and Done.
A cross-functional team may want to measure handoffs between departments.
The metric defines which work items are included and which statuses are compared, so the chart can match the workflow you actually use.
Choose the Scope: Project or Board. Use Project for broad analysis, and Board to analyze a specific team's process.
Select the time metric that matches your question.
Use filters to narrow the analysis. You can filter by issue type, assignee, sprint, or label.
Set a date range. A short period helps understand what is happening now; a long period helps identify systemic issues.
Choose the duration format (hours, business days, calendar days) that is most convenient for your audience.
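A note on those duration formats: they are not interchangeable, and the same interval can look very different in each one. Here is a minimal sketch in Python (assuming a plain Monday-to-Friday calendar with no holidays; real tools may use a configurable work calendar) of the difference:

```python
from datetime import datetime, timedelta

def duration_formats(start: datetime, end: datetime) -> dict:
    """Express the same interval as hours, calendar days, and business days.

    Business days are counted as whole weekdays between the two dates --
    a rough Monday-to-Friday approximation with no holiday calendar.
    """
    total_hours = (end - start).total_seconds() / 3600
    calendar_days = (end - start).total_seconds() / 86400

    business_days = 0
    day = start.date()
    while day < end.date():
        if day.weekday() < 5:          # Monday=0 ... Friday=4
            business_days += 1
        day += timedelta(days=1)

    return {
        "hours": round(total_hours, 1),
        "calendar_days": round(calendar_days, 1),
        "business_days": business_days,
    }

# A Friday-afternoon to Monday-morning handoff: ~65 hours and ~2.7 calendar
# days, but only about 1 business day of elapsed working time.
print(duration_formats(datetime(2024, 5, 3, 15, 0), datetime(2024, 5, 6, 8, 0)))
```

A work item handed off on Friday afternoon and picked up on Monday morning looks dramatic in hours and calendar days but modest in business days, which is why the right format depends on your audience.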
The gadget shows a ranked list of statuses. The status with the highest contribution appears at the top.
Each row helps you quickly understand how much time a status consumed and what share of the tracked total it represents.
This gives you a simple answer to a practical question:
Which workflow status consumes the most time?
Before you investigate individual work items, you can already see where the biggest part of your tracked time is going.
A useful part of this analysis is switching between Total and Average.
Total shows the cumulative time spent in each status across all included work items.
Use it when you want to understand the biggest team-wide impact.
For example:
“Review consumed 40% of all tracked time this month.”
Average shows how long a typical work item spends in each status.
Use it when you want to understand what usually happens to a single work item.
For example:
“On average, each work item spends 2.5 days in QA.”
These two views can tell different stories.
A high Total can mean a few large items inflated the number.
A high Average can mean many items repeatedly get stuck in the same place.
That difference is important. It helps you decide whether you are looking at an outlier or a recurring workflow bottleneck.
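To make the difference concrete, here is a minimal sketch with invented numbers (the item keys, statuses, and hours below are illustrative, not real Jira data) of how a ranked Total and Average per status can be derived from per-item time-in-status figures:

```python
from collections import defaultdict

# Hypothetical time-in-status data (hours) for a handful of work items.
# Item keys, statuses, and numbers are made up for illustration.
items = {
    "ABC-101": {"In Progress": 10, "Review": 30, "QA": 8},
    "ABC-102": {"In Progress": 12, "Review": 4,  "QA": 6},
    "ABC-103": {"In Progress": 9,  "Review": 70, "QA": 7},   # stuck in Review
    "ABC-104": {"In Progress": 11, "Review": 6,  "QA": 5},
}

totals = defaultdict(float)
counts = defaultdict(int)
for statuses in items.values():
    for status, hours in statuses.items():
        totals[status] += hours
        counts[status] += 1

grand_total = sum(totals.values())

# Ranked view: highest contribution first, showing Total, Average, and share.
for status in sorted(totals, key=totals.get, reverse=True):
    average = totals[status] / counts[status]
    share = 100 * totals[status] / grand_total
    print(f"{status:12} total={totals[status]:4.0f}h  avg={average:5.1f}h  share={share:4.1f}%")
```

In this sample, Review tops the ranking, but its Total is driven largely by a single item (ABC-103). That is exactly the kind of detail the drill-down described next is meant to surface.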
In the Status Contribution Chart Gadget, you can click any status bar and open a drill-down view.
There, you can see the work items where this status was the main delay. For each item, you can review details such as the work item key, summary, time in status, and total time on the selected metric.
This helps you move from:
“QA takes too long.”
To:
“These are the exact work items that spent most of their time in QA. Let’s check whether the pattern is missing acceptance criteria, test environment delays, unclear ownership, or large batch size.”
That makes retrospectives much more specific.
You are not discussing a chart in isolation.
You are discussing the work behind the chart.
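To make "time in status" concrete: conceptually, it accumulates from the moment a work item enters a status until the moment it leaves. Here is a minimal sketch using an invented changelog; the data structure and field names are illustrative, not the gadget's or Jira's actual API:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical status history for one work item: (timestamp, status entered).
# Real data would come from the issue's changelog of status transitions.
transitions = [
    (datetime(2024, 5, 1, 9, 0),  "In Progress"),
    (datetime(2024, 5, 2, 14, 0), "Code Review"),
    (datetime(2024, 5, 6, 10, 0), "QA"),
    (datetime(2024, 5, 7, 16, 0), "Done"),
]

def time_in_status(transitions):
    """Accumulate hours spent in each status between consecutive transitions."""
    hours = defaultdict(float)
    for (entered, status), (left, _next_status) in zip(transitions, transitions[1:]):
        hours[status] += (left - entered).total_seconds() / 3600
    return dict(hours)

print(time_in_status(transitions))
# {'In Progress': 29.0, 'Code Review': 92.0, 'QA': 30.0}
```

In this invented example, Code Review accounts for by far the largest share of the item's elapsed time, and per-item figures like these are what the drill-down view lets you inspect.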
If Review is the top status, work may be waiting for available reviewers, clear ownership, or smaller pull requests.
Possible next questions: Are reviewers available when work is ready? Is review ownership clear? Are pull requests small enough to review quickly?
If QA consumes the most time, the team may need to look at test readiness, environment stability, acceptance criteria, or QA capacity.
Possible next questions: Is the test environment stable and ready? Are acceptance criteria clear before work reaches QA? Does QA have enough capacity?
If Waiting for customer, Waiting for vendor, or Waiting for approval dominates the chart, the delay may be external or built into the process by design.
That is not always a problem, but it should be visible.
Possible next questions: Is this waiting time an expected part of the process? Is it visible to the people who depend on it? Is the delay truly external, or is part of it within the team's control?
If Approval takes the largest share, the issue may be outside the delivery team’s direct control.
Possible next questions: Who owns the approval step? Is the approval genuinely outside the team's control? Who needs to see this data to act on it?
The Status Contribution Chart Gadget is especially useful when you want to answer questions like:
Which workflow status consumes the most of our tracked time?
Is the top status an outlier or a recurring pattern?
Where should we investigate first?
It is also useful for dashboard-based conversations because everyone can look at the same view: the selected scope, metric, filters, date range, and ranked status contribution.
Start broad, then narrow down. First check the whole project or board, then use filters to investigate a specific work item type, sprint, assignee, or label.
Always compare Total and Average. Total shows the biggest overall impact. Average shows what usually happens to a single work item.
Do not treat the top status as a problem automatically. Sometimes a status dominates because it is an expected waiting stage, such as Waiting for customer or Approval.
Use the drill-down before making decisions. The chart shows where to look, but the work items explain what actually happened.
Review the selected time metric if the chart looks unexpected. A dominant status may mean your workflow has changed or the metric should be adjusted.
Bring real examples into retrospectives. The most useful discussion starts when you connect the top status with the actual work items behind it.
Cycle Time helps you notice that delivery is taking longer.
But to improve the workflow, you need to understand where that time is spent.
The Status Contribution Chart Gadget helps you see which workflow status consumes the most time, compare total and average impact, and drill down into the work items behind the delay.
That makes it easier to identify Review, QA, Waiting, Approval, or custom workflow bottlenecks faster — and turn your Jira dashboard into a practical tool for process improvement.
Instead of asking only:
“How long did it take?”
You can ask:
“Where did the time go, and what should we improve first?”
Anastasiia Maliei, SaaSJet