🆕 Time Metric Trend Gadget is live: see your Jira workflow trend week by week


A single metric value tells you where you are. A trend tells you where you're heading, and what to do about it.

Why one number is never the full story

Most teams check a workflow metric the same way: open a report, look at the current Cycle Time or Code Review time, nod, and move on. The number looks fine. Or it doesn't. Either way, the conversation usually ends there.

The problem is that a single value hides almost everything useful. A Cycle Time of 6 days might mean your team is stable at 6 days. Or it might mean you were at 3 days a month ago and things are quietly getting worse. Or that you had two outliers last week that dragged the average up. You can't tell the difference from one snapshot.

Trend visibility is what actually helps you react early, before a slowdown becomes a quarterly retrospective topic.

That's the gap the new Time Metric Trend Gadget is designed to close.

What the Time Metric Trend Gadget does

The Time Metric Trend Gadget is a Jira dashboard gadget inside Time Metrics Tracker. It shows how one selected time metric changes over time: week by week, or across whatever buckets you configure.

You pick a metric (Cycle Time, Code Review time, QA / Validation time, or any other time metric you've set up), choose the scope, and the gadget plots the trend. It sits on your Jira dashboard and gives you a live view of whether a process is getting faster, slower, or less predictable.


You add it the same way as any other gadget: Jira Dashboard → Add gadget → Time Metrics Trend.

What teams actually use it for

A few practical scenarios where the trend view earns its place on a dashboard:

  • Spotting slowdowns early. If Code Review time has crept from 1.5 days to 3 days over six weeks, the trend line shows it long before anyone complains in a retro.
  • Confirming improvements are real. After changing a process (say, splitting QA into two stages), you want to see whether Cycle Time actually improved or just moved the problem.
  • Separating outliers from shifts. A bad week caused by two stuck tickets looks very different from a steady upward drift. The chart makes that distinction visible.
  • Drilling from "something's off" to "this ticket is why." When a bucket looks bad, you can click into it and see the specific work items behind the number.

For day-to-day dashboard monitoring, this is the kind of widget you glance at in the morning and act on by mid-week.

Configuring the scope

Before the chart is useful, it needs to look at the right slice of work. The gadget lets you narrow the scope by Project, Board, or Time metric, and then filter further by Issue Type, Status, Assignee, Sprint, or Label.

You also control the time range and buckets (for example, 12 weeks grouped weekly), and the format used to display durations. One thing worth knowing: changing the format only affects how numbers are displayed. Calculations are always based on working time, regardless of format.
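The working-time point matters for interpretation: a ticket that sits over a weekend doesn't accrue duration. As a rough illustration of the idea only (not the add-on's actual algorithm, which would also need to account for working hours and holidays), counting just Mon–Fri days looks like this:

```python
from datetime import date, timedelta

def working_days_between(start: date, end: date) -> int:
    """Count Mon-Fri days from start (exclusive) to end (inclusive).
    Illustrative sketch only; real working-time calculations also
    handle working hours and holiday calendars."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 = Monday-Friday
            days += 1
    return days

# A ticket opened on a Friday and closed the following Monday spans
# 3 calendar days but only 1 working day.
print(working_days_between(date(2024, 5, 3), date(2024, 5, 6)))
```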


How to read the chart

Each point on the chart represents one time bucket. If your range is 12 weeks and the grouping is weekly, you'll see 12 points β€” one per week.

The X-axis shows dates or date ranges by default. If you turn on previous period comparison, it switches to bucket or week numbers so the two periods can sit side by side.
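The bucketing itself is simple to reason about: each completed item falls into the bucket of the week it finished in. A minimal sketch using ISO week numbers (illustrative; the gadget's internal grouping may differ):

```python
from collections import defaultdict
from datetime import date

def bucket_by_week(items):
    """Group (completed_date, duration_days) pairs by ISO year/week."""
    buckets = defaultdict(list)
    for completed, duration in items:
        year, week, _ = completed.isocalendar()
        buckets[(year, week)].append(duration)
    return dict(buckets)

items = [
    (date(2024, 6, 3), 2.0),   # Monday, ISO week 23
    (date(2024, 6, 7), 4.0),   # Friday of the same week
    (date(2024, 6, 10), 1.5),  # the next Monday, ISO week 24
]
print(bucket_by_week(items))
```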

The Y-axis is where you choose what "duration" actually means. You have three options, and they answer different questions:

| Metric | What it tells you | When to use it |
| --- | --- | --- |
| Median | A typical work item: half finish faster, half slower | Your default. Best signal of normal flow. |
| P85 | How long 85% of items finish within | Planning and SLA-style conversations. |
| P95 | The slowest long-tail cases | Investigating outliers and worst-case risk. |
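All three Y-axis options are percentiles of the same set of durations, and they pull apart on long-tailed data. A small sketch using linear interpolation (percentile implementations vary slightly between tools, so the gadget's exact values may differ):

```python
def percentile(values, p):
    """p-th percentile with linear interpolation between ranks."""
    ordered = sorted(values)
    if len(ordered) == 1:
        return ordered[0]
    rank = (p / 100) * (len(ordered) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(ordered):
        return ordered[lo] + frac * (ordered[lo + 1] - ordered[lo])
    return ordered[lo]

durations = [1, 1, 2, 2, 3, 3, 4, 8, 15]  # review times in days
print(percentile(durations, 50))  # Median: the typical item
print(percentile(durations, 85))  # P85: most items finish within this
print(percentile(durations, 95))  # P95: dominated by the long tail
```

On this sample the median sits at 3 days while P95 lands above 12, which is exactly why the two answer different questions.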

KPI cards

Above the chart, the KPI cards summarise the period at a glance:

  • Median: the typical duration across the range
  • P85: the 85th percentile across the range
  • Work Items: how many work items are in the dataset
  • Trend: overall direction, shown as Improving, Stable, or Worsening

The Trend card is the fastest way to get a read on what's happening without reading the chart in detail.
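The gadget's exact classification logic isn't documented here, but the general idea of comparing recent buckets against earlier ones can be sketched like this (the half-split and the 10% tolerance are assumptions for illustration, not the product's algorithm):

```python
from statistics import median

def classify_trend(weekly_medians, tolerance=0.10):
    """Compare the median of the later half of the buckets against
    the earlier half. A relative change within +/- tolerance counts
    as Stable. Illustrative only; the real algorithm may differ."""
    half = len(weekly_medians) // 2
    earlier = median(weekly_medians[:half])
    recent = median(weekly_medians[half:])
    change = (recent - earlier) / earlier
    if change > tolerance:
        return "Worsening"   # durations trending up
    if change < -tolerance:
        return "Improving"   # durations trending down
    return "Stable"

print(classify_trend([1.0, 1.1, 0.9, 1.0, 2.4, 2.6, 2.5, 2.3]))
```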


Previous period comparison

If you enable previous period comparison, the gadget overlays the current period against the previous period of the same length, shifted back in time. So a 12-week view compares the last 12 weeks against the 12 weeks before that. This is the cleanest way to answer "are we actually better than last quarter?" instead of relying on memory.
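The window arithmetic is straightforward: the previous period is the same-length span immediately before the current one. A sketch (the dates are made up):

```python
from datetime import date, timedelta

def comparison_windows(end: date, weeks: int):
    """Return ((current_start, current_end), (previous_start, previous_end))
    for a current period of `weeks` weeks and the equal-length period
    immediately before it."""
    span = timedelta(weeks=weeks)
    current_start = end - span
    return (current_start, end), (current_start - span, current_start)

current, previous = comparison_windows(date(2024, 9, 30), 12)
print(current)   # the last 12 weeks
print(previous)  # the 12 weeks before that
```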


Warning and Critical lines

If your selected time metric has thresholds configured, the chart displays Warning and Critical lines. Points that cross these lines are easy to spot, and you can review which specific items breached each level. If no thresholds are configured for the metric, the lines simply don't appear.
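Breach detection against those lines is a plain threshold comparison. A sketch with hypothetical threshold values and made-up issue keys:

```python
# Hypothetical thresholds for the selected metric, in days.
WARNING = 3.0
CRITICAL = 5.0

def breach_level(duration_days: float) -> str:
    """Classify one work item's duration against the two lines."""
    if duration_days >= CRITICAL:
        return "critical"
    if duration_days >= WARNING:
        return "warning"
    return "ok"

# Made-up issue keys and durations for illustration.
items = {"PROJ-101": 1.5, "PROJ-102": 3.5, "PROJ-103": 6.0}
flagged = {key: breach_level(days) for key, days in items.items()}
print(flagged)
```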


Drill-down: from trend to specific work items

A trend chart is only useful if you can go from "this week looks bad" to "here's why." Clicking any point on the chart opens a detail modal with the work items in that bucket, showing:

Work Item, Summary, Assignee, Type, Metric, Status, and Completed.

You can also review which items are above the warning line and which are above the critical line. This is where the gadget shifts from monitoring to investigation: you see not only what got worse, but exactly which items caused it.


A small example

Say you're tracking Code Review time, 12-week range, grouped weekly, Y-axis set to Median:

  • Weeks 1–8: median sits around 1 day
  • Weeks 9–10: jumps to 2.5 days
  • Weeks 11–12: settles at 2 days

The Trend card reads Worsening. Before assuming the team slowed down, you click week 9 and see three reviews that stretched past 5 days, all assigned to someone who was out that week. That's not a process shift; it's a coverage gap. Switching the Y-axis to P95 confirms it: the long tail moved, but the typical review didn't. Different diagnosis, different fix.

A few things to keep in mind

Trends are easy to misread if you forget these:

  • The latest bucket may look smaller because it's incomplete. A partial week doesn't have all its data yet, so don't panic about a sudden drop at the right edge of the chart.
  • Median jumps more with small samples. If a bucket only has four or five items, one unusual ticket can shift the median noticeably.
  • P95 is noisier by nature. It depends on the slowest items in each bucket, so expect more movement there than in Median.

Treat the chart the way you'd treat any time series: look at the shape across several buckets, not at any single point.
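The small-sample caveat is easy to demonstrate with Python's statistics.median: one stuck ticket moves a five-item bucket's median noticeably, while a 25-item bucket with the same mix barely registers it:

```python
from statistics import median

small = [1.0, 1.5, 2.0, 2.5, 3.0]        # a five-item week, median 2.0
small_hit = [14.0, 1.5, 2.0, 2.5, 3.0]   # same week, one ticket got stuck
large = [1.0, 1.5, 2.0, 2.5, 3.0] * 5    # a 25-item bucket, same mix
large_hit = large[:-1] + [14.0]          # same outlier, bigger sample

print(median(small), median(small_hit))   # 2.0 -> 2.5: one ticket moved it
print(median(large), median(large_hit))   # 2.0 -> 2.0: barely budges
```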

Summary

A single metric value tells you the current state. The Time Metric Trend Gadget shows you the direction (improving, stable, or getting worse) and lets you drill down to the specific items driving the change. It's meant to live on your Jira dashboard and give you a quick, honest read on workflow health every day.

A good way to start: add the gadget to a dashboard, pick one metric that matters to your team (Cycle Time is a safe first choice), and set the Y-axis to Median. Once you're comfortable reading the trend, switch to P85 or P95 to dig into variability or outliers.

Try Time Metrics Tracker | Time Between Statuses  

 
