When did your delivery start slowing down?
For many Jira teams, this is a surprisingly hard question to answer. A sprint slips. A release takes longer than expected. A few issues suddenly stay “In Progress” far beyond what feels normal. But when you look at your basic metrics, nothing stands out. The main number looks stable. There was no clear moment when things “broke.”
The reality: delivery rarely fails all at once. It drifts.
That is why looking at one average value is not enough. If you want to catch delivery issues earlier, understand where risk is building up, and make more reliable plans, you need to look at how your workflow changes over time—and how your work is distributed.
This is where Median, P85, and P95 become extremely useful. They help you answer a better question: Is our delivery actually stable, or are we only looking at a number that hides the problem?
Most teams begin by tracking a single value: Average Cycle Time.
At first, it works. You get a general sense of how long work takes. You can compare one month to another and report progress to stakeholders. But over time, a gap often appears between what the metric says and what the team actually experiences. The average may look stable, while delivery feels less predictable.
Why? Because delivery is not defined by one number; it is defined by the distribution of work.
Some items move quickly, while others get stuck in QA, Review, or external dependencies. When all of that gets compressed into one average, the important signals can disappear. To understand what is really happening, you need to read the “Big Three” together.
The Median shows the middle point of your data. If your Median Cycle Time is 3 days, it means half of your work items were completed in 3 days or less, and half took longer.
P85 shows the time within which 85% of your work items were completed.
P95 shows the time within which 95% of your work items were completed; in other words, it tracks what is happening with the slowest 5% of work.
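As a quick illustration, here is a minimal Python sketch that computes all three values from a list of cycle times. The numbers are made up; in practice you would pull them from a Jira export:

```python
import numpy as np

# Hypothetical cycle times (in days) for recently completed issues.
cycle_times = [2, 3, 3, 4, 2, 5, 3, 8, 4, 3, 14, 3, 2, 6, 3]

# np.percentile computes all three points of the distribution at once.
median, p85, p95 = np.percentile(cycle_times, [50, 85, 95])

print(f"Median: {median:.1f} days")  # half of items finish this fast or faster
print(f"P85:    {p85:.1f} days")     # 85% of items finish within this time
print(f"P95:    {p95:.1f} days")     # only the slowest 5% take longer
```

Even on this small sample the gap is visible: the Median sits near 3 days, while P95 is pulled up by the two long-running items.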
| Metric Pattern | What It Means | Likely Causes & Action |
| --- | --- | --- |
| Median, P85, and P95 are close together | 🎉 Stable Delivery. Your process is "the gold standard" for predictability. | Work items are sized consistently and flow through the system without friction. |
| Median stable, but P85 is increasing | 👀 Hidden Instability. Typical work looks fine, but more items are taking longer. | Often the first sign of delivery drift. Caused by review delays, QA bottlenecks, or frequent priority switching. |
| Median and P85 are both increasing | 🚨 Systemic Slowdown. The whole workflow is slowing down, not just outliers. | Usually points to too much WIP (Work in Progress), an overloaded team, or a lack of clear requirements. |
| Median stable, but P95 is spiking | ⚡ Long-tail Risk. Most work is normal, but a few items are getting seriously delayed. | Signal of major outliers: complex bugs, blocked tickets, or heavy reliance on external dependencies. |
| All values jump up and down frequently | 🌪️ Workflow Volatility. Your delivery is unpredictable. | Caused by unstable priorities, changing scope mid-sprint, or highly inconsistent task sizes. |
Key Takeaway: Consistency is often more important than speed. A team that reliably delivers in 5 days is easier to plan around than a team that oscillates between 2 and 12 days.
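If you do export the data, a short script can turn raw resolution dates into this kind of trend. The sketch below assumes a pandas DataFrame with illustrative column names (`resolved`, `cycle_time_days`) and made-up values; it computes the Big Three per calendar week so you can watch how they move:

```python
import pandas as pd

# Hypothetical export: one row per resolved issue. Dates, values,
# and column names are illustrative, not real Jira fields.
df = pd.DataFrame({
    "resolved": pd.to_datetime([
        "2024-05-02", "2024-05-06", "2024-05-09", "2024-05-14",
        "2024-05-20", "2024-05-23", "2024-05-28", "2024-06-03",
        "2024-06-07", "2024-06-12", "2024-06-18", "2024-06-24",
    ]),
    "cycle_time_days": [3, 2, 4, 3, 5, 3, 9, 4, 3, 12, 4, 3],
})

# Group by week and compute the Big Three for each one; weeks with
# no resolved issues are dropped rather than reported as NaN.
weekly = (
    df.groupby(pd.Grouper(key="resolved", freq="W"))["cycle_time_days"]
      .agg(
          median="median",
          p85=lambda s: s.quantile(0.85),
          p95=lambda s: s.quantile(0.95),
      )
      .dropna()
)
print(weekly)
```

A steady Median with a climbing p85 column in this output is exactly the "Hidden Instability" pattern from the table above.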
To track these signals without exporting data to spreadsheets, you can use the Time Metric Trend Gadget in Time Metrics Tracker | Time Between Statuses.
It allows you to monitor Cycle Time, Code Review time, or QA time directly on your Jira dashboard.
Most teams want to be faster, but the better goal is to be stable. Fast but unpredictable delivery creates surprises; stable delivery creates trust.
By tracking Median, P85, and P95 as a trend, you stop looking at delivery as a single number. You start seeing the system.
Try it free on the Atlassian Marketplace → Time Metrics Tracker | Time Between Statuses
Anastasiia Maliei, SaaSJet