
Why Do You Keep Explaining the Same Delays Every Sprint?

Why is this taking so long?

You've heard this question in standups. In Slack. In retros. Every sprint, someone asks. Every sprint, the answer is vague—_"it got stuck in review," "we hit some blockers," "it took longer than expected."_

So you start tracking Time in Status. Now you have data—3 days in Review, 2 days in QA. But the question keeps coming back. Because knowing **how long** isn't the same as knowing **why**.

Most teams collect this data and do nothing with it. Or worse—they use it in ways that hurt morale without improving delivery. Here are the five most common mistakes, and what to do instead.

Mistake #1: Measuring Without Context

You see an issue spending 4 days in “In Progress.” Is that bad? You have no idea. Without knowing what’s normal for similar work, the number is meaningless.

What teams do wrong:

  • Set arbitrary thresholds (“anything over 2 days is a problem”)
  • Compare all issues equally (a bug fix vs. a story vs. a sub-task)
  • React to outliers without understanding why

What to do instead:

Compare cycle time to similar completed work. A 4-day story might be perfectly healthy if your team’s stories typically take 3-5 days. But if similar stories usually take 1 day, that’s a red flag.

[Image: Time-In-Status-SummaryCard.png]

The Summary card compares each issue to your team's actual history for the same issue type. Color-coded health tells you at a glance whether something needs attention.
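To make the idea concrete, here is a minimal sketch of that comparison in Python. The numbers and the `health` function are hypothetical illustrations, not the Summary card's actual algorithm; in practice you would pull cycle times from your tracker's API.

```python
from statistics import median

# Hypothetical historical cycle times in days, grouped by issue type.
history = {
    "Story": [3, 4, 5, 3, 4, 5, 2, 4],
    "Bug":   [1, 1, 2, 1, 3, 1],
}

def health(issue_type, days_in_progress, history):
    """Flag an issue only relative to similar completed work."""
    past = sorted(history[issue_type])
    typical = median(past)
    # Nearest-rank 85th percentile of the historical cycle times.
    p85 = past[min(len(past) - 1, int(0.85 * len(past)))]
    if days_in_progress <= typical:
        return "healthy"
    if days_in_progress <= p85:
        return "watch"
    return "needs attention"

print(health("Story", 4, history))  # → healthy: 4 days is typical for stories here
print(health("Bug", 4, history))    # → needs attention: same 4 days, but bugs usually take 1
```

The same absolute number gets two different verdicts, which is the whole point: thresholds only mean something relative to similar work.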


Mistake #2: Treating Symptoms, Not Root Causes

You notice issues pile up in “Code Review” every sprint. So you discuss the issue and agree to try “reviewing faster.” Next sprint, same problem.

What teams do wrong:

  • Focus on individual issues instead of patterns
  • Push people to work faster instead of fixing the process
  • Have the same retrospective conversation every two weeks

What to do instead:

Look for patterns across sprints. Is the bottleneck consistent? Does it happen with certain issue types? Are certain team members being overloaded?

The problem usually isn’t people—it’s the process. Maybe reviews pile up because:

  • No dedicated review time is scheduled
  • One person is the bottleneck for all reviews of a certain type
  • Stories are too large to review quickly
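One way to separate a one-off blip from a process problem is to check whether the same status is the slowest in sprint after sprint. A small sketch, using made-up per-sprint averages (this is an illustration, not Flow Intelligence's actual logic):

```python
# Hypothetical average days spent in each status, one dict per sprint.
sprints = [
    {"In Progress": 2.1, "Code Review": 4.0, "QA": 1.5},
    {"In Progress": 1.8, "Code Review": 3.6, "QA": 2.0},
    {"In Progress": 2.4, "Code Review": 4.4, "QA": 1.2},
]

def consistent_bottleneck(sprints):
    """Return the status that was slowest in every sprint, or None."""
    worst_each = [max(sprint, key=sprint.get) for sprint in sprints]
    return worst_each[0] if len(set(worst_each)) == 1 else None

print(consistent_bottleneck(sprints))  # → Code Review, every single sprint
```

If the answer is the same status three sprints in a row, "review faster" won't fix it; the process around that status will.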

 

[Image: CFD-interpreting-sprint-data-sp15.png]

Flow Intelligence surfaces patterns across your team—trends, predictability, cumulative flow. Stop asking “why is THIS issue late?” and start asking “why does this KEEP happening?”


Mistake #3: Reporting on the Past Instead of Predicting the Future

Time in Status tells you what already happened. By the time you see a problem, it’s too late. The sprint is over. The stories didn't complete.

What teams do wrong:

  • Only look at metrics in retrospectives (after the damage is done)
  • Can’t answer “when will this ship?” with confidence
  • Constantly surprised by delays

What to do instead:

Use historical data to predict future delivery. If your team’s stories typically take 3-5 days, and this one started 2 days ago, you can forecast when it’s likely to finish.

[Image: Time-In-Status-ForecastCard.png]

The Forecast card shows three scenarios based on your team’s actual completion patterns:

  • Likely — Half of similar work finished in this time or less
  • Plan for — 85% of similar work finished in this time or less
  • Worst case — 95% of similar work finished in this time or less

When someone asks, “When will this be done?”—you have an answer based on data, not gut feel.
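The three scenarios are just percentiles of historical cycle times. Here is a minimal sketch of the idea with nearest-rank percentiles and invented sample data (an illustration of the technique, not the Forecast card's implementation):

```python
import math
from datetime import date, timedelta

# Hypothetical cycle times (days) of similar completed stories.
completed = [2, 3, 3, 4, 4, 5, 5, 6, 8, 12]

def percentile(data, p):
    """Nearest-rank percentile: smallest value covering fraction p of the data."""
    data = sorted(data)
    k = math.ceil(p * len(data)) - 1
    return data[max(0, min(len(data) - 1, k))]

def forecast(started, completed):
    """Three completion scenarios from the historical distribution."""
    return {
        "likely":   started + timedelta(days=percentile(completed, 0.50)),
        "plan_for": started + timedelta(days=percentile(completed, 0.85)),
        "worst":    started + timedelta(days=percentile(completed, 0.95)),
    }

print(forecast(date(2025, 3, 3), completed))
# likely 2025-03-07, plan_for 2025-03-11, worst 2025-03-15
```

Note that no single "estimate" appears anywhere: the answer to "when will this be done?" is a range with explicit confidence levels, which is far more honest than one date.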


Mistake #4: Reports That Don't Lead to Action

A report that sits in a dashboard doesn't improve anything. The value isn't in the chart—it's in knowing what to do next.

Your Cycle Time went up 20% this sprint. Your throughput dropped. The cumulative flow diagram shows a pattern forming. What's causing it? What should you try first?

What teams do wrong:

  • Collect metrics but never act on them
  • Spend retrospectives debating what the data means
  • Make changes based on guesses, then don't know if they helped

What to do instead:

Get guidance alongside the data—so you can move from insight to action faster.

[Image: What-to-expect-4-FlowIntelligence-insights.png]

Noesis helps you interpret what you're seeing and suggests what to try next. Ask it anything:

  • "What does this cumulative flow diagram tell me?"
  • "Why is our cycle time increasing?"
  • "Work keeps piling up in QA—what should we try?"

You learn to read the patterns while solving real problems. The next time you see a similar issue, you'll spot it yourself.


Mistake #5: Using Time in Status to Measure People Instead of Process

Time in Status becomes a surveillance tool. "Why did YOUR tickets take so long?" Developers get defensive. Trust erodes. People start gaming the metrics.

What teams do wrong:

  • Use time data in performance reviews
  • Call out individuals in standups
  • Create pressure without providing support

What to do instead:

Debug your process rather than eroding trust. The best coaches don't track who's slow—they remove what's in the way and make everyone faster.

When work takes too long, it's usually systemic: too much work in progress, dependencies, or overload in review and test.

The question isn't "who's slow?" It's "what's blocking flow?" Maybe one person is the bottleneck for reviewing a certain part of the codebase. Maybe no one can step in when testing backs up. These are process problems with process solutions.

Don't Repeat Yourself

Measuring Time in Status is useful. But it's just a starting point. The real goal isn't tracking how long things take. It's:

  • Spotting patterns before they derail yet another sprint
  • Understanding why delays happen so you can fix them once
  • Having better conversations about process, not blame

Fix the root cause. Stop explaining the same delays. Move on.
