Why is this taking so long?
You've heard this question in standups. In Slack. In retros. Every sprint, someone asks. Every sprint, the answer is vague—_"it got stuck in review," "we hit some blockers," "it took longer than expected."_
So you start tracking Time in Status. Now you have data—3 days in Review, 2 days in QA. But the question keeps coming back. Because knowing **how long** isn't the same as knowing **why**.
Most teams collect this data and do nothing with it. Or worse—they use it in ways that hurt morale without improving delivery. Here are the five most common mistakes, and what to do instead.
**Mistake #1: Judging numbers without context**

You see an issue spending 4 days in “In Progress.” Is that bad? You have no idea. Without knowing what’s normal for similar work, the number is meaningless.
What teams do wrong:

They react to the raw number. Four days “feels long,” so the issue gets flagged, or someone sets an arbitrary threshold, with no baseline behind either.

What to do instead:
Compare cycle time to similar completed work. A 4-day story might be perfectly healthy if your team’s stories typically take 3-5 days. But if similar stories usually take 1 day, that’s a red flag.
The Summary card compares each issue to your team's actual history for the same issue type. Color-coded health tells you at a glance whether something needs attention.
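To make the baseline idea concrete, here is a minimal sketch in Python. It is not how the Summary card computes health; the function name, the classification bands, and the use of the median are illustrative assumptions:

```python
import statistics

def health(elapsed_days, similar_cycle_times_days):
    """Classify an in-flight issue against cycle times of similar completed work."""
    typical = statistics.median(similar_cycle_times_days)
    worst = max(similar_cycle_times_days)
    if elapsed_days <= typical:
        return "healthy"    # within the typical range for this kind of work
    if elapsed_days <= worst:
        return "watch"      # slower than typical, but not an outlier yet
    return "attention"      # slower than anything comparable on record

# The same 4 days reads very differently against different baselines:
print(health(4, [3, 4, 4, 5, 5]))  # -> healthy: similar stories take 3-5 days
print(health(4, [1, 1, 1, 2]))     # -> attention: similar stories take 1-2 days
```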
**Mistake #2: Fixing symptoms instead of patterns**

You notice issues pile up in “Code Review” every sprint. So you bring it up as a team and agree to try “reviewing faster.” Next sprint, same problem.
What teams do wrong:

They treat every pile-up as a one-off and fix it with good intentions. Nobody asks why it keeps happening, so it keeps happening.

What to do instead:
Look for patterns across sprints. Is the bottleneck consistent? Does it happen with certain issue types? Are certain team members being overloaded?
The problem usually isn’t people; it’s the process. Maybe reviews pile up because:

- Pull requests are too large to review in one sitting
- Only one or two people can review certain parts of the codebase
- Reviews are nobody’s first priority, so they wait behind “real work”
Flow Intelligence surfaces patterns across your team—trends, predictability, cumulative flow. Stop asking “why is THIS issue late?” and start asking “why does this KEEP happening?”
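A hand-rolled version of that pattern hunt might look like the sketch below. The record layout is a made-up example of what a time-in-status export could contain; this is not Flow Intelligence's implementation, just the shape of the question it answers:

```python
from collections import defaultdict

# Hypothetical export: days each completed issue spent per status, per sprint.
records = [
    {"sprint": 41, "type": "Story", "days": {"In Progress": 2, "Code Review": 4, "QA": 1}},
    {"sprint": 41, "type": "Bug",   "days": {"In Progress": 1, "Code Review": 3, "QA": 1}},
    {"sprint": 42, "type": "Story", "days": {"In Progress": 3, "Code Review": 5, "QA": 1}},
    {"sprint": 42, "type": "Story", "days": {"In Progress": 2, "Code Review": 4, "QA": 2}},
]

# Sum time per status within each sprint. The same status leading sprint
# after sprint is a systemic bottleneck, not a one-off slow issue.
totals = defaultdict(lambda: defaultdict(int))
for r in records:
    for status, d in r["days"].items():
        totals[r["sprint"]][status] += d

for sprint in sorted(totals):
    status, d = max(totals[sprint].items(), key=lambda kv: kv[1])
    print(f"Sprint {sprint}: most time stuck in {status} ({d} days)")
```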
**Mistake #3: Only looking backward**

Time in Status tells you what already happened. By the time you see a problem, it’s too late. The sprint is over. The stories didn't complete.
What teams do wrong:

They use the data only in the rear-view mirror: reports reviewed in retros, delays explained after the fact, when nothing can be changed.

What to do instead:
Use historical data to predict future delivery. If your team’s stories typically take 3-5 days, and this one started 2 days ago, you can forecast when it’s likely to finish.
The Forecast card shows three scenarios, from optimistic to pessimistic, based on your team’s actual completion patterns.
When someone asks, “When will this be done?”—you have an answer based on data, not gut feel.
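Here is a minimal sketch of how percentile-based forecasting works in general. It is not the Forecast card's actual model; the three percentiles and the function name are assumptions for illustration:

```python
from datetime import date, timedelta

def forecast(similar_cycle_times_days, started_on):
    """Project completion scenarios from cycle times of similar finished work."""
    data = sorted(similar_cycle_times_days)

    def nearest_rank(p):
        # Nearest-rank percentile; good enough for small samples.
        return data[round(p / 100 * (len(data) - 1))]

    return {
        "optimistic (50th percentile)":  started_on + timedelta(days=nearest_rank(50)),
        "likely (85th percentile)":      started_on + timedelta(days=nearest_rank(85)),
        "pessimistic (95th percentile)": started_on + timedelta(days=nearest_rank(95)),
    }

# A story that started 2 days ago, against seven similar completed stories:
started = date.today() - timedelta(days=2)
for scenario, eta in forecast([2, 3, 3, 4, 5, 5, 8], started).items():
    print(f"{scenario}: {eta}")
```

Percentiles beat averages here because cycle-time distributions are skewed: a couple of outliers can drag an average far away from anything typical.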
**Mistake #4: Collecting reports nobody acts on**

A report that sits in a dashboard doesn't improve anything. The value isn't in the chart; it's in knowing what to do next.
Your Cycle Time went up 20% this sprint. Your throughput dropped. The cumulative flow diagram shows a pattern forming. What's causing it? What should you try first?
What teams do wrong:

They stop at the dashboard. The chart gets opened, glanced at, maybe discussed for a minute, and nothing changes before the next sprint.

What to do instead:
Get guidance alongside the data—so you can move from insight to action faster.
Noesis helps you interpret what you're seeing and suggests what to try next. Ask it anything:
- "What does this cumulative flow diagram tell me?"
- "Why is our cycle time increasing?"
- "Work keeps piling up in QA—what should we try?"
You learn to read the patterns while solving real problems. The next time you see a similar issue, you'll spot it yourself.
**Mistake #5: Using the data to blame people**

Time in Status becomes a surveillance tool. "Why did YOUR tickets take so long?" Developers get defensive. Trust erodes. People start gaming the metrics.
What teams do wrong:

They point the metric at individuals: per-person breakdowns, names called out, slow tickets treated as personal failings.

What to do instead:
Debug your process rather than eroding trust. The best coaches don't track who's slow—they remove what's in the way and make everyone faster.
When work takes too long, it's usually systemic: too much work in progress, dependencies, or overload in review and test.
The question isn't "who's slow?" It's "what's blocking flow?" Maybe one person is the bottleneck for reviewing a certain part of the codebase. Maybe no one can step in when testing backs up. These are process problems with process solutions.
Measuring Time in Status is useful. But it's just a starting point. The real goal isn't tracking how long things take. It's what the tracking enables: fix the root cause, stop explaining the same delays, move on.
Björn Brynjar - Smart Guess
Helping teams make an impact
Smart Guess ehf.
Iceland