How to Instantly See What Changed in Your Sprint: Added, Removed, Carryover (A Few Clicks)

Picture the Sprint Retro.

Someone asks: “So… why didn’t we finish what we committed to?”

Someone else replies: “Because things happened.”

And then the room goes quiet while everyone collectively tries to remember which issues were:

  • added mid-sprint
  • removed mid-sprint
  • carried over to the next sprint
  • moved between assignees (aka “workload musical chairs”)

In Jira, this is where the ritual begins: exporting, filtering, squinting at history tabs, and politely arguing over what “scope change” really means.

The awkward truth: you can’t improve what you can’t trace

Sprint metrics are only useful when you can answer the follow-up question:

“Cool number. What issues created that number?”

Because “Scope change: +18%” is interesting… but it’s not actionable until you can say:

  • what was added
  • what was removed
  • what rolled over
  • who absorbed the change
  • and whether that change was smart, necessary, or just scope creep wearing a trench coat

And yes—this matters even more when stakeholders show up with Questions.

What Jira shows (and what it doesn’t)

Jira’s native Sprint Report does give you one very helpful hint:

  • Issues added after sprint start are marked with an asterisk (*) in the Sprint Report list.

That asterisk is basically Jira’s way of saying: “This wasn’t in the original plan. Don’t blame the burndown.”

But here’s the catch:

✅ Visible in the Sprint Report: added items (kind of). You can spot them via the asterisk (*).

❌ Not easily visible as a list: removed items and carryover items.

Teams constantly ask the community how to report “what was added mid-sprint” or track scope changes cleanly—because it’s not straightforward to extract a reliable list without workarounds (labels, manual tracking, copying keys into JQL, etc.).

So Jira gives you a signal… but not the receipts. And in retros, the receipts are everything.
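
Under the hood, "added / removed / carryover" is just set arithmetic between a sprint-start snapshot and the sprint-close state. Here is a minimal, illustrative sketch of that logic in Python; the `SprintSnapshot` class and the `PROJ-*` issue keys are hypothetical examples, not Jira's API or the app's actual implementation:

```python
# Illustrative only: computes added / removed / carryover as set differences
# between the committed scope at sprint start and the state at sprint close.
from dataclasses import dataclass


@dataclass
class SprintSnapshot:
    committed: set[str]    # issue keys in the sprint at start
    final_scope: set[str]  # issue keys in the sprint at close
    completed: set[str]    # issue keys marked Done at close


def scope_change(s: SprintSnapshot) -> dict[str, set[str]]:
    return {
        "added": s.final_scope - s.committed,        # pulled in mid-sprint
        "removed": s.committed - s.final_scope,      # dropped mid-sprint
        "carryover": s.final_scope - s.completed,    # incomplete, rolls to next sprint
    }


snap = SprintSnapshot(
    committed={"PROJ-1", "PROJ-2", "PROJ-3"},
    final_scope={"PROJ-1", "PROJ-3", "PROJ-4"},
    completed={"PROJ-1"},
)
print(scope_change(snap))
# added = {PROJ-4}, removed = {PROJ-2}, carryover = {PROJ-3, PROJ-4}
```

The catch, of course, is that native Jira doesn't hand you those two snapshots as clean lists; you have to reconstruct them from issue history, which is exactly the workaround ritual described above.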

Why this gap hurts more than we admit

When you can’t clearly trace added/removed/carryover, sprint reviews turn into debates like:

  • “Was this actually added mid-sprint?”
  • “Did we remove it—or did it just… vanish?”
  • “Why is our velocity down?”
  • “Who picked up this unplanned work?”
  • “Why did carryover spike—was it blocked, oversized, or just optimistic planning?”

Without a clear work-item-level breakdown, you end up optimizing for storytelling, not truth.

And the sprint becomes less “inspect & adapt” and more “guess & defend.”

The better way: Sprint Report in Time in Status (by SaaSJet)

Time in Status includes a Sprint Performance Report built specifically to make sprint analysis more complete than Jira’s native view.

It breaks your sprint down across the core dimensions teams actually need in retros and planning:

  • Sprint info & context
  • Team velocity (committed vs completed) for completed sprints
  • Burndown chart for the active sprint
  • Workload by assignee
  • Completion rate
  • Committed vs completed by priority
  • Scope change (added vs removed)
  • Carryover (for completed sprints)

It also respects how your board estimates work—Story Points, Original Time, or Work Item Count—so teams aren’t forced into a reporting model they don’t use.

So far, so good. But here’s what changed the game:

New feature: Sprint metric details (a.k.a. “Show me the issues behind the number”)

We’ve released Sprint metrics details: a “View data table” option that opens a detailed table behind each metric card.

In human terms: you’re no longer stuck with metrics as headlines. You can open the full article.

What you get from metric data tables

When you open a metric’s table, you can quickly:

  • see which work items were included in that metric’s calculation
  • check their estimation values (matching your board’s estimation method)
  • sort, scan, and validate the list right inside the report

For Committed and Completed, you also get a Total row that sums the estimation column—so you can reconcile the headline number with the underlying issues in seconds. 

Why this is huge: it closes Jira’s “scope visibility” gap

Now, instead of saying:

  • “Looks like we had scope change…”

…you can say:

  • “These 9 issues were added, these 4 were removed, and these 6 rolled into the next sprint—here they are.”

That enables the kind of retro conversations that actually improve planning:

1) Added work: not just “how much,” but “what”

Stop guessing whether the sprint derailed because of “interruptions.”
Open the table and name them.

2) Removed work: stop losing context

Removed issues often disappear from the narrative, and later come back as “why is this still not done?” With the list visible, you can capture the real reason:

  • Deprioritized
  • Blocked
  • Underestimated
  • “We panicked and cut scope” (valid, but say it with data)

3) Carryover: make it a pattern you can fix

Carryover is one of those numbers that triggers an immediate reaction. But the real value is being able to answer:

  • Which issues consistently carry over?
  • Are they large, blocked, unclear, or constantly interrupted?

Time in Status defines carryover as incomplete work moved to the next sprint (for completed sprints), and now you can trace it item-by-item. 

4) Workload: who absorbed the change?

Workload is where scope change becomes personal.

A sprint can look “fine” at the team level and still be chaotic for one person who got handed every mid-sprint fire.

The Workload section is explicitly built around committed/added/removed per assignee.
With metric details, you can see the actual issues that created that workload—so you can redistribute smarter next sprint.

Teams don’t need more charts — they need fewer arguments

Sprint metrics aren’t supposed to be decorative.

They’re supposed to help you:

  • plan based on reality (not hope)
  • prevent scope creep from becoming normalized
  • protect focus time
  • build stakeholder trust (“here’s what changed, and why”)
  • run retros that lead to real adjustments

And that only happens when sprint reporting includes traceability, not just totals.

Want to try it?

If sprint reporting in your team currently involves phrases like:

  • “Wait, when was that added?”
  • “I swear that wasn’t in scope…”
  • “Who moved this to next sprint?”
  • “Let’s check the issue history… again…”

…then it’s probably time to switch from asterisk archaeology to actual visibility.

You can try Time in Status by SaaSJet (trial via Atlassian Marketplace), or book a demo call to see the Sprint Report + metric tables in your own workflow.



