What is the best practice for capturing debugging tasks in Jira that have to be done after release?

Our primary goal right now is to make sure we are capturing the time spent on each story, and making sure each story is bite-sized enough that it can be completed in one sprint. This does cause problems when we want to keep a ticket in a specific sprint, but we need to QA after the change is released.

What is the best practice for handling this? Should we close the ticket in the sprint the work is done and split out a new ticket for QA and debugging? Should we leave the sprint open until QA is done and split out only if new changes are needed?

This is mainly an issue for our approval process. The only way we can close the ticket and move on to the split ticket is by approving incomplete work. My hesitation in closing the sprint with incomplete tickets is that we will be unable to count that work towards that sprint.

Any help will be much appreciated!

1 answer

Answer accepted

Hi @Chris deMontmollin 

There look to be a few possible scenarios above:

  • QA is being completed for issues after the end of a sprint (thus the story is incomplete)
  • QA is being completed post-release

If it's the former, I would leave the story as incomplete in Sprint 1 and move it into Sprint 2 to complete QA. If an issue is incomplete, there's no benefit in closing it and creating a new ticket, as you're fragmenting the traceability of the full lifecycle of that story.

A sprint itself is a timebox where the intent is to complete all the stories - but if you don't, you still need to close it and move incomplete issues to the next sprint. If stories are moving between sprints and there is a bottleneck at QA, you need to identify this issue and work out how to resolve it.

If it's the latter, closure does not have to equal release. You could class a story as complete within a sprint and release the version into production at a later stage. In this instance you could have a separate ticket for post-release QA and, if a bug is located, raise it into the backlog, populating the "Affects Versions" field to visualise where the bug was found.
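As an illustration, a JQL filter along these lines (the project key PROJ and the version number are placeholders - substitute your own) would surface the bugs found in a given release:

```jql
project = PROJ AND issuetype = Bug AND affectedVersion = "1.2.0" ORDER BY created DESC
```

Saving a filter like this per release gives you a quick view of post-release quality without keeping the original stories open.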


Hi @Stephen Wright _Elabor8_ , thank you very much for the response. 

The main issue we are trying to solve is capturing the time spent on each issue in each sprint. We are not currently using story points, just going off estimated time and time logged. The main reason we would like to split out tickets is to make sure we are recording the time spent in each sprint.

We are thinking that we will perform QA on the live environment during release for tickets that need QA after being put into production. If there are substantial issues, then we will create a new ticket to debug those issues. If there are no changes needed, or if QA is complete before the release, then we will not create a new ticket or split out. Does this sound like a process you would recommend?

Hi @Chris deMontmollin 

That's an option - I'd usually advise:

  • If there is a time gap between completing a story and releasing it (e.g. releases are only once a month), then close the story in the sprint and treat the release as a separate entity. You might also want to raise a "QA" ticket in whatever sprint you're in when the release is made.
  • If there is no/minimal time gap between completion and release - i.e. both are in the same sprint - then you should consider keeping the story open.

The second scenario doesn't stop you creating separate bugs (linked to the story), nor closing the story during the sprint. It depends on your own team's definition of done - if done is classed as released but not verified as working, then closing it would be acceptable.


In regards to tracking time per sprint, I'd consider whether there's an app which can help you achieve this view so it doesn't drive a decision on your ways of working - for example, Simple Timesheet & Worklog Gadget has a time spent per sprint view.


@Stephen Wright _Elabor8_ That all makes sense. Thank you for the recommendation on the app. I am in the process of seeing what we can do about investing in one of those apps, but do not have approval yet. Basically we aren't doing anything that we shouldn't do just to record time spent in a sprint.

The main issue is the number of times we are moving tickets to the next sprint, of which the original question is a part. Many of them are less than 50% complete for a variety of reasons. Your answers definitely helped get us closer though, so thank you kindly.

Hi @Chris deMontmollin 

No worries :)

As a side option, you could look to visualise the issues which move across multiple sprints, so you can investigate why this is happening.

The Sprint Report will show you the stories not completed in each sprint, providing a view of carry-over into the next sprint.

Reports like the Epic Burndown and Release Burndown can also show scope at the start vs end of a sprint - so you can see both progress and scope change. These have a breakdown of work completed / incomplete below the chart.


Aside from this, you could search for the issues and choose how to display these - for example, display them in Filter Gadgets on a dashboard:

For example:

Sprint in closedSprints() and (Sprint in openSprints() or Sprint in futureSprints())

^ This would find issues which have been in a previous sprint and are either in an open, active sprint or a future planned sprint. This highlights issues which have not been completed previously.
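If you only want to highlight carry-over that is still unresolved, a variant such as this (a sketch - adjust the status category clause to your own workflow) narrows the results:

```jql
Sprint in closedSprints() AND Sprint in openSprints() AND statusCategory != Done
```

Placing this in a filter gadget on the team dashboard keeps the carry-over visible every sprint without manual searching.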


@Stephen Wright _Elabor8_ So, you would not recommend splitting out tickets that have about half of their work done, to complete the remainder in the next sprint? We were experimenting with this to show that we dedicated x hours in Sprint 1 and would have needed x+n hours to complete the entire ticket. The idea is that it gives us a specific number for how many more hours we would have needed across all tickets to complete the work that sprint. It does cause the problem of having to figure out which tickets are split in order to calculate the total time required to complete a specific task.

The main issue may be that we are not making tickets small and specific enough, but there is quite a bit of pushback on that because it makes it harder to track.

Hi @Chris deMontmollin 

It is up to each team to decide on their way of working - I personally wouldn't.

The target is to complete work within a sprint. If an issue is incomplete and you close/clone it to the next sprint, you're losing the value of seeing where an issue spans multiple sprints.

From a time perspective, I would track Original Estimate vs Time Spent - this would allow you to see where an issue is taking longer than expected to complete. You can also compare estimates to your Velocity and Burndown / Burnup to solidify where estimates are too high, and use reports like the Control Chart or CFD to look for bottlenecks or outliers in cycle time.
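For example, to list issues that have already exceeded their estimate, you could use the built-in workRatio JQL field, which is Time Spent as a percentage of Original Estimate (the 100% threshold here is illustrative - pick whatever tolerance suits your team):

```jql
workRatio > 100 ORDER BY workRatio DESC
```

This surfaces the worst overruns first, which is a useful starting point for the estimation conversation.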

I appreciate the pushback here - but to an extent you already are splitting the tickets into smaller pieces, by cloning them between sprints. If traceability is the concern I would consider either using Linked Issues to relate the smaller pieces of work together, or whether some Stories are really Epics in size, which can span multiple sprints.

