Let's say we have a user story composed of three technical tasks for the developer and one technical task for the QA resource. Once the developers have completed the three development technical tasks, the QA resource knows it's time to test the user story. By this point, the QA resource has already started work on his technical task by analyzing the user story and writing scripts, so he now begins to test the user story as a whole. Suppose he finds three bugs in the user story. What is the best way to document and track those bugs?
We've considered a few options, but none seem to be ideal. He could add comments to the appropriate technical task(s) already there and drag the affected task(s) back into the 'To Do' column. The problem with this is that the QA resource may not know which technical task to add the comment to, and if a task ends up going back and forth a few times, the comments become too hard to parse through.
Another option would be to add a new technical task under the same user story to describe the bug(s). But, will this make it difficult to track and count outstanding bugs? (We considered either using a label for these technical tasks or starting each issue with the literal 'BUG:'.)
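If you do go the label/prefix route, counting outstanding bugs is straightforward to script. A minimal sketch, assuming issues have been pulled into simple dicts (the field names here are illustrative, not the actual Jira REST payload shape):

```python
# Hypothetical sketch: count unresolved sub-tasks flagged as bugs,
# either by a 'bug' label or by the 'BUG:' summary prefix convention.

def count_outstanding_bugs(issues):
    """Count sub-tasks not yet Done that are marked as bugs."""
    return sum(
        1
        for issue in issues
        if issue["status"] != "Done"
        and ("bug" in issue.get("labels", [])
             or issue["summary"].startswith("BUG:"))
    )

issues = [
    {"summary": "BUG: login fails on refresh", "status": "To Do", "labels": []},
    {"summary": "Fix date parsing", "status": "In Progress", "labels": ["bug"]},
    {"summary": "Implement search", "status": "Done", "labels": []},
]
print(count_outstanding_bugs(issues))  # 2
```

In practice you would get the same count directly from a JQL filter on the label or summary, but the convention itself is what matters: as long as every bug carries one consistent marker, the count stays reliable.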
Another option would be to enter a new issue with a type of 'Bug'. But these would not be listed in the same swimlane as the original user story, so it seems it would be difficult for the developer and the QA resource to fully understand the context of the bug.
Our ideal solution would be if Jira/GreenHopper allowed us to add a sub-task to a user story with the type 'Bug'. But the only sub-task type we can add is Technical Task.
I'm sure all development teams have tackled this situation. How do others address it? Thanks for your help!
Dan... we ended up using the solution suggested by Chris McFadden. We now have a new sub-task issue type under User Stories called Story Bug. These issues follow the same workflow as Technical Tasks. The columns on our Rapid Board are To Do, In Progress, In QA and Done. So, once the developers have completed all of the development Technical Tasks under a User Story (and moved them to the Done column), the QA Analyst begins work on the QA Testing Technical Task. If he finds no issues, he moves that technical task to the Done column. If he does find bugs, he creates a Story Bug issue for each one, and the developers work on those. When the developers finish work on a Story Bug, they move that issue to the In QA column, and the QA Analyst knows he can begin testing the bug. Once all of the Story Bugs are 'Done', the QA Analyst finishes his QA on the user story as a whole and moves his QA Testing technical task to the Done column. Hope that helps!
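The column flow described above can be sketched as a simple transition table. This is an illustrative model of the board, not an actual Jira workflow configuration:

```python
# Illustrative model of the Rapid Board workflow described above.
# Column names match the post; the transition rules are a simplification
# (In QA can go back to In Progress when testing finds a problem).

ALLOWED = {
    "To Do": {"In Progress"},
    "In Progress": {"In QA"},
    "In QA": {"Done", "In Progress"},  # QA passes, or sends back for rework
    "Done": set(),
}

def move(issue, target):
    """Move an issue to a new column if the transition is allowed."""
    if target not in ALLOWED[issue["column"]]:
        raise ValueError(f"cannot move from {issue['column']} to {target}")
    issue["column"] = target
    return issue

story_bug = {"key": "PROJ-42", "type": "Story Bug", "column": "To Do"}
move(story_bug, "In Progress")  # developer picks it up
move(story_bug, "In QA")        # developer finished; QA can start testing
print(story_bug["column"])      # In QA
```

The key point the model captures is that In QA acts as the handoff signal: developers never move a Story Bug straight to Done, so the QA Analyst always gets a chance to verify the fix.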
Fantastic, that makes a lot of sense. How do you approach the prioritization of bugs within a sprint? For example, if there's a critical bug related to each story, how do you ensure the team addresses those before moving on to lower priority bugs?
Lastly, for bugs that can't be addressed within the sprint, but aren't critical, do they get converted from a Story Bug to a defect and moved into the backlog?
Regarding prioritization of bugs within a sprint, our developers always work the board from the top down, so Story Bugs within a higher-priority user story are worked on first. And you are correct regarding non-critical story bugs: if there's no time to work on them during the current sprint, they are closed and copied over as regular 'Bug' issue types to be included in a future sprint.
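The "work the board from the top down" rule amounts to ordering Story Bugs by the rank of their parent user story. A minimal sketch of that ordering, with hypothetical issue keys and ranks:

```python
# Illustrative sketch: order Story Bugs by the board rank of their
# parent user story, so bugs under higher-priority stories surface first.
# Keys and ranks are made up for the example.

story_rank = {"STORY-1": 1, "STORY-2": 2}  # lower rank = higher on the board

bugs = [
    {"key": "BUG-7", "parent": "STORY-2"},
    {"key": "BUG-3", "parent": "STORY-1"},
    {"key": "BUG-9", "parent": "STORY-1"},
]

work_order = sorted(bugs, key=lambda b: story_rank[b["parent"]])
print([b["key"] for b in work_order])  # ['BUG-3', 'BUG-9', 'BUG-7']
```

Because `sorted` is stable, bugs under the same story keep their original relative order, which matches how sub-tasks appear within a swimlane.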
I find it useful to have a new issue type called "Story Bug" to track bugs during QA testing.
My problem has been how to handle the completion of a dev task. The idea of having many columns like To Do, In Progress, Waiting Deploy, QA, Done doesn't sound good to me. I would like something simpler, like To Do, In Progress, Done. But with that setup, I don't know how the QA team would work. How would they know that the story is ready for testing if the developer has not deployed it yet?
The burndown chart will not reflect the story's completion until all of its sub-tasks are completed, and there are resource dependencies within the team (there can be wait time). For example, after development is complete, QA might take some time to start testing, and after QA completes, Dev might take some time to start fixing the reported bugs. Daily tracking of doneness is impacted.
Please advise whether there is any reliable way to track sprint progress on a daily basis as each assigned team member completes his part of the user story (development, QA/testing, bug fixes/retesting).