

Advice on tracking QA workflows for Scrum teams

Hi folks, I am wondering what the best practices are for tracking QA handoffs/workflows for Scrum teams.

Like any typical team, our squad pushes changes through several QA handoffs:

- Code review

- Visual QA

- Functional QA

- UAT

Currently these are set up as columns on our Scrum board, but I've noticed that our changes often don't move through them linearly (waterfall-style) and will often run in parallel (specifically visual and functional QA). As a result, folks don't use the Jira board consistently and it can be difficult to understand the status of things.


Is it against best practice to move steps that could be parallelized, like visual QA and functional QA, to sub-tasks? What are the risks in doing that?


While the workflow makes more sense to me in parallel, I fear we will lose visibility into how long each status takes. Then again, if folks aren't using the board linearly anyway, we don't have that data now either.


Thoughts?

2 answers

1 accepted

1 vote
Answer accepted

Hi @Mara Julin - The first question I always ask is whether your sprint flow is designed for these activities to complete within the same sprint as development. 

If the answer is yes, then sub-tasks may make sense to give you visibility into the various team members who may work on that story in a given sprint.

If the answer is no, I recommend creating separate issues for those activities and making use of links. In this scenario, they can still be in the same sprint, but their sprint goals will be distinct from the dev team's. Sub-tasks would otherwise hold the story hostage until QA is complete.

UAT is an entirely different animal. I try to coordinate a "release sprint" devoted to it, where the team's primary sprint goal is addressing any critical issues that come out of UAT, with enough lower-priority items in the backlog to work on if UAT feedback is light.

Thank you for this! Yes, all of these activities would be expected to be completed within the sprint. They are also attached to the epic or project ticket, and since we can't launch without them, it wouldn't be an issue if they hold back a story; we can't release anyway.


My concern with moving to sub-tasks instead of statuses is that ideally I'd like to track how long each of these steps takes. I've seen various add-ons and automations that show time spent based on the QA statuses, and since sub-tasks follow a different workflow, we wouldn't be able to use those. Do you have suggestions for that?

Mark Segall Community Leader Jul 20, 2022

If you're using sub-tasks, you'll want to start with a simplified workflow because you're removing the need for all of the extra steps by having them split out.  I typically go with something like To Do >> In Progress >> In Review >> Done.

From here, I would leverage a component or custom field to identify the type of sub-task so you can create filters specifically for QA sub-tasks. Then you can use any number of dashboard gadgets, like Resolution Time, to get the information you need to track QA performance.
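If a gadget doesn't cut it, the same "time in status" number can be computed yourself from an issue's status-change history. The sketch below is a minimal illustration of the idea only; the event shape (chronological `(entered_at, status)` pairs) is an assumption for the example, not the exact format the Jira REST API changelog returns:

```python
from datetime import datetime, timedelta

def time_in_status(events, now):
    """events: chronological list of (entered_at, status) pairs for one
    issue, e.g. derived from its changelog. Returns a dict mapping each
    status to the total time spent in it; the most recent status is
    counted as running until `now`."""
    totals = {}
    # Pair each status entry with the next entry's timestamp (or `now`
    # for the current status) to get the duration spent in that status.
    for (start, status), (end, _) in zip(events, events[1:] + [(now, None)]):
        totals[status] = totals.get(status, timedelta()) + (end - start)
    return totals

# Hypothetical example: one day in "In Progress", then review.
events = [
    (datetime(2022, 7, 1, 9, 0), "To Do"),
    (datetime(2022, 7, 1, 10, 0), "In Progress"),
    (datetime(2022, 7, 2, 10, 0), "In Review"),
]
totals = time_in_status(events, now=datetime(2022, 7, 2, 12, 0))
```

With the QA sub-tasks identified by a component or custom field as suggested above, running this per sub-task gives the per-step durations the simplified workflow would otherwise hide.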


Gotcha. Okay, and thinking through pointing effort: currently we point functional QA to understand their capacity for a given week. I understand that you can add points to sub-tasks, but they aren't counted in your velocity report. Do they roll up to the parent task at all?

Is there a rationale for why they aren't included in velocity?

Story pointing on individuals or even team subsets is not agile best practice, which is why Jira does not support it.

To maximize efficiency, user story pointing should be based on the entire team's estimation (every contributing team member should have a say in estimation poker). Points are, by definition, about the "story" and what it will take to achieve the definition of done for that story, whether that be development, QA, documentation, iterative releases from dev to staging, etc. I've found that tracking at the individual or sub-team level just doesn't work:

  • Team members feel like "big brother" is watching, which establishes a toxic culture as individuals get defensive and/or point fingers over why their task took extra time.
  • Senior engineers feel less inclined to help out their junior counterparts because they feel the need to look out for their own progress and don't want to explain why they spent X amount of time mentoring. Similarly, junior engineers feel less inclined to reach out to senior engineers because they don't want to be the reason a senior engineer has to explain why a task didn't finish in time.

Culture aside, points were simply designed as a way to simplify and speed up how teams estimate issues. If poker is done properly, they are only relative to the other stories in the sprint, and once the sprint has completed they serve zero value other than the aggregate velocity, which provides a basic trend analysis during retro ("We were higher this sprint... Great job everyone!"; "We were lower this sprint - oh yeah, Timmy was out sick a couple of days and we underestimated ABC-123").

0 votes

Hi @Mara Julin ,

If you are open to using marketplace apps, I can propose another approach. To be completely transparent, we are the maker of the Checklist for Jira server app.

Alternatively, you could combine the Visual QA and Functional QA statuses into a single QA status. You would then use the checklist app and have it automatically populated with the items "Visual QA" and "Functional QA". Each item can have its own status (in progress, blocked, etc.), something like:


This way, you can have both activities in parallel. You can even add a Checklist Workflow Validator to ensure that the checklist is completed before allowing the issue to move to the UAT status.

The only drawback is that it will be possible to see how much time the issue spent in the QA status, but not in each of the Visual QA and Functional QA states individually.

Hope that helps,
