In my company, our development workflow for a task is as follows:
For an issue (e.g. a bug), developers are responsible for transitioning the issue from Open to Resolved. QA is then responsible for transitioning the issue from Resolved to Closed after testing.
So my intention is to set up two Agile Scrum boards, one for the development team and another for the QA team, as follows:
The above doesn't work: if there is an active QA sprint, DEV cannot start a sprint, because JIRA Agile only allows one active sprint per board by default. When parallel sprints are enabled, it still doesn't look right, because in the DEV board's work mode, Resolved issues from the QA sprint appear in the DEV sprint.
I have also tried the following:
The above doesn't work either: once an issue is given a resolution during the Resolve transition, the DEV board's report mode returns an error, because issues that were previously matched by the board filter (resolution is EMPTY) no longer match it (the Resolve transition fills in a resolution).
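For illustration, a board filter of the kind described might look like this (the project key is a placeholder):

```
project = MYPROJ AND resolution is EMPTY ORDER BY Rank ASC
```

The moment the Resolve transition sets a resolution, the issue falls out of this filter, which is why the sprint report can no longer account for it. A status-based filter avoids the drop-out, because resolved issues remain visible until they leave the listed statuses:

```
project = MYPROJ AND status in (Open, "In Progress", Resolved) ORDER BY Rank ASC
```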
This is really a blocker for our company to use JIRA Agile.
Are there any solutions available? (The use case where an issue involves both the developer team and the QA team makes sense to me, and having separate boards for the developer team and the QA team also makes sense, because developers estimate an issue based on development effort while QA estimates it based on testing effort.)
From a purely agile perspective (not the tool perspective which is what I know you're asking about), the best thing would be to not divide into separate sprints and boards by functional teams/workflow state, but to have a cross-functional view - a single sprint and board, especially if both teams are working on the same work (versus one having a subset of the others' work). Have the story points equal the combined complexity for both people to accomplish what is needed. Have a QA workflow state if you'd like to easily visualize what is being handled by the QA people at any given time. You can put a queue state in between if you'd like to have a pull system for the QA team rather than pushing work into QA when Dev is "done" (although Dev isn't really done until it passes QA).
From the tool perspective, your method throws up red flags to me. It seems like you're trying to get the tool to do something that's not natural, rather than looking at how the tool expects things to work and updating your workflow to accommodate it. This is a common problem across all tools. I know I've done it a time or three.
If you're dead set on having separate boards and sprints within the same project, perhaps the better way would be to spawn a NEW issue for the QA team (via workflow transitions – maybe a separate issue type) if you want to have them function as a separate team. https://marketplace.atlassian.com/plugins/org.swift.jira.cot and https://marketplace.atlassian.com/plugins/net.customware.jira.utils.customware-jira-utilities may be helpful plugins for this.
I'll consider this more to see if I have better ideas.
We also have a similar issue, with a separate QA team and an overlapping sprint period. Dev sprints to dev-complete are 3 weeks; QA sprints to release end a week and a half later than the corresponding dev sprint.

In our case we have separate dev, QA, and release branches for code, with separate build processes, so it's natural to complete development, then pull across into the QA branch for subsequent issue testing before moving to the release branch for package (staging) testing and deploy. Some issues are not released to production for several sprints, but are released to integration test servers accessed by partners before signoff and deployment to production. For this reason, I'm thinking of using a Kanban-style board for QA once dev is complete (including dev testing and code and SQL reviews, before pushing to a Ready for QA status).

I've seen various other postings in the JIRA fora where the proposed "solution" was to spawn a NEW issue for the QA team. This is not workable, as it effectively doubles the number of issues and makes it more difficult to track an issue from beginning to end.
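A Kanban-style QA board along those lines could be backed by a filter like the following, with each status mapped to a board column (the project key and status names here are placeholders, not a prescribed configuration):

```
project = MYPROJ AND status in ("Ready for QA", "In QA", "Ready for Release") ORDER BY Rank ASC
```

Because Kanban boards have no sprints, issues simply flow through the columns as QA pulls them, which sidesteps the one-active-sprint limitation entirely.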
We actually do this. Our Development Team wanted their sprint statistics to reflect completed development work, and there is generally a delay in the QA process because while they test directly in Development, they also wait until everything is deployed to QA, and then test again in production. Our QA team uses a 'Kanban' board and they pull the work through different levels/environments of testing that are reflected in our process.
I've never had to add "resolution is not empty" to my queries. I believe you could remove that and your boards would work.
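Concretely, if the QA board filter were something like this (a hypothetical example, with a placeholder project key):

```
project = MYPROJ AND sprint in openSprints() AND resolution is not EMPTY
```

then dropping the resolution clause would leave:

```
project = MYPROJ AND sprint in openSprints()
```

so issues stay in the board filter throughout their lifecycle, and the board's columns (mapped to statuses) distinguish resolved from unresolved work instead.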