
How should we handle the testing effort in a scrum sprint?

We are currently on a two-week sprint, and at the end of the sprint we have working functionality that we start delivering through a Beta/Staging/Production-type environment. When a developer is done with their Jira issue, they set it to a "Ready to test" custom status. The testers then see these issues and start testing them. However, a tester may or may not finish the testing task within the sprint. We want to track the developers' velocity so we have better insight into estimations, etc.
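As an aside, the testers' queue described above can be surfaced with a simple Jira filter. A hedged sketch in JQL, assuming the custom status is literally named "Ready to test" (adjust the name to your workflow):

```
status = "Ready to test" AND sprint in openSprints() ORDER BY updated DESC
```

`status`, `openSprints()`, and `ORDER BY` are standard JQL; only the status name is specific to this setup.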

Has anyone been able to successfully integrate testing into a sprint without killing the velocity? I know velocity is (and should be) a team velocity, not an individual one. But if it's not an individual developer's velocity, how should we estimate using velocity numbers?
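For reference, team velocity is typically computed as the average of story points completed per sprint over the last few sprints, counting the whole team's work (dev and test together). A minimal sketch in plain Python, with made-up numbers and no Jira API involved:

```python
from statistics import mean

def velocity(completed_points_per_sprint, window=3):
    """Rolling team velocity: the mean of story points completed
    in the last `window` finished sprints."""
    recent = completed_points_per_sprint[-window:]
    return mean(recent)

# Hypothetical history of points completed per sprint by the
# whole team (development and testing combined):
history = [21, 18, 24, 27, 24]
print(velocity(history))  # (24 + 27 + 24) / 3 = 25
```

Counting only dev-complete points inflates this number, since untested work that rolls over gets credited before it is releasable; counting points only when items reach "done" (tested) keeps the average honest.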

2 answers

I am looking for the answer as well. 

0 votes
Marianne Miller
Rising Star
Feb 10, 2020

I think it depends on your definition of DONE. If the item is supposed to be ready for release (the output of a sprint increment), then this poses a challenge. I struggle with the fact that there really isn't a testing role in Agile, yet quality is part of the equation. We started out trying to leapfrog the QA/testing effort into a follow-on sprint and found that we were not able to release on a reliable schedule. We now ensure that there is an immediate handoff to QA/testing as dev work is done, and we make every effort to get it thoroughly tested and ready for release. The testers estimate story points as part of the development team, and if there appears to be a more complex testing effort, we make sure the backlog item reflects that.

Since incorporating testing into the sprint cycle, we have been fairly predictable and only miss a few items. If an item is not complete and ready for release, it rolls to the next sprint, just like anything else. But that is usually the exception now, not the rule.

I couldn't agree more, and I've used those exact words when trying to resolve this, so thank you for the verification! So do you have the testers estimate the testing effort and then add that to the developer estimate? And how do you handle the situation where development gets the work done right at the end of the sprint? That is the situation we find ourselves in: a lot of the dev work is completed at the end of the sprint. We have some devs who are more concerned with their individual velocity and don't care about the testing effort or how we handle it in the sprint. But IMO it's supposed to be a team velocity, meaning from dev to publish.

I am also looking for this answer
