
How should we handle the testing effort in a scrum sprint?

We are currently on a two-week sprint, and at the end of the sprint we have working functionality that we start delivering through a Beta/Staging/Production-type environment. When a developer is done with their Jira issue, they set it to a custom "Ready to test" status. The testers then see these issues and start testing them. However, a tester may or may not finish the testing task within the sprint. We want to track the developers' velocity so we have better insight for estimation.

Has anyone been able to successfully integrate testing into a sprint without killing the velocity? I know velocity is (and should be) a team velocity, not an individual one. But if it isn't per individual developer, how should we estimate using velocity numbers?

1 answer

0 votes
Marianne Miller Community Leader Feb 10, 2020

I think it depends on your definition of DONE. If the item is supposed to be ready for release (the output of a Sprint increment), then this poses a challenge. I struggle with the fact that there really isn't a testing role in Agile, even though quality is part of the equation. We started out trying to leapfrog the QA/testing effort into a follow-on Sprint and found that we were not able to release on a reliable schedule. We now ensure that there is an immediate hand-off to QA/testing as dev work is done, and we make every effort to get it thoroughly tested and ready for release. The testers estimate story points as part of the development team, and if there appears to be a more complex testing effort, we make sure the backlog items reflect that.

Since incorporating testing into the Sprint cycle, we have been fairly predictable and only miss a few items. If an item is not complete and ready for release, it rolls to the next Sprint, just like anything else. But that is usually the exception now, not the rule.
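The accounting policy above can be sketched in a few lines: only items that are fully tested and release-ready count toward the Sprint's velocity, and everything else rolls over. This is a hedged illustration, not Jira's API — the issue keys, statuses, and fields here are hypothetical.

```python
def sprint_velocity(issues):
    """Sum story points for issues that reached 'Done' (dev + test complete)
    within the sprint; anything not Done rolls to the next sprint."""
    done = [i for i in issues if i["status"] == "Done"]
    carry_over = [i for i in issues if i["status"] != "Done"]
    return sum(i["points"] for i in done), carry_over

issues = [
    {"key": "APP-1", "points": 5, "status": "Done"},           # dev + test finished
    {"key": "APP-2", "points": 3, "status": "Ready to test"},  # dev done, test pending
    {"key": "APP-3", "points": 8, "status": "Done"},
]

velocity, rolled = sprint_velocity(issues)
print(velocity)                     # 13
print([i["key"] for i in rolled])   # ['APP-2']
```

Note that APP-2 contributes nothing this Sprint even though the dev work is finished; its 3 points land in whichever Sprint the testing actually completes, which is what keeps the velocity number honest as a team measure.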

I couldn't agree more, and I've used those exact words when trying to resolve this, so thank you for the verification! So do you have the testers estimate the testing effort and then add that to the developer estimate? And how do you handle the situation where development gets the work done right at the end of the sprint? That is the situation we find ourselves in: a lot of the dev work is completed at the end of the sprint. We have some devs who are more concerned with their individual velocity and don't care about the testing effort or how we handle it in the sprint. But IMO it's supposed to be a team velocity, meaning from dev to publish.
