How to properly split effort of stories that fall into multiple sprints?

In my project it often happens that a user story "falls" out of the sprint where most of the development effort was put in, because testing is not completely done. The effort put into a story in sprint X then shows up in the next sprint (Y), when testing is completed. Because of this the sprint reports are off, which confuses people and makes planning with effort estimates messy; commitment versus completed story points also become hard to interpret.

Team set up is:

3 development squads with 4 devs each, who pick up the initial tests;
2 dedicated testers, who handle integration, regression and performance testing.

Alternatives I've considered so far:

1. Splitting up user stories, marking one done in the "old" sprint, one in the "new" sprint

2. Manually track the effort spent at the end/start of each sprint, and keep separate reports (I don't like this at all)

3. Dividing user stories into "as a developer" and "as a tester" parts to ensure the proper division of effort

To note, we do not follow the prescribed idea of Scrum teams that have no specific roles; we do have a distinction between dedicated developers and testers.

 

Happy to hear your thoughts/experiences!

 

 

3 answers

I don't quite agree with this point

2. Don't split a story between developer and tester. (This is like the head of a human having one birthday and the legs another.)

The business need of a story is to deliver valuable functionality meeting the definition of done. Writing automated tests may serve other goals, e.g. maintainability, sustainability, the CI/CD pipeline, etc.; the business story does not necessarily have this in its scope. What frequently happens is that manual testing/verification causes no delays, so the quality condition is most likely met. Test automation can then be a next step: it can be prioritized differently, bundled with other QA-automation tasks, or even deprioritized/rejected, because nobody has 100% coverage for automated tests, and 80% allows some movement. In my personal opinion, QA automation ideally goes as a sub-task of the story (so no splitting), but depending on context, team setup, etc., it can be a separate flow of tasks.

I'm running into a similar issue. Unfortunately, due to dependencies, user stories spill over into multiple sprints, which can lead to incorrect velocity as we move story points from one sprint to another. Is there a way to introduce a new field (something like Remaining Story Points) that would let us keep visibility of how many story points we used to have versus the story points for the current sprint?

Sudarshan Community Leader Aug 11, 2021

Hello @Dipshikha Goyal 
Jira allows you to create new custom fields; check with your Jira administrator.
https://support.atlassian.com/jira-cloud-administration/docs/create-a-custom-field/
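If the team does go the custom-field route, the bookkeeping it would hold is simple. Here is a minimal sketch of the idea, not a Jira API call: the field name "Remaining Story Points" and the data shape are assumptions for illustration only.

```python
# Sketch of the bookkeeping a hypothetical "Remaining Story Points" field
# would hold: original estimate minus points already credited in past sprints.
# The sprint names and data shape below are illustrative assumptions.

def remaining_points(original_estimate, completed_per_sprint):
    """Points still open after crediting work completed in earlier sprints."""
    done = sum(completed_per_sprint.values())
    return max(0, original_estimate - done)

# An 8-point story that earned 5 points in one sprint and 2 in the next:
print(remaining_points(8, {"Sprint 41": 5, "Sprint 42": 2}))  # prints 1
```

Whether surfacing that number helps is exactly the question Sudarshan raises below; it records the spillover but does not remove it.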

Story points are estimate numbers; they are only supporting factors to help your team size the work and provide predictable delivery. Have a discussion with the team and see if the dependencies can be cleared during your product grooming event or on sprint planning day.

It is normal for a team to have story spillover across multiple sprints; because story points are estimates, they cannot be perfect.

The best call would be to not take a user story into the sprint if it cannot be done; if you still commit, then the problem to solve first is the dependency.

How would adding a new custom field named "Remaining Story Points" help the team deliver better? I think by doing so you are applying a patch (cover-up) for the spillover; it may not help.

Empiricism in Scrum is all about learning from previous experience.

0 votes
Sudarshan Community Leader Dec 21, 2020

Hello @Bouke Krediet, welcome!

I suggest the points below:
1. Make your user stories as small as possible so that they are "DONE" in one sprint.
(This is easy to say, but it takes effort to get the slicing right; it's fun to work it out with a team.)
2. Don't split a story between developer and tester. (This is like the head of a human having one birthday and the legs another.)
A user story is a user need, an action the user performs; dividing it spoils it and does not provide valuable feedback.
(What if there is a defect/bug in a story that is already done? Will you go back and reopen it?)
Testing is part of the development activity; finish it with the story.

If there are incomplete items at the end of the sprint (overflowing to the next sprint), ask your team to evaluate them again. This will help you realize the amount of work pending, and sometimes it can be combined into one single activity (a task or something) that enables closure for the incomplete items soon.

Hi Sudarshan, thank you for your thoughtful comment. I too have struggled with this and really need an expert to weigh in.

For your #2 bullet above, are you saying that the story workflow should contain both the development and the test?

For your #3 bullet: if I find a problem in test, then we have a bug ticket, which we relate to the first dev story that has been closed.

 

I'm having a lot of trouble with this, mainly due to metrics. I think there may be two solutions:

1) If you cannot finish a story in a sprint, create multiple stories and link them. So for each feature, you will have a dev story and a test story. If you find a problem in test, you create a new bug story.

2) Create a workflow with multiple Done states. So, for example, let's say I have the following states in my workflow (the parenthetical is the Jira status category):

1) TODO (To Do)
2) DEVELOPING (In Progress)
3) PEER REVIEW (In Progress)
4) PEER REVIEW COMPLETE (Done)
5) IN TEST (In Progress)
6) DONE (Done)

I think what I can do here is create three boards. Board 1 (first sprint of the story) shows states 1 through 4 and gives me metrics for those. Board 2 (second sprint of the story) has states 5 and 6, and board 3 shows the complete set of states, 1 to 6. This should give you metrics based on each Done state, while also showing the overall flow on its own board. So we can show work being done even though the flow doesn't complete in one two-week sprint. If you find a problem in test, you are in the same workflow and can just move the issue back to DEVELOPING to do the fix. What are your thoughts on this method?
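The three-board idea above can be sketched as a simple calculation: each board has its own set of statuses that count as "done", and credits the story points of any issue that has reached one of them. This is an illustrative sketch only; the issue data shape and board names are assumptions, not Jira objects.

```python
# Hedged sketch of the multi-board metric split described above.
# Status names mirror the workflow; issue dicts are an illustrative assumption.

# Board 1 counts an issue once it has passed PEER REVIEW COMPLETE (states 4+);
# board 2 only counts fully DONE issues.
BOARD_DONE_STATES = {
    "board1_dev":  {"PEER REVIEW COMPLETE", "IN TEST", "DONE"},
    "board2_test": {"DONE"},
}

def points_done(issues, done_states):
    """Sum the story points of issues whose status counts as done for a board."""
    return sum(i["points"] for i in issues if i["status"] in done_states)

sprint_issues = [
    {"key": "ABC-1", "points": 5, "status": "PEER REVIEW COMPLETE"},
    {"key": "ABC-2", "points": 3, "status": "IN TEST"},
    {"key": "ABC-3", "points": 2, "status": "DONE"},
]
for board, states in BOARD_DONE_STATES.items():
    print(board, points_done(sprint_issues, states))
# board1_dev credits 10 points, board2_test credits 2
```

The design choice here is that a later state implies the earlier "done" milestone was reached, so board 1's done-set includes IN TEST and DONE as well; otherwise an issue that moved on quickly would vanish from the dev board's metrics.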
