How Do You Do Sprint Metrics?

I have been trying for a long time to find a way to do multi-team, mostly (or entirely) automated sprint metrics. I had a process that mostly worked for Jira Server/DC:

  1. I built a Google Sheet with a set of pre-defined queries, one for each metric I wanted to collect. For example, to collect the initial commitment of a sprint I had:
    (sprint = "[SPRINT NAME]" or issuefunction in removedAfterSprintStart("[BOARD NAME]","[SPRINT NAME]")) and issuefunction not in addedAfterSprintStart("[BOARD NAME]","[SPRINT NAME]") and issueFunction in aggregateExpression("Total Story Points", "StorySize.sum()")
  2. Each team had their own tab of the query sheet, at the top of which they would define their board and sprint. Some logic would replace the "[SPRINT NAME]" and "[BOARD NAME]" with the correct variables and output the customized list of queries for their team.
  3. At the end of each sprint, each team would update their "sprint name" field, run the set of queries, and enter each result into a Google Form.
  4. The output of the Google Form was another Google Sheet, which had all sorts of analytics and aggregation logic built into it.
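The placeholder substitution in step 2 can also be done outside of Sheets. Here is a minimal Python sketch of that step; the template dictionary, function name, and team values are illustrative, not part of my actual sheet:

```python
# Sketch of step 2: each team supplies its board and sprint names,
# and a set of JQL templates is expanded for that team. Only one
# template is shown; any query using the same [SPRINT NAME] /
# [BOARD NAME] placeholders works identically.

QUERY_TEMPLATES = {
    "initial_commitment": (
        '(sprint = "[SPRINT NAME]" or issuefunction in '
        'removedAfterSprintStart("[BOARD NAME]","[SPRINT NAME]")) '
        'and issuefunction not in '
        'addedAfterSprintStart("[BOARD NAME]","[SPRINT NAME]")'
    ),
}

def expand_queries(board_name: str, sprint_name: str) -> dict:
    """Return per-team queries with the placeholders filled in."""
    return {
        metric: (template
                 .replace("[SPRINT NAME]", sprint_name)
                 .replace("[BOARD NAME]", board_name))
        for metric, template in QUERY_TEMPLATES.items()
    }

queries = expand_queries("Team Rocket Board", "Sprint 42")
print(queries["initial_commitment"])
```

The same replace-and-emit logic is what the query sheet's tab formulas were doing for each team.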

I recently joined an organization which uses Cloud, and as it turns out, some of the key functions I relied on (namely "aggregateExpression", "removedAfterSprintStart", "incompleteInSprint", and "completeInSprint") don't exist because they're not possible via the API.

I could probably muddle through without "aggregateExpression", but retrieving the initial sprint commitment is impossible without "removedAfterSprintStart". This is kind of a key metric, so not collecting it is not really an option.

Has anyone found an add-on which provides these metrics in a queryable/exportable manner? It feels like the only solution at this point would be to capture an "initial sprint commitment" snapshot at sprint start and perform some kind of diff analysis at sprint end.
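That snapshot-and-diff idea can be sketched as follows. The assumption is that some automated job records the sprint's issues (with story points) right after sprint start, for example from the Jira Cloud Agile endpoint `GET /rest/agile/1.0/sprint/{sprintId}/issue`, and the same data is pulled again at sprint end. The function name and the sample data below are illustrative:

```python
# Sketch of reconstructing what removedAfterSprintStart /
# addedAfterSprintStart used to give us, via two snapshots.
# `snapshot` is captured at sprint start, `final_state` at sprint
# end; both map issue key -> story points. Data is illustrative.

def sprint_diff(snapshot: dict, final_state: dict) -> dict:
    """Compare sprint-start and sprint-end contents."""
    committed = set(snapshot)
    final = set(final_state)
    return {
        # sum of points committed at sprint start
        "initial_commitment_points": sum(snapshot.values()),
        # issues present at end but not at start
        "added_after_start": sorted(final - committed),
        # issues present at start but dropped before end
        "removed_after_start": sorted(committed - final),
    }

snapshot = {"PROJ-1": 3, "PROJ-2": 5, "PROJ-3": 8}
final_state = {"PROJ-1": 3, "PROJ-3": 8, "PROJ-4": 2}
print(sprint_diff(snapshot, final_state))
# initial commitment is 16 points; PROJ-4 added, PROJ-2 removed
```

The diff logic itself is trivial; the real work is scheduling the sprint-start capture reliably for every team, which is exactly what the missing JQL functions used to hide.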

I'd like to avoid having my scrum leads manually read the sprint report: that report has bugs in it, and having humans read one report just so they can key numbers into another report is a tremendous waste of their potential.
