
How to track time spikes in Jira graphs and reports?


Is there a Jira report or graph that shows why a team's velocity and completed points dipped because of spikes in a sprint? As I understand it, spikes are estimated in time (hours) to bound the research effort rather than in story points. As a result, velocity drops, fewer points are completed, and stakeholders ask why velocity fell over the last two sprints! How do you show in a graph how the spikes contribute to that drop?

1 answer

Answer accepted
Bill Sheboy
Rising Star
Nov 29, 2022

Hi @Ryan Johnson D 

It sounds like your team is using a time-box for spikes rather than sizing them with story points. Thus the team's "commitment" at sprint planning is potentially lower than your expected velocity. (Of note, some teams also use a fixed size for spikes in order to time-box them.)

I do not know of any built-in charting that shows extra annotation for these issues.

This topic seems worthy of a conversation with stakeholders to understand how capacity (and sprint planning choices) impacts velocity and value delivery. Perhaps consider describing this to stakeholders at your sprint reviews, and possibly adding an indicator to the summary (e.g., "SPIKE -- trying to...") so that you can show the Jira built-in sprint reports with the summary visible.

If you still want this on a burn chart, please investigate the Atlassian Marketplace for reporting apps that can do this type of annotation...or export the Jira data and build your own reporting to account for this impact.
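To illustrate the export-and-build-your-own route, here is a minimal sketch in Python. It assumes a CSV export with columns named "Sprint", "Summary", "Story Points", and "Time Spent (h)", and that spikes are flagged with a "SPIKE" prefix in the summary as suggested above; your actual export columns and conventions will differ, so treat the names as placeholders.

```python
# Sketch: summarize how time-boxed spikes affect per-sprint velocity
# from a Jira CSV export. Column names and the "SPIKE" summary prefix
# are assumptions, not a standard Jira export format.
import csv
import io
from collections import defaultdict

# Stand-in for the contents of an exported CSV file.
sample_export = """Sprint,Summary,Story Points,Time Spent (h)
Sprint 41,Implement login API,5,0
Sprint 41,SPIKE -- evaluate auth libraries,0,16
Sprint 42,Build dashboard widget,8,0
Sprint 42,SPIKE -- research event streaming,0,24
"""

points = defaultdict(int)       # completed story points per sprint
spike_hours = defaultdict(int)  # time-boxed spike hours per sprint

for row in csv.DictReader(io.StringIO(sample_export)):
    sprint = row["Sprint"]
    if row["Summary"].startswith("SPIKE"):
        spike_hours[sprint] += int(row["Time Spent (h)"])
    else:
        points[sprint] += int(row["Story Points"])

for sprint in sorted(points):
    print(f"{sprint}: {points[sprint]} pts completed, "
          f"{spike_hours[sprint]} h spent on spikes")
```

A per-sprint summary like this makes the velocity conversation concrete: each sprint's completed points appear next to the hours absorbed by spikes, which is exactly the annotation the built-in charts lack.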

Kind regards,

Hi Bill, How have you seen people do the fixed-sizing method (compared to hours) for spikes?

Correct. Spikes are normally for things a team hasn't done before. The work is "complex": too many unknowns. They don't know what it would take to do it, so it requires research first.

I was wondering if there was a placeholder for time-boxed spikes on the velocity charts that captures why the velocity dropped.

Bill Sheboy
Rising Star
May 16, 2023

Hi @Ryan Johnson D 

Ultimately this can be improved with communication to stakeholders, and transparently explaining what the team is doing and why.

To answer your second question first: the built-in charts offer little configuration for mixed sizing, such as story points for most items plus an occasional time-based item in the same sprint.

For fixed-size things like a spike, it may depend on what/how your team uses story point sizing:

  • If your team uses story points as "a forecast of effort/complexity for an item", just pick an upper limit for a spike (e.g., 1 point)
  • If your team only sizes things which have value to the end-stakeholder, consider not sizing spikes at all, and just manage the time-box procedurally through the standup and conversation

One exception might be if your team uses different types of spikes, as described by some practitioners like Chris Sterling in Managing Software Debt: Building for Inevitable Change, in which case you might decide to size only spikes of certain types:

  • research: broad, foundational work to gain knowledge; nothing released to production for this one
  • spike: quick and dirty implementation, designed to be thrown away and not released to production
  • tracer bullet: a very narrow implementation of production quality which is released to production; really just a tiny story

Kind regards,
