JIRA - full GC is run increasingly often

Andrei [errno] Community Champion Oct 31, 2016

... this might be the first question of many to follow... trying to get to the root of a slow-performing JIRA instance.

so: after a few 60+ sec delays we started monitoring GC activity (tailing gc.log), looking for entries like:

[Full GC (Metadata GC Threshold) [PSYoungGen: 34739K->0K(1223168K)] [ParOldGen: 184K->33658K(2796544K)] 34923K->33658K(4019712K), [Metaspace: 20904K->20904K(1069056K)], 0.0975657 secs] [Times: user=0.45 sys=0.04, real=0.10 secs]
...
[Full GC (System.gc()) [PSYoungGen: 6168K->0K(2101760K)] [ParOldGen: 33666K->38319K(2796544K)] 39835K->38319K(4898304K), [Metaspace: 25441K->25441K(1073152K)], 0.1013057 secs] [Times: user=0.56 sys=0.01, real=0.11 secs]
...
[Full GC (Ergonomics) [PSYoungGen: 2754374K->0K(2755584K)] [ParOldGen: 5592570K->3039221K(5592576K)] 8346945K->3039221K(8348160K), [Metaspace: 402263K->396816K(1433600K)], 13.2142183 secs] [Times: user=83.80 sys=2.88, real=13.21 secs]
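
A quick way to see how often Full GCs fire, and why, is to count the cause strings in the log. A small sketch (the sample log below is illustrative; in practice point the command at your real gc.log path):

```shell
# Tiny sample gc.log for illustration; substitute the real gc.log path
# your JVM writes to.
cat > /tmp/gc-sample.log <<'EOF'
[Full GC (Metadata GC Threshold) [PSYoungGen: 34739K->0K(1223168K)] ...
[Full GC (System.gc()) [PSYoungGen: 6168K->0K(2101760K)] ...
[Full GC (Ergonomics) [PSYoungGen: 2754374K->0K(2755584K)] ...
[Full GC (Ergonomics) [PSYoungGen: 2754374K->0K(2755584K)] ...
EOF

# Count Full GC events per cause: a growing "Ergonomics" count usually
# means the old generation keeps filling up, while "System.gc()" points
# at explicit calls (often RMI or a plugin).
awk -F'[][]' '/Full GC/ { print $2 }' /tmp/gc-sample.log | sort | uniq -c | sort -rn
```

If the "Ergonomics" bucket dominates and keeps growing, the heap is genuinely under pressure rather than being collected by explicit System.gc() calls.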

I took a look at pstree and noticed what I think is a high number of java threads... is that normal?
the box runs only JIRA (with the bundled Tomcat) + nginx + postgres, no other Java stuff...

jira ~ pstree
init─┬─atd
     ├─crond
     ├─dhclient
     ├─java───5767*[{java}]
     ├─6*[mingetty]
     ├─nginx───2*[nginx]
     ├─postmaster───16*[postmaster]
...skip
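
For a quick thread count you can also ask the kernel directly, without jstack. A sketch, shown here against the current shell's PID ($$) so it runs anywhere; substitute the JIRA JVM's PID (e.g. from `pgrep -o java`, assuming it's the only java process on the box):

```shell
# Demo PID; replace with the JIRA JVM's PID in practice.
PID=$$

# NLWP = "number of lightweight processes", i.e. the thread count:
ps -o nlwp= -p "$PID"

# Same figure straight from the kernel:
grep '^Threads:' "/proc/$PID/status"
```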

thanks!

3 answers

1 accepted

1 vote
Chris Fuller Atlassian Team Oct 31, 2016

If 5767 is the number of threads, then absolutely not.  Depending on how busy the system is and how you have it configured, I wouldn't normally expect to see more than a few hundred or so.  Whatever these threads are doing, it seems likely that they are related to your GC problems.

I would suggest using jstack to dump the threads.  The output format can vary slightly by operating system, but this is likely to give you something interesting:

jstack [JIRA_Process_ID_Here] | grep tid= | cut -d\" -f2 | sort | uniq -c

 

The output should look something like this:

cfuller@crf:~$ jstack 40568 | grep tid= | cut -d\" -f2 | sort | uniq -c
   1 Attach Listener
   1 C1 CompilerThread3
   1 C2 CompilerThread0
   1 C2 CompilerThread1
   1 C2 CompilerThread2
   1 DestroyJavaVM
   1 Finalizer
   1 GC task thread#0 (ParallelGC)
...

 

This is the count of threads with a given name, so many duplicates of a single kind of thread will show up clearly. You should also be able to see whether there is a whole series of threads with similar names, like "pool-8-thread-1" through "pool-8-thread-5601". In that case, since the thread name isn't helpful, jstack would have to catch one of these naughty threads in the act of doing something interesting for you to figure out where it came from.
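
To catch one of those threads in the act, you can filter a thread dump down to just the stacks whose header matches a suspicious name. A sketch: in practice you'd pipe `jstack <pid>` in; here a two-thread sample of jstack-style output stands in (the thread name and class names are made up):

```shell
# jstack separates stacks with header lines that start with a quote;
# the awk toggles printing on each header, so only matching threads'
# full stacks are shown.
printf '%s\n' \
  '"pool-8-thread-1" #99 prio=5 tid=0x1 nid=0x54ff waiting on condition' \
  '   at com.example.Worker.run(Worker.java:10)' \
  '' \
  '"main" #1 prio=5 tid=0x2 nid=0x1 runnable' \
  '   at java.lang.Thread.run(Thread.java:748)' \
| awk -v name='pool-8-thread' '/^"/ { show = ($0 ~ name) } show'
```

Only the "pool-8-thread-1" block survives the filter; against a live JVM, replace the printf with `jstack <pid>` and the name pattern with the thread name you're chasing.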

Andrei [errno] Community Champion Oct 31, 2016

thanks!
hundreds of

"SP-SPTaskSync" #...some-number... prio=5 os_prio=0 tid=0x00007faeec21c000 nid=0x54ff waiting on condition [0x00007fae6afb9000]

googling for SP-SPTaskSync doesn't find anything... I guess you've pushed me in the right direction anyway. will dig some more

Chris Fuller Atlassian Team Oct 31, 2016

I can't find a reference to anything like that anywhere in JIRA or any plugin we bundle.  My guess is that this is a third-party plugin you have installed.  It sounds like it might be one aimed at synchronizing issue data with some external source, so I'd look through the user-installed plugins to see if there's a good candidate in the list.  If you can find one of those threads that's actually doing something interesting (say more than 20 stack frames deep), then you can probably figure out the plugin's vendor from the class package names (meaning that if you see several frames from "org.example.foo-plugin" then that would be a big hint).
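
One way to make that package-name check mechanical is to count the leading package components of each "at ..." frame in the suspicious threads' stacks. A sketch, fed with made-up sample frames in place of real `jstack` output (com.example.plugin is hypothetical, and the three-component prefix depth is an arbitrary choice):

```shell
# Sample frames standing in for `jstack <pid>` output already filtered
# down to the suspicious threads:
printf '%s\n' \
  '   at com.example.plugin.Sync.run(Sync.java:42)' \
  '   at com.example.plugin.Task.call(Task.java:7)' \
  '   at java.lang.Thread.run(Thread.java:748)' \
| grep -oE 'at [[:alnum:]._$]+\(' \
| cut -d' ' -f2 | cut -d. -f1-3 \
| sort | uniq -c | sort -rn
```

The dominant non-JDK prefix (here com.example.plugin) is usually the vendor's package, which you can match against the user-installed plugin list.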

1 vote
David Currie Atlassian Team Oct 31, 2016

Hey, this is a bug in the BigPicture plugin - please disable it as soon as humanly possible and raise it with the developers; they can be contacted at BigPicture Support. Otherwise you'll likely start hitting OS thread limits and run into problems such as the JIRA application crashing with "OutOfMemoryError: unable to create new native thread", which can result in outages.

Also, as it's a thread pool, disabling the plugin most likely won't drop the threads; restarting JIRA will.

Edit: Actually, it looks like this is something SoftwarePlant uses in a shared library, so it could be any of their plugins; they would be the best ones to verify that. I'd check whether you have any SoftwarePlant plugins installed and disable them. We found this same problem with another customer using that specific plugin, BigPicture.

Hi, 

I hadn't noticed this question before. The problem you described is indeed caused by an old version of BigPicture. It was fixed a while ago, so please upgrade.

In case of any issues, please contact BigPicture Support via support@softwareplant.com
