JIRA 6.0.5 performance

Romit Sen September 9, 2013

We upgraded from v4.2.1 to v6.0.5 to improve performance. It did help, but I think we can fine-tune further. Currently, under high load, the number of connections on the JVM climbs and JIRA slows down.

We have about 800,000 issues, 10,000+ users, ~300 projects.

We have 12 cores and 20 GB of heap space. Here are the startup parameters:

-Djava.util.logging.config.file=/usr/local/atlassian-jira-6.0.5-standalone/conf/logging.properties
-XX:MaxPermSize=384m
-Xms20480m
-Xmx20480m
-Dcom.sun.management.jmxremote
-XX:HeapDumpPath=/appl/heapdumps
-XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCDateStamps
-XX:+PrintGCTimeStamps
-verbose:gc
-Xloggc:/appl/jira/data/log/atlassian-jira-gc.20130908-233033.log
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true
-Dmail.mime.decodeparameters=true
-Dsvnkit.http.methods=Basic,Digest,Negotiate,NTLM
-XX:NewSize=8192m
-XX:+UseParallelOldGC
-XX:+UseCompressedOops
-Dcom.sun.management.jmxremote.port=8066
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/usr/local/tomcat/conf/jmx-password.txt
-Dcom.sun.management.jmxremote.access.file=/usr/local/tomcat/conf/jmx-access.txt
-Dcom.sun.management.jmxremote.ssl=false
-Djava.awt.headless=true
-Datlassian.standalone=JIRA
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true
-Dmail.mime.decodeparameters=true
-Datlassian.plugins.enable.wait=300
-XX:+PrintGCDateStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.endorsed.dirs=/usr/local/tomcat/endorsed
-classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/atlassian-jira-6.0.5-standalone/bin/tomcat-juli.jar
-Dcatalina.base=/usr/local/atlassian-jira-6.0.5-standalone
-Dcatalina.home=/usr/local/tomcat
-Djava.io.tmpdir=/usr/local/atlassian-jira-6.0.5-standalone/temp
org.apache.catalina.startup.Bootstrap start
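Since GC logging is already enabled via -Xloggc, the log can show whether the 20 GB heap is actually causing long pauses. A minimal sketch that totals pause times from the log (the line format, ending in "... <pause> secs]", is assumed from typical -verbose:gc / -XX:+PrintGCDateStamps output on HotSpot 6/7; adjust the match if your log differs):

```shell
# Summarize GC pause count, total, and max from the JIRA GC log.
# Assumes HotSpot lines ending in "... <pause> secs]".
awk '/secs\]/ {
    for (i = 1; i <= NF; i++)
        if ($i == "secs]") {          # field before "secs]" is the pause time
            t = $(i - 1)
            sum += t; n++
            if (t > max) max = t
        }
}
END { printf "pauses=%d total=%.2fs max=%.3fs\n", n, sum, max }' \
    /appl/jira/data/log/atlassian-jira-gc.20130908-233033.log
```

A high max or a large total relative to wall-clock time would point at the heap size (or collector choice) rather than JIRA itself.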

6 answers

Romit Sen September 12, 2013

Very recently, I have started observing that when I try to recycle the JVMs to get rid of the high thread count, JIRA doesn't start up easily:

com.atlassian.util.concurrent.LazyReference$InitializationException: com.opensymphony.module.propertyset.PropertyImplementationException: 
Unable to establish a connection with the database. (null,  message from server: "Host 'tryjira00.intra.searshc.com' is blocked because of many connection errors; 
unblock with 'mysqladmin flush-hosts'")
 
Are the 2 issues connected?
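For reference, that MySQL error means the server has blacklisted the JIRA host after more than max_connect_errors aborted connection attempts, which repeated JVM recycles can easily trigger. A sketch of the usual remedy, run on the MySQL server; the credentials and the threshold value are illustrative placeholders:

```shell
# Clear the existing block for the JIRA host.
mysqladmin -u root -p flush-hosts

# Raise the threshold so aborted connections during JVM recycles
# don't re-block the host (10000 is an illustrative value).
mysql -u root -p -e "SET GLOBAL max_connect_errors = 10000;"
```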
Romit Sen September 12, 2013

Our server has multiple versions of Java:

[jira@tryjira00 log]$ java -version

java version "1.6.0_20"

Java(TM) SE Runtime Environment (build 1.6.0_20-b02)

Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode)

However, JIRA is pointing to 1.7.0_21:

jdk -> /usr/java/jdk1.7.0_21

Does that make a difference?
Dave C
Atlassian Team
September 12, 2013

Having multiple versions installed won't make any difference; JIRA only uses the version of Java it picks up from the JAVA_HOME variable for archive installations (installed from a ZIP or tarball), or the bundled JRE for the standalone installer.

Using an unsupported version of Java can definitely make a difference! However, from memory, you're using a supported version. The core problem is the size of the instance and the heap space provided to the JVM.
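A quick way to see which JVM an archive installation would start with is to check JAVA_HOME before launching JIRA (a sketch; the resolution order described here is the usual start-jira.sh behavior, and the paths are examples):

```shell
# Print the JVM an archive install would start with. The startup script
# resolves Java from JAVA_HOME; the bundled JRE is used if it is unset.
if [ -n "${JAVA_HOME:-}" ]; then
    echo "JIRA will use: ${JAVA_HOME}/bin/java"
else
    echo "JAVA_HOME unset: bundled JRE will be used"
fi
```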

Romit Sen September 11, 2013

Thanks, David. I think I can clear up some of my understanding of heap organization from this blog!

JohnA
Rising Star
September 9, 2013

Hi Romit,

I'm not sure you actually need a 20 GB heap, and in fact a heap that large might even be counter-productive, but it's simply not possible for us to advise you based solely on the size of your instance and the startup parameters. JVM tuning is a science, and no scientist would give you meaningful predictions from such limited data, so I would be very wary of any advice based solely on the info provided.

You should seriously consider implementing some kind of monitoring to identify what resources the application actually uses, then give it 30% more than the maximum you see on a day-to-day basis. That said, the general look of your startup parameters is OK, and if the application isn't struggling, don't change them; keeping the application stable is always the priority.
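The 30%-headroom rule above can be turned into simple arithmetic once monitoring produces a peak figure. A sketch, with the observed peak purely illustrative:

```shell
# Size -Xms/-Xmx from the observed peak heap use plus ~30% headroom.
peak_mb=6144                      # illustrative monitored peak, in MB
xmx_mb=$(( peak_mb * 13 / 10 ))   # +30%, integer arithmetic
echo "-Xms${xmx_mb}m -Xmx${xmx_mb}m"
```

Setting -Xms equal to -Xmx, as the original parameters already do, avoids heap resizing pauses.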

However, the biggest performance boosts usually come from upgrading the JVM version when there have been performance improvements in its architecture, so it's worth watching the JVM release notes for those. Given the size of your instance, you might also want to consider bringing in an Atlassian Expert to advise you, because Answers probably isn't the place for this kind of tuning, at least while the questions are open-ended.

All the best,
John

Theinvisibleman
Atlassian Team
September 9, 2013

Hi Romit,

First things first: that's a big instance, with a lot of heap provided to it. Sometimes the sheer size of an instance can cause issues on its own. I would recommend having a look at this guide on how to scale your JIRA instance - Scaling JIRA.

From there, you might want to use some of that information as a reference and scale your instance accordingly.

Harry Chan
Rising Star
September 9, 2013

Hi, are you using the latest Java 7? Are you sure you need such a large heap? It may not be a good idea, as it can lead to long garbage-collection pauses. If possible, look into a different garbage-collection algorithm than -XX:+UseParallelOldGC, given such a large heap.

-XX:+UseCompressedOops is already set by default, so it isn't needed.
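If you do experiment with another collector, the flag changes on Java 7 would look roughly like this (illustrative only; the pause-time target is a placeholder, and any change should be tested in a UAT environment before production):

```shell
# Replace -XX:+UseParallelOldGC with ONE of the following (Java 7):
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled   # concurrent low-pause CMS
-XX:+UseG1GC -XX:MaxGCPauseMillis=200                   # G1 with a pause-time target
```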

JohnA
Rising Star
September 9, 2013

My experience is that -XX:+UseParallelOldGC is actually a lot more efficient than -XX:+UseG1GC (and unless you know what you are doing, enabling -XX:+UseConcMarkSweepGC is a very bad idea), so I wouldn't recommend a different collector unless tests in a UAT environment have specifically shown it will perform better.
