Jira server memory leak after upgrade to 8.3.0

Hello all,

I am having some difficulty with our instance of Jira server.

We recently upgraded from 7.8.1 to 8.3.0, and after the upgrade I can see a memory leak when running 'top' on the Linux box.

[Screenshot: memory_leak.jpg, memory usage from 'top']

We have increased the JVM memory allocated to Jira as well:

JVM Input Arguments

-Djava.util.logging.config.file=/opt/atlassian/jira/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms1024m -Xmx6096m -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=512m -Djava.awt.headless=true -Datlassian.standalone=JIRA -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true -Dmail.mime.decodeparameters=true -Dorg.dom4j.factory=com.atlassian.core.xml.InterningDocumentFactory -Datlassian.plugins.enable.wait=300 -XX:-OmitStackTraceInFastThrow -Djava.locale.providers=COMPAT -Datlassian.plugins.startup.options= -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xloggc:/opt/atlassian/jira/logs/atlassian-jira-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintGCCause -Dignore.endorsed.dirs= -Dcatalina.base=/opt/atlassian/jira -Dcatalina.home=/opt/atlassian/jira -Djava.io.tmpdir=/opt/atlassian/jira/temp

Currently it takes about 20-24 hours before the instance runs out of memory and we need to restart it.
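Since it always dies within a day, one thing I am considering is adding the standard HotSpot heap-dump flags to the arguments above, so the next OutOfMemoryError leaves a dump we can open in a memory analyzer (a sketch; the dump path is just our log directory):

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/atlassian/jira/logs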

Edit: 

Also, I thought it might be worth noting this message:
[GC (Allocation Failure) [PSYoungGen: 1876157K->141943K(1875968K)] 3998513K->2269910K(4828672K), 0.1624618 secs] [Times: user=0.27 sys=0.00, real=0.16 secs]

in the atlassian-jira-gc-2019-08-11_23-21-00.log.0.current log file.

In the same file on our test environment (where I have not noticed any issue) we get messages like this:

966803.171: [GC (Allocation Failure) 2019-08-11T18:27:51.021-0400: 966803.171: [DefNew: 283175K->1703K(314560K), 0.0301996 secs] 861510K->580041K(1013632K), 0.0303531 secs] [Times: user=0.02 sys=0.00, real=0.03 secs]
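A quick way I have been scanning the production GC log for full collections (just a rough grep over the log files named above; nothing scientific):

grep "Full GC" /opt/atlassian/jira/logs/atlassian-jira-gc-*.log | tail -20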

Any thoughts, or what should I look into next?

Thank you so much,

Andrei

2 answers

Answer accepted
Petr Vaníček Community Leader Sep 24, 2019

Hi @Andrei Ghenoiu and @Jon Tice,

Try setting the Xms value to at least 2048m. Jira 8 needs more memory, and sometimes there is not enough time to grow the heap to a higher value before it fails.

My recommendation is to set both JVM values (Xms and Xmx) to the same value, based on the size of your instance. Usually 4096m is enough.
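For example, on a typical Linux install this is a small edit in <jira-install>/bin/setenv.sh, followed by a Jira restart (variable names as in a stock Jira 8 setenv.sh; adjust the size to your instance):

JVM_MINIMUM_MEMORY="4096m"
JVM_MAXIMUM_MEMORY="4096m"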

Thank you so much for the insight, @Petr Vaníček.

I never thought of setting the Xms value higher. I will try that tonight after work and report back.

These are the current JVM argument settings for our Linux VM:

-Djava.util.logging.config.file=/opt/atlassian/jira/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms4096m -Xmx8192m -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=512m -Djava.awt.headless=true -Datlassian.standalone=JIRA -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true -Dmail.mime.decodeparameters=true -Dorg.dom4j.factory=com.atlassian.core.xml.InterningDocumentFactory -Dcom.atlassian.jira.clickjacking.protection.exclude=/servicedesk/customer/portal/33,/servicedesk/customer/portal/44 -XX:-OmitStackTraceInFastThrow -Djava.locale.providers=COMPAT -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xloggc:/opt/atlassian/jira/logs/atlassian-jira-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintGCCause -Dignore.endorsed.dirs= -Dcatalina.base=/opt/atlassian/jira -Dcatalina.home=/opt/atlassian/jira -Djava.io.tmpdir=/opt/atlassian/jira/temp

I'm not 100% sure which two JVM settings you recommend keeping the same, but I see that our Xmx is double our Xms.

You can see our memory usage before the upgrade from 8.2.X to 8.3.2 on the 9th, and how it seems to be leaking now after the upgrade: https://slack-files.com/TDMLPGBFH-FNC0R18NP-2e97679c65

Petr Vaníček Community Leader Sep 24, 2019

The original post was about an upgrade from Jira 7 to 8, so small JVM values can cause this (I experienced this problem in our test environment).

I recommend keeping Xms and Xmx the same because sometimes Java needs more memory than it currently has, and the OS is not always able to serve that need in time; it then fails with memory errors.


@Petr Vaníček Thanks again for your suggestion. Last night the memory graph showed 36% free memory, and while prior to the change this would have kept decreasing, this morning the free memory was back up to 83%. I will keep an eye on it over the next couple of days and post an update then.


@Petr Vaníček After a few days of monitoring Jira and its memory usage, garbage collection works as expected, and so does the system memory.

Thank you so much!
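In case it helps anyone else, I have been spot-checking the heap while it settles with the JDK's jstat tool (assumes a JDK on the box; replace the pid with your Jira process id):

jstat -gcutil <jira-pid> 5000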

Did you ever find a resolution to this issue?

Hi Jon,

Have you been running into a similar issue too?

On my end, we have this running on a VM, and after a patch and restart it has been fine as far as the VM itself running out of memory.

But as far as the Jira instance goes, after about a day or two the memory graph is down to 5-9% free memory and Jira slows down until it finally stops responding, so it may be something related to garbage collection in Jira.

We have an Oracle DB for the backend, and we already have the latest Oracle JDBC driver there (a memory leak was mentioned in a different post, and updating the driver was a solution that worked for some, but not for all).
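For anyone who needs it, updating the driver was just a matter of dropping the new jar into Jira's lib directory and restarting (a sketch; path and jar name assumed from a default Linux install):

cp ojdbc8.jar /opt/atlassian/jira/lib/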


I have yet to upgrade to the latest 8.4.x, so we'll see what happens then.

In our case, it appears that there was a Java process that was not killed properly during the last upgrade. We killed all Java processes and restarted the Jira service, and it seems to have returned to normal usage.
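Roughly what we did, for anyone hitting the same thing (a sketch; the service name is whatever your installer registered):

ps -ef | grep -i [j]ira   # find any leftover Jira java processes
kill <stale-pid>          # kill -9 only if it refuses to exit
sudo service jira restart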


Great that you figured it out!
