Jira 8.0 Server - consuming all of the server's memory

Felipe Santos February 28, 2019

Hi fellas, the app has an issue. I don't know what it is, but it starts consuming all available memory. Whatever amount I give it, 6 GB or 8 GB, it consumes it all and becomes very, very slow.

I am using a dedicated server with 8 GB of RAM and a 3.2 GHz Xeon processor.

I have checked the requirements, and the server is well above the minimum.


7 answers

1 vote
Daniel Eads
Atlassian Team
March 8, 2019

Hey Felipe,

Nothing immediately jumps out to me as off in the ps aux output you provided. One thing I can think of that would make Java go above its max heap limit is PermGen (Java 7) / Metaspace (Java 8+), although it seems strange that it would grow so high.

You can limit the maximum amount of Metaspace that Java can allocate with this flag, where the m suffix gives the value in MB:

-XX:MaxMetaspaceSize=1024m

This flag can be added to the JVM_SUPPORT_RECOMMENDED_ARGS section in your setenv.sh file as described in Setting properties and options on startup. Note that Jira has to be restarted for changes in this file to take effect.
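
For example, in setenv.sh it could look like the sketch below (an illustration only; if JVM_SUPPORT_RECOMMENDED_ARGS already contains other flags in your file, keep them inside the same quoted string):

# bin/setenv.sh in the Jira installation directory -- cap Metaspace at 1 GB (example value)
JVM_SUPPORT_RECOMMENDED_ARGS="-XX:MaxMetaspaceSize=1024m"

# after restarting Jira, confirm the flag was picked up:
ps aux | grep MaxMetaspaceSize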


It would be useful to know what is happening inside the JVM as it relates to memory allocation. There are a couple of tools you can use to see where the heap is being allocated and whether it has somehow escaped its defined limit of 1 GB; these tools would also help you see if your Metaspace size has grown staggeringly large.
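
As a rough sketch of what that could look like from the command line (assuming a JDK is installed on the server, since these utilities ship with the JDK rather than a plain JRE, and replacing 19561 with the current PID of the Jira java process):

# heap and Metaspace utilisation, sampled every 5 seconds
jstat -gcutil 19561 5000

# the classes currently occupying the most heap space
jcmd 19561 GC.class_histogram | head -n 30

# dump the live heap objects to a file for offline analysis in a memory analyser
jmap -dump:live,format=b,file=/tmp/jira-heap.hprof 19561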

Finally, if this is impacting your instance's ability to function and the above information didn't help you reach a resolution or get an idea of what's going wrong, we can get a support ticket going at support.atlassian.com/contact. This will let us get a zip file from your instance containing some critical information about your environment that will help track this down faster.

Cheers,
Daniel | Atlassian Support

gleads March 13, 2019

Without success.

0 votes
gleads March 15, 2019

ABRT has detected 5 problem(s) For more info run: abrt-cli list --since 1552598286
The report is listed below.


What is your impression?


time: Mon 11 Feb 2019 04:36:52 AM -02
cmdline: /usr/lib/systemd/systemd-logind
package: systemd-219-62.el7_6.3
uid: 0 (root)
count: 16
Directory: /var/spool/abrt/ccpp-2019-02-11-04:36:52-4892

id 9841677650a582456a49056f83c69b24e05dd1f8
reason: systemd-journald killed by SIGABRT
time: Tue 12 Mar 2019 04:14:25 AM -03
cmdline: /usr/lib/systemd/systemd-journald
package: systemd-219-62.el7_6.5
uid: 0 (root)
count: 2
Directory: /var/spool/abrt/ccpp-2019-03-12-04:14:25-30925

id dbc1f6f99d42fe0717d1f925ef299fbfb0564227
reason: systemd-journald killed by SIGABRT
time: Thu 21 Feb 2019 04:02:41 AM -03
cmdline: /usr/lib/systemd/systemd-journald
package: systemd-219-62.el7_6.3
uid: 0 (root)
count: 2
Directory: /var/spool/abrt/ccpp-2019-02-21-04:02:41-29702

id 12454199678e80315f99fbdde3ce3b03b366b61c
reason: NMI watchdog: BUG: soft lockup - CPU#4 stuck for 24s! [java:16010]
time: Fri 15 Mar 2019 02:20:19 AM -03
cmdline: BOOT_IMAGE=/vmlinuz-3.10.0-957.5.1.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto
rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
package: kernel
uid: 0 (root)
count: 1
Directory: /var/spool/abrt/oops-2019-03-15-02:20:10-21804-1
Reported: cannot be reported

id cfb648451bb44fa9772907c5f6f3e7388e921937
reason: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 24s! [gmain:5194]
time: Fri 22 Feb 2019 05:11:40 PM -03
cmdline: BOOT_IMAGE=/vmlinuz-3.10.0-957.5.1.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto
rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
package: kernel
uid: 0 (root)
count: 2
Directory: /var/spool/abrt/oops-2019-02-22-17:11:37-5093-3
Reported: cannot be reported

0 votes
Felipe Santos March 14, 2019

Sorry fellas, I was hasty.
The issue continues.

After Daniel Eads' suggestion the app became incredibly fast, but memory consumption keeps growing indefinitely.

I will continue researching this. If your team has an idea, please share it here.

Thanks

0 votes
Rene C. (Atlassian Support)
Atlassian Team
March 8, 2019

Hello, Felipe! I believe you can benefit from this article, which includes all the information you would need to troubleshoot performance issues. This is a tricky kind of problem to troubleshoot, and it always depends on several factors related to your environment, but hopefully it will help give some clarity:
https://confluence.atlassian.com/jirakb/troubleshooting-performance-problems-336169888.html

It is important to mention that Jira cannot use more memory than what you allocate to it, so make sure to restrict how much memory it is using by following https://confluence.atlassian.com/adminjiraserver/increasing-jira-application-memory-938847654.html
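
As a sketch, on a default Linux install those limits live in /opt/atlassian/jira/bin/setenv.sh (the values below are only examples, and the variables already exist in the file, so edit them rather than adding duplicates):

# bin/setenv.sh -- heap bounds Jira passes to the JVM as -Xms/-Xmx
JVM_MINIMUM_MEMORY="1024m"
JVM_MAXIMUM_MEMORY="2048m"

# after a restart, confirm the running process picked them up:
ps aux | grep -- -Xmx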

0 votes
gleads March 1, 2019

Java is consuming all the memory.
I have now put in 12 GB of RAM.

0 votes
gleads March 1, 2019

jira1    19561 88.4 12.5 6380996 1524248 ?     Sl   02:13  20:03 /opt/atlassian/jira/jre//bin/java -Djava.util.logging.config.file=/opt/atlassian/jira/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms384m -Xmx1024m -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=512m -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Addresses=true -Djava.awt.headless=true -Datlassian.standalone=JIRA -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true -Dmail.mime.decodeparameters=true -Dorg.dom4j.factory=com.atlassian.core.xml.InterningDocumentFactory -XX:-OmitStackTraceInFastThrow -Djava.locale.providers=COMPAT -Datlassian.plugins.startup.options= -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xloggc:/opt/atlassian/jira/logs/atlassian-jira-gc-%t.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintGCCause -Dignore.endorsed.dirs= -classpath /opt/atlassian/jira/bin/bootstrap.jar:/opt/atlassian/jira/bin/tomcat-juli.jar -Dcatalina.base=/opt/atlassian/jira -Dcatalina.home=/opt/atlassian/jira -Djava.io.tmpdir=/opt/atlassian/jira/temp org.apache.catalina.startup.Bootstrap start
root     21089  0.0  0.0 112708   980 pts/1    S+   02:35   0:00 grep --color=auto java
[root@admin bin]# free -m
              total        used        free      shared  buff/cache   available
Mem:          11853       10087         150          16        1615        1442
Swap:          9535         168        9367

0 votes
Ismael Jimoh
Rising Star
February 28, 2019

Hi @Felipe Santos 

It would be difficult to say what is going on without some sort of tool analysing the performance and some thread dumps.

I would ask whether you have made any changes to the JIRA settings themselves (it ships with a default min/max memory allocation in the setenv file found in your JIRA installation directory).

  • Can you check those values? (See the commands sketched below.)

Also check which other applications are running before JIRA starts up and how much memory they consume, to make sure that when JIRA starts there are enough resources for it to run as intended.
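
A quick sketch of both checks, assuming the default Linux install path /opt/atlassian/jira (adjust it to your installation directory):

# show the min/max heap values JIRA is configured with
grep -E 'JVM_(MINIMUM|MAXIMUM)_MEMORY' /opt/atlassian/jira/bin/setenv.sh

# list the processes using the most resident memory, to see what else competes with JIRA
ps aux --sort=-rss | head -n 15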
