Jira Server crashes with GC (Allocation Failure)

Nico van der Walt January 18, 2023

Good day!

We have a fairly large Jira Server instance that keeps crashing over time, complaining that it has run out of memory.

In the atlassian-jira-gc log, this error appears all over the place:
2023-01-19T08:40:45.611+0200: 74806.089: [GC (Allocation Failure) [PSYoungGen: 2282149K->24502K(2522624K)] 5247035K->2989882K(8115200K), 0.0646456 secs] [Times: user=0.17 sys=0.01, real=0.07 secs]
2023-01-19T08:41:03.633+0200: 74824.111: [GC (Allocation Failure) [PSYoungGen: 2260406K->16345K(2509312K)] 5225786K->2982095K(8101888K), 0.0645660 secs] [Times: user=0.17 sys=0.00, real=0.07 secs]
2023-01-19T08:41:09.456+0200: 74829.933: [GC (Allocation Failure) [PSYoungGen: 2252249K->206907K(2467328K)] 5217999K->3172824K(8059904K), 0.0837510 secs] [Times: user=0.30 sys=0.00, real=0.08 secs]
2023-01-19T08:41:15.535+0200: 74836.013: [GC (Allocation Failure) [PSYoungGen: 2413115K->17943K(2501120K)] 5379032K->2983989K(8093696K), 0.0627081 secs] [Times: user=0.17 sys=0.00, real=0.07 secs]
2023-01-19T08:41:21.018+0200: 74841.496: [GC (Allocation Failure) [PSYoungGen: 2224151K->15768K(2525184K)] 5190197K->2982585K(8117760K), 0.0722156 secs] [Times: user=0.17 sys=0.01, real=0.07 secs]
2023-01-19T08:41:29.668+0200: 74850.146: [GC (Allocation Failure) [PSYoungGen: 2257816K->270816K(2512896K)] 5224633K->3244056K(8105472K), 0.0822419 secs] [Times: user=0.34 sys=0.00, real=0.08 secs]

We have tried the solution mentioned here:
https://community.atlassian.com/t5/Jira-Software-questions/Garbage-Collector-Alllocation-Failure/qaq-p/1322269

which advised setting the -Xmx and -Xms values to the same size. We made these parameters equal, but the error still occurs and Jira still crashes over time.
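
For context, on a standard Linux install these values come from Jira's bin/setenv.sh. A rough sketch of what the change looks like (the 8192m figure below is only illustrative - the actual sizing was done by TechOps):

JVM_MINIMUM_MEMORY="8192m"
JVM_MAXIMUM_MEMORY="8192m"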

Any advice here would be greatly appreciated!!

Unfortunately, only my company's TechOps department has access to adjust server-side settings, but what I do know is:

Java VM Memory:
Total Memory = 7915 MB
Free Memory = 3971 MB
Used Memory = 3944 MB

Jira Info:
Version = 8.5.5

2 answers

Answer accepted
Nico van der Walt May 4, 2023

For anyone who runs into this issue: we were able to resolve it by changing the default garbage collector from the one Jira ships with to G1GC. Since we made the change, Jira has been stable for the last two months.
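
A minimal sketch of the kind of change involved, assuming a standard install where Jira picks up extra JVM options from bin/setenv.sh (check the exact flags against Atlassian's GC tuning documentation for your Jira and Java version):

JVM_SUPPORT_RECOMMENDED_ARGS="-XX:+UseG1GC"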

Nic Brough -Adaptavist-
Rising Star
January 21, 2023

There is no simple answer to this.

One question, though - the Java memory settings you have there, did you get them from the System Information page?  I ask because that is only a snapshot view - it's the numbers from the system at the moment you rendered the page. The free/used figures (not the total) could vary wildly within seconds.

You're going to need to do some monitoring, testing, and possibly even deep investigation.

The monitoring is the most important part - you need to see what memory Jira is using over at least one period, where a period is the time from restarting it through to when it crashes.  You don't need a huge amount of detail - just the used memory, sampled every few seconds, is enough to draw simple graphs.  I like graphs because you can easily see patterns in them.
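
One cheap way to get those graphs is to mine the GC log shown in the question, since every "GC (Allocation Failure)" line records the whole-heap usage after the collection. A rough sketch, assuming the Java 8 ParallelGC log format in those lines (the script name and CSV columns are just made up for illustration; plot the output in a spreadsheet):

import csv
import re
import sys

# Assumes lines like "<ISO timestamp>: <uptime>: [GC (Allocation Failure)
# [PSYoungGen: a->b(c)] d->e(f), t secs]" and emits one CSV row per collection.
GC_LINE = re.compile(
    r"^(?P<when>\d{4}-\d{2}-\d{2}T[\d:.]+[+-]\d{4}): [\d.]+: \[GC .*"
    r"\[PSYoungGen: \d+K->\d+K\(\d+K\)\] "
    r"\d+K->(?P<after>\d+)K\((?P<committed>\d+)K\)"
)

def main(path):
    writer = csv.writer(sys.stdout)
    writer.writerow(["timestamp", "heap_used_after_gc_mb", "heap_committed_mb"])
    with open(path) as log:
        for line in log:
            match = GC_LINE.match(line)
            if match:
                writer.writerow([
                    match.group("when"),
                    int(match.group("after")) // 1024,      # KB -> MB
                    int(match.group("committed")) // 1024,  # KB -> MB
                ])

if __name__ == "__main__":
    main(sys.argv[1])  # e.g. python gc_trend.py atlassian-jira-gc.log > heap.csv

If the heap_used_after_gc_mb column keeps climbing across days, that is the leak pattern described below; if it stays flat with occasional jumps, that is the spike pattern.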

In this case, there are two patterns I would expect to see.  Both are generally wavy, and you will probably be able to spot nights and weekends having lower memory use, but the shape of the overall trend is more important:

  • A trend upwards until the crash suggests you have a memory leak.
  • A flat trend, with sudden spikes and the crash at the end, suggests a heavy process is kicking in and eating all the memory.

There may be other patterns, but those two are the most common.

The first test would be to increase the maximum allocated RAM, and see how that affects the graphs and crash frequency.

But then you're going to need to investigate what the server was doing.  If it's a leak, you'll need to take thread dumps to find out what processes are eating the memory.  If it's a spiked crash, you need to look at what the system was doing when it happened (thread dumps can be useful there too).
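
If you (or TechOps) have the JDK tools available on the server, capturing that evidence is roughly the following, where <jira-pid> is a placeholder for the Jira JVM's process id - take several thread dumps a few seconds apart so you can compare them:

jstack <jira-pid> > jira-threads-$(date +%H%M%S).txt
jmap -dump:live,format=b,file=jira-heap.hprof <jira-pid>

The heap dump is the one to analyse for a leak, but note that it pauses the JVM while it is written and the file can be roughly as large as the heap itself.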
