Confluence and Jira out of memory issues (docker)

sietsedeglee August 26, 2020

Hi,

For a while now I've been trying to get Jira and Confluence running stably in Docker. I'm using the following docker-compose file:

version: "3"

services:
  jiradb:
    image: postgres:10.5-alpine
    container_name: jiradb
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./jiradb:/var/lib/postgresql/data
    environment:
      - TZ=Europe/Amsterdam
      - PG_TZ=Europe/Amsterdam
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=jira
  jira:
    image: atlassian/jira-software:latest
    container_name: jira
    restart: always
    depends_on:
      - jiradb
    ports:
      - 8080:8080
    volumes:
      - ./jira:/var/atlassian/application-data/jira
    environment:
      - TZ=Europe/Amsterdam
      - PG_TZ=Europe/Amsterdam
      - ATL_PROXY_PORT=443
      - ATL_PROXY_NAME=jira
      - ATL_TOMCAT_SCHEME=https
      - ATL_TOMCAT_SECURE=true
      - JVM_MINIMUM_MEMORY=1024m
      - JVM_MAXIMUM_MEMORY=3072m
      - JVM_RESERVED_CODE_CACHE_SIZE=512m
      - ATL_TOMCAT_MAXTHREADS=100
      - ATL_DB_TIMEOUT=30
      - ATL_DB_POOLMAXSIZE=100
  confluencedb:
    image: postgres:10.5-alpine
    container_name: confluencedb
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./confluencedb:/var/lib/postgresql/data
    environment:
      - TZ=Europe/Amsterdam
      - PG_TZ=Europe/Amsterdam
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=confluence
    command: -c max_connections=400
  confluence:
    image: atlassian/confluence-server:latest
    container_name: confluence
    restart: always
    depends_on:
      - confluencedb
    ports:
      - 8090:8090
    volumes:
      - ./confluence:/var/atlassian/application-data/confluence
    environment:
      - TZ=Europe/Amsterdam
      - PG_TZ=Europe/Amsterdam
      - ATL_PROXY_PORT=443
      - ATL_PROXY_NAME=confluence
      - ATL_TOMCAT_SCHEME=https
      - ATL_TOMCAT_SECURE=true
      - JVM_MINIMUM_MEMORY=1024m
      - JVM_MAXIMUM_MEMORY=4096m
      - JVM_RESERVED_CODE_CACHE_SIZE=512m
      - ATL_TOMCAT_MAXTHREADS=100
      - ATL_DB_TIMEOUT=30
      - ATL_DB_POOLMAXSIZE=200
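One thing I'm considering is capping each container explicitly, so a runaway container can't exhaust the host. A sketch only, assuming the compose file is dropped to format 2.4 (as far as I know, mem_limit and pids_limit are not honoured by plain docker-compose under version "3"); the values here are guesses, not tested recommendations:

```yaml
# Sketch: compose file format 2.4, per-service caps (values are assumptions)
version: "2.4"
services:
  jira:
    image: atlassian/jira-software:latest
    mem_limit: 4g      # heap (3g) + metaspace/code cache + native headroom
    pids_limit: 4096   # every JVM thread counts against this limit
```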

I'm running nginx on the host system with a reverse proxy to the Jira and Confluence instances with the following config:

server {
    listen 443 ssl http2;

    ssl_certificate /etc/ssl/local/jira.crt;
    ssl_certificate_key /etc/ssl/local/jira.key;

    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_session_timeout 10m;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_ecdh_curve secp384r1;
    ssl_dhparam /etc/ssl/local/dhparam.pem;

    resolver 8.8.4.4 8.8.8.8 valid=300s;
    resolver_timeout 5s;

    add_header X-Content-Type-Options nosniff;

    server_name jira;
    client_max_body_size 10M;

    access_log /var/hosts/jira/log/access.log;
    error_log /var/hosts/jira/log/error.log;

    location ~ /.well-known {
        allow all;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Ssl-Offloaded "1";
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The host is Ubuntu 18.04 LTS with 16 GB of RAM, 6 CPU cores and ample disk space. I've raised the limits for all users to very large values. Checking free -m while the issue is occurring shows more than 10 GB of free RAM available.

/etc/security/limits.conf

root soft nproc 65535
root hard nproc 65535
root soft nofile 65535
root hard nofile 65535

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535

After booting up the containers everything works well, but after a while the server starts reporting out-of-memory issues and even the host system crashes because of 'cannot fork, resource unavailable' errors. Only one or two users are active in this setup at the moment; usually the platform crashes once the second user starts testing.
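For what it's worth, 'unable to create new native thread' plus 'cannot fork' while free -m still shows spare RAM usually points at a task/thread ceiling rather than at memory itself. A minimal sketch of what I've been checking (the cgroup path is an assumption; it depends on the cgroup driver in use):

```shell
# "cannot fork" with free RAM usually means a task limit, not memory.
# System-wide ceilings:
cat /proc/sys/kernel/pid_max        # highest PID the kernel will assign
cat /proc/sys/kernel/threads-max    # total threads the kernel allows
# Per-container task cap, if Docker applied one (cgroup v1 path is an
# assumption; adjust for your cgroup driver):
cat /sys/fs/cgroup/pids/docker/*/pids.max 2>/dev/null || true
```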

Docker stats shows the following:

(screenshot of docker stats output: Screenshot 2020-08-26 at 10.47.18.png)

ulimit -a

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1545097
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Number of processes (ps aux | wc -l): 112
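Note that ps aux counts processes, while nproc and the pids cgroup count kernel tasks, i.e. threads — and a single JVM easily holds several hundred of them. A sketch of counting tasks directly via /proc instead:

```shell
# nproc and the pids cgroup limit kernel tasks (threads), not processes,
# so count the task directories under /proc instead of ps aux output.
total_threads=$(ls -d /proc/[0-9]*/task/[0-9]* 2>/dev/null | wc -l)
total_procs=$(ls -d /proc/[0-9]* 2>/dev/null | wc -l)
echo "processes: $total_procs"
echo "threads:   $total_threads"
```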

I've been trying a lot of different options:

  • Setting smaller and bigger JVM memory limits / heap sizes
  • Setting the minimum and maximum heap to the same value
  • Raising the number of allowed database connections in PostgreSQL
  • Increasing the maximum number of PIDs and the open file limit
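One limit that list doesn't cover: as I understand it, /etc/security/limits.conf only applies to login sessions, not to systemd services, and on Ubuntu 18.04 systemd applies its own task cap (TasksMax / DefaultTasksMax) to units such as the Docker daemon. A sketch of a drop-in override (unit name and path assumed; created via systemctl edit docker.service, then daemon-reload and a restart):

```
# /etc/systemd/system/docker.service.d/override.conf (sketch)
[Service]
TasksMax=infinity
```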

The errors are all in line with the following and, once they start occurring, happen in Jira, Confluence and the databases alike:

confluencedb | 2020-08-25 21:53:11.221 CEST [1] LOG: could not fork new process for connection: Resource temporarily unavailable

jira | 25-Aug-2020 21:52:53.762 SEVERE [http-nio-8080-exec-7] org.apache.coyote.AbstractProtocol$ConnectionHandler.process Failed to complete processing of a request
jira | java.lang.OutOfMemoryError: unable to create new native thread

The host OS errors that start appearing at this time look like this:

root@server:/home/atlassian/docker/jira/log# who
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable
bash: fork: retry: Resource temporarily unavailable

Hopefully someone has experience with this error and is able to help out. I've been working on this for a few days now and am completely stuck.

Thanks!

1 answer

0 votes
Gonchik Tsymzhitov
Community Leader
February 18, 2021

Please extend

JVM_MAXIMUM_MEMORY
