
Container 'Build' exceeded memory limit npm test

Zeeshan Shabbir
I'm New Here
Those new to the Atlassian Community have posted less than three times. Give them a warm welcome!
November 14, 2022

Hi, I have a very simple pipeline where I am trying to run the unit tests of a forked version of Apache Superset, but it always fails with a Container 'Build' exceeded memory limit error. To confirm that there isn't a memory leak in our tests, I cloned the Superset repository from GitHub and ran the tests, but the problem persists.

I have also tried removing the Docker service from the step, as well as doubling the step size, as suggested in other similar questions on this forum.


definitions:
  services:
    docker:
      memory: 1000
  steps:
    - step: &runLint
        name: Check linting
        caches:
          - node
        script:
          - cd superset-frontend
          - npm ci
          - npm run lint

    - step: &runTest
        name: Run Tests
        size: 2x
        caches:
          - node
        script:
          - cd superset-frontend
          - npm ci
          - npm run test

pipelines:
  custom: # Pipelines that are triggered manually
    deploy-apps:
      - variables:
          - name: RELEASE_TYPE
          - name: PRE_RELEASE_ID
      - step: *runLint
      - step: *runTest

  pull-requests:
    '**':
      - step: *runTest

options:
  size: 2x

1 answer

0 votes
Syahrul
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
November 15, 2022

Hey @Zeeshan Shabbir 

G'day.

I believe a good start to troubleshooting this issue is understanding which processes consume most of your memory in the build.

You can identify this by adding the following commands in your scripts:

- while true; do echo "Memory usage in bytes:" && cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes; sleep 2; done &
- while true; do date && ps aux && echo "" && sleep 2; done &

This will print out the memory the build is currently using and which processes are consuming it. Locate the point at which the build freezes and fails, then review which processes are running at that time and how much memory each one is using, based on the output.
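If the full ps aux listing is hard to scan, a snapshot of just the top consumers may help. This is a sketch that assumes a procps-style ps (the common default on Linux build images) which supports the --sort flag; run it inside the same kind of while loop as above:

```shell
# Snapshot the 5 processes with the highest resident memory (RSS);
# assumes a procps-style ps that supports --sort.
ps aux --sort=-rss | head -n 6   # header line + top 5 consumers
```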

Let me know how it goes.

Cheers,
Syahrul

Zeeshan Shabbir
I'm New Here
November 15, 2022

Thanks, I'll check this out.

Muhammad Usman Khawar
I'm New Here
November 16, 2022

Hi @Syahrul

We implemented it, and the pipeline fails again. The memory consumption just before the pipeline terminates is shown below:

[Screenshot 2022-11-16 at 5.30.29 PM.png: memory consumption just before pipeline termination]


To deal with this issue, we onboarded a custom Linux Docker runner, and the pipeline runs successfully there.

Docker stats show the build container's memory utilisation peaking at ~2.5 GB.


Our preference would be to run this in our existing setup without the overhead of a custom runner; any assistance in this regard from your side would be appreciated.


Please let me know if any further elaboration is required from my end to debug this issue.


Best,

Usman

Syahrul
Atlassian Team
November 16, 2022

Hey @Muhammad Usman Khawar 

G'day.

In the ps aux output screenshot, VSZ is the Virtual Set Size, the amount of virtual memory assigned to a process when it starts executing.

The Virtual Set Size is simply how much memory a process has available for its execution, while RSS (Resident Set Size) is the memory a process is actually using: the actual amount of RAM, in kilobytes, that the process currently occupies.

The screenshot shows that multiple node processes each have VSZ = 2051760 KB, roughly 2 GB in binary units.
Together these processes use more than 8 GB of memory against the 7 GB allocated, hence your build hit the memory limit.
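The unit conversion behind the 2 GB figure, as a quick check (ps reports VSZ in kilobytes, and 1 GiB = 1048576 KiB):

```shell
# Convert the reported VSZ of 2051760 KiB to GiB.
awk 'BEGIN { printf "%.2f GiB per process\n", 2051760 / 1048576 }'
# -> 1.96 GiB per process, i.e. roughly 2 GB each
```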

The 2x option gives your build 8 GB of memory, but 1 GB of that is reserved, leaving 7 GB usable. Your build currently uses more memory than it is allocated, hence it fails with the memory exceeded error.

The workaround is to limit the memory usage of your processes with --max-old-space-size so the total stays below the 7 GB limit, or to reduce the number of node processes running at a time from 7 to fewer than 4.
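Applied to the step definition from the question, the caps might look like this. This is only a sketch: the 4096 MB heap cap and the 3-worker limit are illustrative values, and --maxWorkers assumes Superset's frontend tests run under Jest.

```yaml
- step: &runTest
    name: Run Tests
    size: 2x
    caches:
      - node
    script:
      - cd superset-frontend
      - npm ci
      # Cap each Node process's V8 heap (value in MB); NODE_OPTIONS is
      # inherited by every Node process npm spawns.
      - export NODE_OPTIONS="--max-old-space-size=4096"
      # Fewer concurrent Jest workers means fewer node processes at once.
      - npm run test -- --maxWorkers=3
```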

I hope this helps.

Cheers,
Syahrul

DEPLOYMENT TYPE
CLOUD