
yarn build - error Command failed with exit code 137 - Bitbucket Pipelines out of memory - Using max

Drew Gallagher March 10, 2023

Our React app is configured to build and deploy using the CRA scripts and Bitbucket Pipelines.

Most of our builds are failing when running yarn build, with the following error:

error Command failed with exit code 137.

This is an out of memory error.

We tried setting GENERATE_SOURCEMAP=false as a deployment environment variable (per https://create-react-app.dev/docs/advanced-configuration/), but that did not fix the issue.

We also tried setting the max memory available for a step by running the following:

node --max-old-space-size=8192 scripts/build.js

Increasing to the maximum memory did not resolve the issue.
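For reference, combining both attempts in the build step looks roughly like this (a sketch; in our setup GENERATE_SOURCEMAP was actually set as a deployment variable rather than inline):

- step:
    name: Build
    size: 2x
    script:
      - yarn
      # sketch: disable source maps and raise the Node heap limit inline
      - GENERATE_SOURCEMAP=false NODE_OPTIONS="--max-old-space-size=8192" NODE_ENV=${BUILD_ENV} yarn build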

This is blocking our development and we aren't sure what to do to resolve the issue.

We could move to a new CI/CD service but that is a lot more work than desired.

Are there other solutions that could solve this problem?

 

image: node:14

definitions:
  steps:
    - step: &test
        name: Test
        script:
          - yarn
          - yarn test --detectOpenHandles --forceExit --changedSince $BITBUCKET_BRANCH
    - step: &build
        name: Build
        size: 2x
        script:
          - yarn
          - NODE_ENV=${BUILD_ENV} yarn build
        artifacts:
          - build/**
    - step: &deploy_s3
        name: Deploy to S3
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.8
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              S3_BUCKET: $S3_BUCKET
              LOCAL_PATH: "./build/"
              ACL: 'public-read'
    - step: &auto_merge_down
        name: Auto Merge Down
        script:
          - ./autoMerge.sh stage || true
          - ./autoMerge.sh dev || true
  caches:
    jest: /tmp/jest_*
    node-dev: ./node_modules
    node-stage: ./node_modules
    node-release: ./node_modules
    node-prod: ./node_modules

pipelines:
  branches:
    dev:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-dev
                  - jest
                <<: *test
            - step:
                caches:
                  - node-dev
                <<: *build
                deployment: Dev Env
      - step:
          <<: *deploy_s3
          deployment: Dev
    stage:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-stage
                  - jest
                <<: *test
            - step:
                caches:
                  - node-stage
                <<: *build
                deployment: Staging Env
      - step:
          <<: *deploy_s3
          deployment: Staging
    prod:
      - parallel:
          fail-fast: true
          steps:
            - step:
                caches:
                  - node-prod
                  - jest
                <<: *test
            - step:
                caches:
                  - node-prod
                <<: *build
                deployment: Production Env
      - parallel:
          steps:
            - step:
                <<: *deploy_s3
                deployment: Production
            - step:
                <<: *auto_merge_down

2 answers

1 accepted

0 votes
Answer accepted
Drew Gallagher March 20, 2023

It turns out the terser-webpack-plugin package was running the maximum number of jest workers during our yarn build step, causing the out-of-memory error.

https://www.npmjs.com/package//terser-webpack-plugin 

After removing that plugin from our package.json, the build no longer fails and the jest workers are no longer spawned during the build.
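For anyone hitting the same thing, the removal itself is just a dependency change (a sketch, assuming yarn manages the dependency):

yarn remove terser-webpack-plugin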

I'm not sure why terser-webpack-plugin is running tests during the build step.

This seems incorrect and is causing our pipeline, and likely others, to run out of memory.

I created an issue with them and attached the logs:

https://github.com/webpack-contrib/terser-webpack-plugin/issues/552

1 vote
Theodora Boudale
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
March 13, 2023

Hi @Drew Gallagher,

Steps with size: 2x have 8192 MB of memory in total. Part of that total (around 1024 MB) is used by the build setup and some Pipelines overheads (agent container, logging, etc), so the remaining memory available for your build is 7128 MB.

This is mentioned in our Pipelines documentation.

I would suggest reducing --max-old-space-size to 7128 MB or less.
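For example, with the build invoked as in your earlier command (a sketch):

node --max-old-space-size=7128 scripts/build.js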

You can use the following commands at the beginning of the script in your bitbucket-pipelines.yml file in order to print memory usage throughout the build and see what is consuming memory:

- while true; do ps -aux && sleep 5; done &
- while true; do echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && sleep 0.1; done &
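Placed in your build step, that would look roughly like this (a sketch based on the step you shared):

- step:
    name: Build
    size: 2x
    script:
      # background monitors: process list every 5 seconds, cgroup memory usage every 0.1 seconds
      - while true; do ps -aux && sleep 5; done &
      - while true; do echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && sleep 0.1; done &
      - yarn
      - NODE_ENV=${BUILD_ENV} yarn build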

If a certain step needs more memory and it is not possible to run it within the memory available in Pipelines, you can look into using one of our self-hosted runners.

With a runner, you can have certain steps run on one of your own servers and you can configure up to 32GB (8x) of memory for these steps. You won’t be charged for the build minutes used by your self-hosted runners and you will still be able to view the build logs on the Pipelines page of the repository.
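A step is routed to a self-hosted runner with runs-on labels, roughly like this (a sketch; the labels must match the ones assigned when the runner is registered, and the runner host needs enough memory for the size you pick):

- step:
    runs-on:
      - 'self.hosted'
      - 'linux'
    size: 8x   # up to 32 GB of memory on a self-hosted runner
    script:
      - yarn
      - NODE_ENV=${BUILD_ENV} yarn build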

We have a feature request for the ability to increase memory in Pipelines that run in our own infrastructure to more than 8GB. You can vote for it and leave a comment there to express your interest: https://jira.atlassian.com/browse/BCLOUD-17260

Please feel free to let me know if you have any questions.

Kind regards,
Theodora

Drew Gallagher March 13, 2023

@Theodora Boudale thanks for the help! 

I reduced the Node --max-old-space-size to 7128 MB and added the logging you mentioned. 

Here is a link to the logs produced by that pipeline run:

https://drive.google.com/file/d/1yJx4pHxHEPqgC5ORZJtV7ehuAbQI6VpU/view?usp=sharing

I'm confused why the jest workers are still running, even though they belong to a different step that runs in parallel and has already completed.

The jest test runs are in a different step than the build, so I would think they would be separate. 

Could this be a bug within Bitbucket Pipelines, or something we have to terminate manually in jest?

Drew Gallagher March 13, 2023

@Theodora Boudale after adding

--forceExit

to the jest test run, the pipeline passed and was able to clear enough memory. 

Here are the logs of that run. 

Let me know if you think that seems to be the cause. 

https://drive.google.com/file/d/1Tr4CAYDG6yrykUf9BnV2Qf0RIiWlkpkq/view?usp=sharing 

Theodora Boudale
Atlassian Team
March 14, 2023

Hi @Drew Gallagher,

Based on the build logs you shared, it looks like the jest workers are consuming a lot of memory. You said that the pipeline is passing now, so I assume the build is successful after you used --forceExit?

If you ever get memory issues on steps that use jest workers, I would suggest checking the following KB article for a suggestion on how to mitigate this, specifically Scenario 2.2: "Builds using Jest Test Framework are slow or frequently hang (based on the Pipeline build minutes consumption) or failed with Container “Build” exceeded memory limit error".
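A common mitigation is to cap the number of jest workers, for example (a sketch adapting your existing test command; the KB article has the full details):

- yarn test --detectOpenHandles --forceExit --maxWorkers=2 --changedSince $BITBUCKET_BRANCH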

Kind regards,
Theodora

Drew Gallagher March 14, 2023

Hey @Theodora Boudale ,

 

It passed once, but unfortunately after that it did not pass. 

Even with --forceExit, it's still running jest workers during the build phase, which is odd considering the test and build steps are separate. 

Do you know why this might be happening?


I will check out that article.

Drew Gallagher March 14, 2023

@Theodora Boudale it looks like it's not the jest tests our team is running, but rather another node module that is creating the workers. 

I added 

--maxWorkers=2 

to both the build and test steps, and it still failed.

Here is the latest log

https://drive.google.com/file/d/1qOHxu4j2NOOkrbxS4TZiz7WXj4HzotgO/view?usp=sharing

Here is a log of the plugin using the jest workers in the build

root        6554  126 11.1 4456796 3625284 ?     Rl   14:24   4:37 /usr/local/bin/node --max-old-space-size=7128 scripts/build.js --maxWorkers=2

root 6627 59.9 4.0 1849796 1311736 ? Sl 14:24 2:11 /usr/local/bin/node --max-old-space-size=2048 /opt/atlassian/pipelines/agent/build/node_modules/fork-ts-checker-webpack-plugin/lib/service.js

root 14031 3.2 0.1 599048 62116 ? Sl 14:27 0:00 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14047 4.1 0.1 596776 55456 ? Sl 14:27 0:01 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14062 6.0 0.2 603204 68060 ? Sl 14:27 0:01 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14077 3.8 0.1 600604 54836 ? Sl 14:27 0:00 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14088 3.6 0.1 601188 62968 ? Sl 14:27 0:00 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14103 3.2 0.1 594004 54764 ? Sl 14:27 0:00 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 14118 0.7 0.1 586000 35152 ? Sl 14:27 0:00 /usr/local/bin/node --max-old-space-size=7128 /opt/atlassian/pipelines/agent/build/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/processChild.js

root 15016 0.0 0.0 2296 752 ? S 14:27 0:00 sleep 0.1

root 15017 0.0 0.0 7640 2652 ? R 14:27 0:00 ps -aux
Theodora Boudale
Atlassian Team
March 15, 2023

Hi Drew,

The latest output you posted shows that there are still 7 workers running, so it looks like the --maxWorkers=2 you added is not being applied.

I would suggest the following steps:

1) First, look into the configuration files of your source code for any default values for the number of workers that might be overriding the value you set. Adjust these files and see whether the build then runs with fewer workers and whether the issue is resolved.

2) If the issue is not resolved with step 1), try reducing --max-old-space-size to a lower value.

Kind regards,
Theodora

Drew Gallagher March 15, 2023

@Theodora Boudale the maxWorkers shouldn't apply in the build step because we are not explicitly running any tests there. 

The test step is using jest so that would only apply there. 

I think another node module called terser-webpack-plugin is running tests during the build, and since it is a separate npm package we don't have a way to manipulate its worker count.

How can we resolve this?

Theodora Boudale
Atlassian Team
March 15, 2023

Hi Drew,

I am not familiar with the plugin; I would suggest finding its official documentation and seeing whether there is an issue tracker or a forum for questions related to the plugin. If there is, you can ask there whether it is possible to adjust the number of jest workers it uses via a configuration file.

If this isn't possible, you can try reducing the value of --max-old-space-size.

If the step cannot be configured to use less than the available memory, you can look into using a self-hosted runner for that specific step (I provided more details about this in my first reply).

Kind regards,
Theodora

Drew Gallagher March 20, 2023

Hey @Theodora Boudale 

It turns out the terser-webpack-plugin package was running the maximum number of jest workers during our yarn build step, causing the out-of-memory error.

https://www.npmjs.com/package//terser-webpack-plugin 

After removing that plugin from our package.json, the build no longer fails and the jest workers are no longer spawned during the build.

I'm not sure why terser-webpack-plugin is running tests during the build step.

This seems incorrect and is causing our pipeline, and likely others, to run out of memory.

I created an issue with them and attached the logs:

https://github.com/webpack-contrib/terser-webpack-plugin/issues/552

Theodora Boudale
Atlassian Team
March 21, 2023

Thank you for the update, Drew. It's good to hear that there are no memory issues now that the plugin has been removed. Thank you also for sharing the issue you created in the plugin's issue tracker.

Please feel free to reach out if you ever need anything else.
