
nodejs 16 yarn build failed

Kyrylo Zlachevsky
I'm New Here
June 30, 2023

The project builds successfully locally.
When using Bitbucket Pipelines, the build fails with the following error:

+ yarn build:test
yarn run v1.22.19
$ env-cmd -f .env.test npm run build
> telemed-ts@0.1.0 build
> react-scripts build
Creating an optimized production build...
The build failed because the process exited too early. This probably means the system ran out of memory or someone called `kill -9` on the process.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command

I added memory to the Docker container as suggested in this thread, and then got a different error:

+ yarn build:test
yarn run v1.22.19
$ env-cmd -f .env.test npm run build
> telemed-ts@0.1.0 build
> react-scripts build
Creating an optimized production build...
/opt/atlassian/pipelines/agent/build/node_modules/react-scripts/scripts/build.js:19
throw err;
^
RpcIpcMessagePortClosedError: Process 147 exited [SIGKILL].
at /opt/atlassian/pipelines/agent/build/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:19:23
at Generator.next (<anonymous>)
at /opt/atlassian/pipelines/agent/build/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:8:71
at new Promise (<anonymous>)
at __awaiter (/opt/atlassian/pipelines/agent/build/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:4:12)
at ChildProcess.handleExit (/opt/atlassian/pipelines/agent/build/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:18:42)
at ChildProcess.emit (node:events:513:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12) {
code: null,
signal: 'SIGKILL'
}
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

But when I build the project on my local machine, yarn build:test works without any problem:

* Executing task: yarn run build:test

yarn run v1.22.19
$ env-cmd -f .env.test npm run build

> telemed-ts@0.1.0 build
> react-scripts build

Creating an optimized production build...
Compiled successfully.

File sizes after gzip:

1.23 MB (+3 B) build/static/js/main.98faeca5.js
1.83 kB build/static/css/main.5de7ade5.css

The bundle size is significantly larger than recommended.
Consider reducing it with code splitting: https://goo.gl/9VhYWB
You can also analyze the project dependencies: https://goo.gl/LeUzfb

The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.

The build folder is ready to be deployed.
You may serve it with a static server:

yarn global add serve
serve -s build

Find out more about deployment here:

https://cra.link/deployment

Done in 82.20s.

 Please help!

1 answer

0 votes
Theodora Boudale
Atlassian Team
July 3, 2023

Hi Kyrylo and welcome to the community!

The 'SIGKILL' signal makes me think that the build may still be failing because of memory issues.

The following apply regarding memory in Pipelines builds:

  • Regular steps have 4096 MB of memory in total; large build steps (which you can define using size: 2x) have 8192 MB in total (see the example step configuration after this list).
  • The build container is given 1024 MB of the total memory, which covers your build process and some Pipelines overheads (agent container, logging, etc).
  • For builds that do not use any services, the remaining memory is 3072/7128 MB for 1x/2x steps respectively.
  • If your step is using a service, service containers get 1024 MB memory by default, but can be configured to use between 128 MB and the step maximum (3072/7128 MB).
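
For reference, a step with increased memory could look roughly like the sketch below. This is only an illustration: the step name and memory values are placeholders, the build command is taken from your log, and the docker service override is only relevant if your step actually uses that service.

definitions:
  services:
    docker:
      memory: 3072          # optional: raise the docker service above the default 1024 MB

pipelines:
  default:
    - step:
        name: Build         # placeholder name
        size: 2x            # large step: 8192 MB in total
        services:
          - docker
        script:
          - yarn build:test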

I would suggest adding the following commands to your bitbucket-pipelines.yml file, at the beginning of the step that fails:

- while true; do date && ps -aux && sleep 5 && echo ""; done &
- while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 5; done &

These commands print memory-usage details in the build log while the build runs. They can help you figure out whether memory usage gets close to the limit before the build fails, and which processes consume the most memory.
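
In context, the beginning of the failing step could look something like this (the step name is a placeholder; the two monitoring commands are the ones above and the build command is taken from your log):

- step:
    name: Build             # placeholder name
    script:
      # memory diagnostics, printed to the build log every 5 seconds
      - while true; do date && ps -aux && sleep 5 && echo ""; done &
      - while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 5; done &
      # the command that currently fails
      - yarn build:test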


If the build runs out of memory even on a 2x step, it will need to be configured to use less memory in order to run in Pipelines.
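
As a general note (this is not specific to your setup, just common options for Create React App and Node), two things that often reduce the memory footprint of a react-scripts build are disabling source map generation and capping Node's heap, for example:

script:
  - export GENERATE_SOURCEMAP=false                  # CRA: skip source maps, a common source of memory pressure
  - export NODE_OPTIONS="--max-old-space-size=3072"  # cap the V8 old-space heap (value is an example)
  - yarn build:test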

If the build cannot be configured to use less than 8 GB of memory, you can look into using Runners on one of your own servers and run this step on a runner:

Runners allow you to run builds in Pipelines on your own infrastructure, and you won’t be charged for the build minutes used by your self-hosted runners. With a runner, it is possible to configure up to 32 GB (8x) of memory to run your builds, if the host machine has that much memory.
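
For example, a step that runs on a self-hosted Linux Docker runner could be declared roughly like this (the labels and the size are only an illustration; they depend on how the runner is configured and on how much memory the host machine has):

- step:
    runs-on:
      - self.hosted
      - linux
    size: 8x                # up to 32 GB, only if the runner host has that much memory
    script:
      - yarn build:test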

Kind regards,
Theodora
