Hi,
I'd like to ask about a memory overflow problem in my pipeline.
I've tried a few things, but they didn't help. Sometimes the build passes and sometimes it doesn't, which is a mystery to me.
Can you please give me some advice?
Thank you very much.
options:
  max-time: 30
  docker: true
  size: 2x
clone:
  depth: full
pipelines:
  default:
    - step:
        name: Build client app
        image: node:14-slim
        caches:
          - npm
        script:
          - cd client/app
          - npm ci
          - npm run build
          - npm run lint
    - step:
        name: Build server app
        image: node:14-slim
        caches:
          - npm
        script:
          - cd server/app
          - npm ci
          - npm run build
definitions:
  caches:
    npm: $HOME/.npm
  services:
    docker:
      memory: 6144
Hi, @frantisek_lorenc, welcome to the community!
I've analyzed your build setup and noticed that you are currently allocating 6 GB to your docker service, which leaves only 2 GB for your build container. This seems to be the reason why your build is failing intermittently due to memory issues: the amount of memory a build needs varies with the changes you make to your code, which is why the error only appears on some runs.
Just to give you some background on how memory is allocated in the build container: regular steps have 4096 MB of memory in total, and large build steps (which you can define using size: 2x) have 8192 MB in total. Service containers (like the docker service) get 1024 MB of memory by default, but can be configured to use between 128 MB and the step maximum (3072/7128 MB).
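For illustration, overriding a service's memory is done under definitions > services in your bitbucket-pipelines.yml. The 2048 value below is just an example within the allowed range, not a recommendation for your build:

definitions:
  services:
    docker:
      # default is 1024 MB; valid range is 128 MB up to the step maximum
      # (3072 MB for regular steps, 7128 MB for size: 2x steps)
      memory: 2048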
Currently, your bitbucket-pipelines.yml file sets the pipeline to use size: 2x, as per the following section:
options:
  max-time: 30
  docker: true
  size: 2x
Since this is configured as size: 2x, each step is provided with 8192 MB of memory, of which 6144 MB is reserved for the docker service, leaving 2048 MB for the build container that runs your script commands. Since you reported that the build sometimes passes and sometimes doesn't, it looks like this memory is not sufficient for the build to complete, hence the error you are getting.
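To spell out the arithmetic for each step:

8192 MB (size: 2x step total) - 6144 MB (docker service) = 2048 MB (build container)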
That being said, I would suggest you decrease the memory allocated to the docker service by using the following definition in your YML file:
docker:
  memory: 5120
This will allocate 5120 MB to the docker service, leaving 3072 MB for the build container. You can test with 5120 first, and if the build still fails, keep decreasing this value until your build is stable.
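For reference, based on the definitions section you already have in your file, only the memory value changes:

definitions:
  caches:
    npm: $HOME/.npm
  services:
    docker:
      memory: 5120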
After adjusting your YML file, you can try running the build again and check if the issue is fixed.
You can also refer to the Pipelines documentation for more details about memory limits.
Do let us know if you run into any issues trying the suggestions above, or if you have any other questions.
Kind regards,
Caroline
Thank you, @Caroline R, for the helpful answer. As a follow-up: if I specify docker as a service in a step, what is the docker service responsible for running, as opposed to the build container? For reference, here is my configuration:
definitions:
  services:
    docker:
      memory: 3072
  steps:
    - build_and_push: &build_and_push
        name: Build and push
        image: amazon/aws-cli
        size: 2x
        script:
          - echo "Log in to AWS ECR"
          ...
          - echo "Build the SchedulerUI docker image"
          - docker build -t ${IMAGE} .
          - docker tag ${IMAGE} ${IMAGE}:${VERSION}
        services:
          - docker
        caches:
          - docker