I have a self-hosted runner in an EKS cluster. The runner shows as online in Bitbucket and can run jobs as expected. However, when I try to run our compilation job, it fails due to lack of memory. Your first instinct might be to say that the node in the k8s cluster does not have enough memory, but this same compilation ran successfully in Jenkins on the exact same node. In fact, we previously had to move this job to this larger node type because of memory failures, and it uses nearly all 8GB available.
As best as I can figure, when my container runs inside the dind container within the runner pod, the amount of memory allocated to my container is Docker's default of 2GB. When I define a service in my bitbucket-pipelines.yml to tell Docker to use up to 6GB of memory per step, I still hit the memory issue. 6GB is the limit Bitbucket will let me set before it complains that there isn't enough memory for the other services.
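For reference, this is roughly the service override I'm using (a rough sketch; 6144 is 6GB expressed in MB):

definitions:
  services:
    docker:
      memory: 6144 # 6GB in MB; Bitbucket rejects anything higher at this step size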
Finally my question is this: How can I allocate more memory to my build process?
Thanks.
Hello @Ian Panzer and welcome to the Community!
The way pipeline memory is allocated is that regular steps receive 4GB of memory in total.
This memory can be distributed to the build container and any service containers defined in the step, with the caveat that at least 1GB of the total available for the step must be reserved for the build container. The build container is where the script commands you have added to your step are executed. Docker commands are executed in the docker service container.
As an example, in a regular step (4GB to be distributed) where the most memory-consuming command is a docker build (which runs in the docker service), you can define the docker service to use up to 3GB of memory, leaving the remaining 1GB (4GB step - 3GB docker service = 1GB) for the build container.
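A minimal sketch of that split would look like this (the image name in the script is just a placeholder):

pipelines:
  default:
    - step:
        script:
          - docker build -t my-image . # runs in the docker service (up to 3GB)
        services:
          - docker
definitions:
  services:
    docker:
      memory: 3072 # 3GB of the step's 4GB; 1GB remains for the build container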
Similarly, you can distribute the memory according to your use case when using large steps. Builds running in Atlassian infrastructure can be configured up to size: 2x, which gives the step 8GB of memory to be distributed, so you could configure the docker service with up to 7GB.
Steps that run on self-hosted runners can be configured to 2x, 4x, or 8x, for 8GB, 16GB, and 32GB of memory available in the step, respectively.
Following is an example YAML configuration for a docker service configured to use 7GB in a large (size: 2x) step:
pipelines:
  default:
    - step:
        size: 2x
        script:
          - echo "This step gets 8GB of memory!"
        services:
          - docker
definitions:
  services:
    docker:
      memory: 7168 # Assign 7GB (out of the step's available memory) to the docker service
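On a self-hosted runner you can go further. Here is a sketch of a size: 4x step (16GB) with 15GB given to the docker service; the runs-on labels shown are the default self-hosted labels, so replace them with your own runner's labels:

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        size: 4x
        script:
          - docker build -t my-image . # executed in the docker service
        services:
          - docker
definitions:
  services:
    docker:
      memory: 15360 # 15GB of the 16GB step; at least 1GB stays with the build container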
Hope that helps to clarify your question!
Thank you, @Ian Panzer!
Patrik S