
Self-hosted runner on kubernetes memory failure

Ian Panzer
Nov 10, 2023

I have a self-hosted runner in an EKS cluster. The runner is online in Bitbucket and can run jobs as one would expect. However, when I try to run our compilation job, it fails due to lack of memory. Your first instinct might be to say that the node in the k8s cluster does not have enough memory. However, this same compilation ran successfully in Jenkins on the exact same node. In fact, we previously had to move this job to this larger node type because of memory failures, and it uses nearly all 8GB available.

As best as I can figure, when my container runs in the dind container within the runner pod, the amount of memory allocated to it is Docker's default of 2GB. Even when I define a service in my bitbucket-pipelines.yml telling Docker to use up to 6GB of memory per step, I still hit the memory issue. 6GB is the limit Bitbucket allows before it complains that there isn't enough memory for the other services.
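For context, the service override I tried looks roughly like this (values reconstructed from the description above; the 6144 is the 6GB mentioned):

definitions:
  services:
    docker:
      memory: 6144  # 6GB in MB; the most Bitbucket accepted before complaining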

Finally my question is this: How can I allocate more memory to my build process?

Thanks.

1 answer

Patrik S
Atlassian Team
Nov 14, 2023

Hello @Ian Panzer and welcome to the Community!

Regular pipeline steps receive 4GB of memory in total.

This memory can be distributed to the build container and any service containers defined in the step, with the caveat that at least 1GB of the total available for the step must be reserved for the build container. The build container is where the script commands you have added to your step are executed. Docker commands are executed in the docker service container.

As an example, in a regular step (4GB to distribute) where the most memory-consuming command is a docker build (executed in the docker service), you can define the docker service to use up to 3GB of memory, leaving the remaining 1GB (4GB step - 3GB docker service = 1GB) for the build container.
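As a sketch, that regular-step split could be configured like so (the docker build command is just a placeholder for whatever consumes the memory):

pipelines:
  default:
    - step:
        script:
          - docker build -t my-image .  # docker commands run in the docker service
        services:
          - docker

definitions:
  services:
    docker:
      memory: 3072  # 3GB of the step's 4GB; 1GB remains for the build container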

Similarly, you can distribute the memory according to your use case when using large steps. Builds running in Atlassian infrastructure can be configured up to size: 2x which will make the step have 8GB of memory to be distributed, so you could configure the docker service up to 7GB.

Steps that run on self-hosted runners can be configured with size: 2x, 4x, or 8x, for 8GB, 16GB, and 32GB of memory available in the step, respectively.
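For instance, a self-hosted step sized at 4x might be declared like this (the runs-on labels shown are the default self-hosted runner labels; adjust them to match your runner, and the build command is a placeholder):

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        size: 4x  # 16GB available to this step
        script:
          - ./compile.sh  # placeholder for your compilation command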

Following is an example YAML configuration for a docker service configured to use 7GB in a large (size: 2x) step:

pipelines:
  default:
    - step:
        size: 2x
        script:
          - echo "This step gets 8GB of memory!"
        services:
          - docker

definitions:
  services:
    docker:
      memory: 7168 # Assign 7GB (out of the step's 8GB) to the docker service

Hope that helps to clarify your question!

Thank you, @Ian Panzer!

Patrik S
