Hello, community!
I've been trying to solve this on my own, but after seeing how responsive and pleasant everybody is on here, I've decided to punt and hope that this is a procedure that somebody has already documented.
I have a few bare metal servers for heavy builds. Kernel compilation, ARM32v7, etc.
Getting the runner up and going is about the easiest thing in the world.
I would like to take full advantage of the resources available and was having trouble grokking the documentation.
My pipeline file is:
```
```
This configuration is working, but I arrived at it purely by trial and error. When using a larger size and/or more memory, my build would randomly fail with a "docker timeout" error; it would get partway through the process before dying, usually on a git pull.
My questions are:
How do the various settings interrelate? What effect does changing the "size" of a particular step have on the memory specified in the service definition? If I were to remove these settings and specify resources on the runner instantiation command (docker --whatevs), would that affect the size and memory specifications?
Does the "memory" field in the service definition refer to the amount of RAM available? I have 128GB of RAM and 4 Xeon CPUs available and I'd like to be able to use as much as possible for this part of the build.
The docs have been confusing, and I think I'm missing a core concept; I was hoping to get the information I need to wrap my head around this.
If there is documentation over and above what I've found at https://support.atlassian.com/bitbucket-cloud/docs/set-up-runners-for-linux-shell/ (and others) then I'd love to have a link to bookmark.
Thank you all!
Hello @Sam Koepnick and thank you for reaching out to Community!
In Bitbucket Pipelines, there are two different types of Linux runners: Linux Docker runners and Linux Shell runners.
With that in mind, the size and memory configurations in the YML file are only applicable if you are using the Linux docker runner.
The size attribute controls how much memory (RAM) is allocated to a step. A regular step (size: 1x) receives 4GB of memory, while a size: 2x step receives 8GB, and so on.
This memory is then distributed across all of the services you have defined in the step (as they run in separate containers). The memory attribute you configure for a service dictates how much of the step's available memory that service will take - by default, services are allocated 1GB of memory. The step's remaining memory is allocated to the build container (where your step's script runs).
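For illustration, here is a minimal sketch of how these settings might fit together in a bitbucket-pipelines.yml. The step name, labels, memory value, and build command are just assumptions for the example, not a recommendation:

```
definitions:
  services:
    docker:
      # Of the 8GB a size: 2x step receives, reserve 3GB for the
      # Docker service container (the default would be 1GB).
      memory: 3072

pipelines:
  default:
    - step:
        name: Heavy build        # example name
        size: 2x                 # step gets 8GB of memory in total
        runs-on:
          - 'self.hosted'
          - 'linux'              # labels for a Linux Docker runner
        services:
          - docker
        script:
          # The build container gets whatever memory the services
          # don't claim - roughly 5GB in this sketch.
          - make -j4             # placeholder build command
```

Raising the step size increases the step's total memory, but a service keeps its configured share unless you also raise its memory value.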
You can find more details and examples of how memory allocation works in our Service memory limits documentation.
Modifying the runner's instantiation arguments will only affect the runner's container, not the build containers: the runner spins up child containers to run your build, and those arguments are not passed on to the child containers.
Currently, when using Linux docker self-hosted runners, you can configure a step up to size: 8x, which will make 32GB of memory available to that step.
If you want to use more resources from the host machine, you could try using Linux shell runners, as they will run directly in a terminal session - with no containers involved - so they will have access to the full resources of the host.
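A step targeting a Linux shell runner would look something like the sketch below. Note that size and service memory settings are omitted, since they only apply to the Docker runner; the labels shown assume the standard self.hosted / linux.shell labels and the build command is a placeholder:

```
pipelines:
  default:
    - step:
        name: Full-resource build   # example name
        runs-on:
          - 'self.hosted'
          - 'linux.shell'           # labels for a Linux shell runner
        script:
          # Runs directly on the host, so it can use all of the
          # machine's RAM and CPUs.
          - make -j"$(nproc)"       # placeholder build command
```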
Hope that helps! If you have any questions, let us know :)
Thank you, @Sam Koepnick!
Patrik S