I am currently trying to migrate my Bitbucket Pipelines builds to run on a local server. My pipeline has 3 main parallel executions, and each of them has 17+ steps that should run in parallel.
It seems that I need as many self-hosted runners as I have parallel steps.
The problem is that once I run 25 runners in parallel, each step in the parallel section takes much longer.
A step that takes 6 minutes on Bitbucket Pipelines cloud now takes 17-20 minutes on my local server.
I am using the Linux Docker runner.
The host server has 128 GB of RAM and a 20-core CPU.
The Docker environment has its resources scaled up to allow usage of the 20 CPUs and 96 GB of memory.
The host server runs 25 runners.
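For reference, what the Docker engine itself sees can be checked with something like this (a quick sketch using the docker CLI on the host):

```bash
# Print the CPU count and total memory that the Docker engine actually sees,
# which may differ from the host's physical hardware if Docker runs in a VM.
docker info --format 'CPUs: {{.NCPU}}'
docker info --format 'Memory: {{.MemTotal}} bytes'
```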
Hi Tibor,
A runner can indeed run only one step at a time. If you want to execute steps in parallel, multiple runners need to be set up.
We have a feature request to allow a runner to execute multiple steps: https://jira.atlassian.com/browse/BCLOUD-21383
You can add your vote and comment to it to express your interest.
Regarding the build time, this is something that depends on the host system's resources. If you have 25 runners on one server, then the system's resources are shared among the runners.
Kind regards,
Theodora
Thank you for the reply!
The host machine is a Mac Studio with 20 CPUs and 128 GB of RAM, connected to the network via a 10 Gbit Ethernet cable.
The host Docker has access to all these resources, yet I see a sharp decline in performance when using more than 2 runners in parallel. The degradation is mostly on the network side: git clone and image pulls take much longer than they do when the steps run sequentially.
My feeling is that the runners are not using the full power of the host machine. Is there a flag that I need to set somewhere?
Best regards,
Tibor
Hi Tibor,
There is no flag to set on the runners that affects performance. macOS should have tools that show CPU and memory usage, so you can check whether it is close to the limit when multiple builds are running. I believe that Docker Desktop for macOS runs inside a virtual machine, so I'm not sure if that is limiting the performance.
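For example, a one-off snapshot from the command line while several builds are running can show how the resources are being shared across the runner containers (a minimal sketch, assuming the docker CLI is available on the host):

```bash
# Snapshot per-container CPU and memory usage while multiple builds run,
# to see whether the runner containers are close to the engine's limits.
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'
```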
Please keep in mind that you are using a platform we do not officially support. For Linux Docker runners we only support the following:
Linux with x64 architecture and a Linux kernel v4.0.0+. The runners have been tested on the following Linux distributions:
- Ubuntu 22.04
- Debian 11
- CentOS 7
- Fedora 36
- Oracle Linux 8.6
- Amazon Linux 2
If you have a Linux host available, my suggestion would be to try the runners on that instance.
Kind regards,
Theodora
Thanks for the reply! Indeed, after investigating and experimenting with different ways to speed things up, I came to the same conclusion: the virtualization in Docker Desktop for Mac was the problem.
I enabled VirtioFS (https://www.docker.com/blog/speed-boost-achievement-unlocked-on-docker-desktop-4-6-for-mac/) and this indeed made a big difference, but at the same time it introduced a new problem:
- the pipeline steps create and modify files in the mapped volumes using different Docker containers (build setup, build, build teardown), which leads to file system permission errors
I will try to fix the permission problems, or alternatively port to a Linux host.
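The rough idea for the permission fix looks something like this (the path is just a placeholder, not the runner's actual build directory):

```bash
# Check which numeric UIDs own the files that the different step containers
# (build setup, build, build teardown) wrote to the mapped volume.
ls -ln /path/to/mapped/volume

# Possible workaround: relax permissions on the shared tree after a step
# writes its output, so a container running as a different user can still
# read and clean it up.
chmod -R a+rwX /path/to/mapped/volume
```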
Best regards and thank you for the help!
Tibor
Hi Tibor,
Thank you for the update and you are very welcome!
Please feel free to reach out if you ever need anything else!
Kind regards,
Theodora