I am trying to set up a runner on my AWS EKS Kubernetes cluster, and I can see that the runner requires Docker-in-Docker to do its work, which in turn requires the Docker socket (/var/run/docker.sock). However, from the Kubernetes side the Docker socket is no longer supported from version 1.20 onwards, so I was unable to run the Docker-in-Docker container. This seems to be a roadblock for setting up the runner. Without the runner running on my dedicated VPC, I can't build any of my code. Is there any alternative to fix this issue and run on EKS?
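For anyone hitting the same wall: the runner does not have to touch the node's /var/run/docker.sock. One workaround is to run a privileged docker:dind sidecar in the same pod and share its socket with the runner over an emptyDir. The sketch below is not an official manifest; the Secret name, UUID placeholders, container and volume names are illustrative, and the environment variable names are the ones the runner's docker run command uses, so verify them against the official docs:

```yaml
# Sketch: runner + privileged dind sidecar, no host docker.sock mount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bitbucket-runner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bitbucket-runner
  template:
    metadata:
      labels:
        app: bitbucket-runner
    spec:
      containers:
        - name: runner
          image: atlassian/bitbucket-pipelines-runner   # image name taken from this thread; pin a tag in practice
          env:
            - name: ACCOUNT_UUID
              value: "{your-account-uuid}"
            - name: RUNNER_UUID
              value: "{your-runner-uuid}"
            - name: OAUTH_CLIENT_ID
              valueFrom:
                secretKeyRef: { name: runner-oauth-credentials, key: oauthClientId }
            - name: OAUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef: { name: runner-oauth-credentials, key: oauthClientSecret }
            - name: WORKING_DIRECTORY
              value: "/tmp"
          volumeMounts:
            - { name: tmp, mountPath: /tmp }
            - { name: docker-socket, mountPath: /var/run }   # socket created by the dind sidecar
        - name: docker-in-docker
          image: docker:dind
          securityContext:
            privileged: true        # dind needs a privileged container instead of the host socket
          volumeMounts:
            - { name: tmp, mountPath: /tmp }
            - { name: docker-socket, mountPath: /var/run }
      volumes:
        - { name: tmp, emptyDir: {} }
        - { name: docker-socket, emptyDir: {} }
```

The trade-off is that the dind sidecar must run privileged, which your cluster's pod security settings have to allow.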
I'm running the Bitbucket Pipelines Runner in our Rancher-based Kubernetes cluster. It registers fine, and I am able to run basic Pipelines; everything goes well until a step of the Pipeline has to download a large file (in our case it was npm install).
I searched everywhere and found a possible issue with the network MTU.
The MTU on the server is 1500, but the Kubernetes network provider is using 1450. I tried to update the DIND configuration to use a lower MTU (1300).
Now if I attach to the Bitbucket Runner container (atlassian/bitbucket-pipelines-runner), I can see the following setting in `docker network inspect bridge`:
```
./docker network inspect bridge | grep mtu
    "com.docker.network.driver.mtu": "1300"
```
Sadly, if I attach from there to a pipeline container (or to the atlassian/bitbucket-pipelines-docker-daemon container), I still see the original value:
```
docker network inspect bridge | grep mtu
    "com.docker.network.driver.mtu": "1500"
```
Is there any way to change the MTU for the Bitbucket Pipelines docker daemon?
A runner spins up a dind container as part of the step setup. In your case, a default dind image was used with the default configuration. In order to use a custom Docker-in-Docker image, please check this documentation.
I've also noticed an error in your pipeline configuration that you shared before:
1. The Pipelines configuration doesn't support the `args` property. Try publishing a custom Docker image using a Dockerfile:
```Dockerfile
FROM docker:20.10.7-dind
ENTRYPOINT [ "sh", "-c", "dockerd-entrypoint.sh $DOCKER_OPTS" ]
```
2. If you want to use a custom name for a docker service, you should add a `type: docker` parameter:
```yaml
- name: docker-in-docker
  type: docker
  image: my-custom-dind-image  # see the Dockerfile example above
  variables:
    DOCKER_OPTS: "--mtu=1300"
```
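For reference, a step can then pick up that service by name in bitbucket-pipelines.yml. This is only a sketch: the step labels and image name are placeholders, and it assumes the service definition above is registered in your pipeline configuration:

```yaml
pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        services:
          - docker-in-docker          # the custom dind service with DOCKER_OPTS="--mtu=1300"
        script:
          - docker build -t my-app .  # the docker CLI now talks to the custom daemon running with MTU 1300
```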
Bitbucket Pipeline Runners are now in GA. You can now run multiple runners on the same machine.
As for the short-term roadmap, we are currently working on Windows Runners and then we will start work on non-docker/macOS runners. Please let me know if you have any further questions.
November 8, 2021
@Pavel Shurikov The team will work on macOS runners after shipping Windows Runners. macOS runners are non-containerized, and it should be possible to use them on a Linux machine. The current plan is to ship macOS runners in the first quarter of CY22. You can follow this ticket for updates.
@Justin Thomas Thanks for a speedy and helpful response! I'll keep an eye on the linked ticket and for now just muck around with docker-in-docker for our runners in EKS.
You mentioned some time ago that: "one can currently start a runner in a Kubernetes cluster, auto-scaling is not yet supported."
Indeed we can get a Bitbucket runner to work just fine in our Kubernetes cluster. However, the fact that there is no auto-scaling is a roadblock for us.
Is auto-scaling on the roadmap of future improvements for self-hosted runners deployed in Kubernetes?
If so, do you have an idea of when this feature will be available?
February 17, 2022
@Marcos De Melo Da Silva Yes, auto-scaling is on the roadmap. We are aiming to release it in Q2 of CY2022. We will most probably be doing an early release next month; if you are interested, please follow this ticket.
@Jeppe Rask - Yes, I found this out the hard way. I had to drastically refactor my containers to reduce the number of layers they have in order to keep the size under 1GB.
@plussier Thanks for the info! But that really sucks. Why can't it just rely on the runner machine keeping the Docker image cached? We probably won't go with Bitbucket Pipelines, then.
@Jeppe - I do NOT know whether that limit is customizable if you're running a private runner of your own. When relying on the Atlassian runners, you are definitely limited to 1GB images...
Bitbucket Pipelines, IMO, is better than Jenkins. But that's it. If I had my choice, and the budget, I'd move everything over to GitLab. Their system is WAAAAY better than Atlassian's from a feature perspective, as well as in general usability and integration. GitLab is also significantly more responsive to feature requests than Atlassian, IMO.
If you've got the choice to go with something better than BB pipelines, I recommend you do.