Is there an ETA for the different parts currently in progress? I am currently waiting for the removal of the `max-time` restriction. Is there somewhere I can follow the status of this?
As a long-time, heavy Jenkins user, I was really surprised by this announcement on Bitbucket (especially since I was about to move to GitLab for exactly this feature).
I know it is still in beta and there is more work to do, but would it be possible to have a roadmap page, as we used to have for Atlassian products, showing "when" and "what" is coming?
So far I have tested the current release and the setup is really easy and straightforward, but there are some points I would still like to know about.
You mentioned in your earlier post having multiple runners on the same host and GPU access, but during my testing I found that there are no parallel steps: all parallel steps that use the self-hosted runner are queued and run in sequence, while the steps running in the cloud are not. The same happens when multiple repositories are updated at once. Is this also going to change?
I also tried removing the volume and restarting, but the issue still appears.
I suspect that the clone container is not working correctly with this entrypoint: `"exit $( (/usr/bin/mkfifo /tmp/tmp/clone_result && /bin/cat /tmp/tmp/clone_result) || /bin/echo 1)"`. Not sure though.
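To illustrate my suspicion, here is a minimal sketch of how I assume that entrypoint pattern works: the container blocks on `cat` reading a named pipe until some other process writes the clone result into it, and it hangs forever if nothing ever writes. The path and the background writer below are stand-ins I made up for the demonstration, not the real runner internals.

```shell
FIFO=/tmp/clone_result_demo   # stand-in for /tmp/tmp/clone_result
rm -f "$FIFO"

# Writer: simulates whatever process reports the clone's exit code, after 1s.
( sleep 1; echo 0 > "$FIFO" ) &

# Reader: the same shape as the clone container's entrypoint. `cat` blocks
# until the writer opens the pipe; with no writer it would block forever.
RESULT=$( (mkfifo "$FIFO" && cat "$FIFO") || echo 1 )
echo "clone result: $RESULT"
```

If the writer side never runs (for example because the clone itself failed to start), the reader stays blocked on `cat`, which would match a stuck clone container.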
I also tried mapping the host's `/tmp` to the container's `/tmp`, but then I get the following error from npm:
@lassian Thanks for finally implementing this feature; I am really looking forward to it. One question: how far along is the SSH keys work? It seems that currently the SSH keys configured in the repo are not passed to the runner instance. Is this correct? The posted roadmap says "ssh-keys in May". What is the status?
Not mounting /tmp into docker doesn't seem to make any difference. Moreover I see that when the job is launched, it seems to run a few more containers:
```
CONTAINER ID   IMAGE                                                                                           COMMAND                  CREATED              STATUS                        PORTS   NAMES
e34134bb2f12   docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-auth-proxy:prod-stable   "/bin/sh /usr/local/…"   About a minute ago   Up About a minute (healthy)           system-auth-proxy
f0fe3a7fed59   docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-dvcs-tools:prod-stable   "/bin/sh -c 'exit $(…"   About a minute ago   Up About a minute                     clone
41d80ab0543c   google/pause:latest                                                                             "/pause"                 About a minute ago   Up About a minute                     pause
0c79cfe21b71   docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner                   "/bin/sh -c -x ./ent…"   16 minutes ago       Up 16 minutes                         rol-dev-runner
```
rol-dev-runner is my own runner and the other three are spawned by the Bitbucket runner. Doing a `ps` on them I see this:
My self-hosted runner is stuck. Running `ps` I get the following output:
```
root 19535 0.0 0.0 113372 5216 ? Sl 17:41 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 507c1c545a907a1b05a4b08acdc1a9fcd671ad4a44d1473faebe724afa1a6693 -addre
root 19554 0.0 0.0   1552  248 ? Ss 17:41 0:00  \_ /bin/sh -c exit $( (/usr/bin/mkfifo /tmp/tmp/clone_result && /bin/cat /tmp/tmp/clone_result) || /bin/echo 1)
root 19591 0.0 0.0   1552  116 ? S  17:41 0:00      \_ /bin/sh -c exit $( (/usr/bin/mkfifo /tmp/tmp/clone_result && /bin/cat /tmp/tmp/clone_result) || /bin/echo 1)
root 19592 0.0 0.0   1484  252 ? S  17:41 0:00      \_ /bin/cat /tmp/tmp/clone_result
```
I am not sure why it is stuck here.
We created a self-hosted runner to run our pipelines, but we are not able to access internal links (resources) from the Docker containers created by the runner. We can access them if we add a host entry inside the container, but the containers are created on demand when a pipeline is triggered, so we cannot add the entry by hand. Can someone please help me resolve this issue?
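One possible direction, assuming your internal hostnames resolve via an internal DNS server: instead of adding host entries to each on-demand container, configure DNS at the Docker daemon level on the runner host, so every container it spawns inherits it. This is a sketch, not a confirmed fix; the file path is Docker's standard daemon config location and the `10.0.0.2` address is a placeholder for your internal DNS server.

```json
{
  "dns": ["10.0.0.2", "8.8.8.8"]
}
```

After editing `/etc/docker/daemon.json`, the Docker daemon needs to be restarted for the setting to take effect. Whether the runner's spawned containers pick this up in your setup is something you would need to verify.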
As you can see in the last command, for each runner I used the same command that Bitbucket provides, except that I removed `docker container` at the start, the `-it` option, and the `--name` argument, because named containers would be unable to autostart.
Known issue: a new runner container is created every time the server is restarted, and the old ones are kept.
You can clear old containers using:
```
docker container exec -it my-first-runner-dind docker system prune
docker container exec -it my-second-runner-dind docker system prune
```
If so, that can be problematic: labels are used to match a runner that can run a given step, and if it falls back to some default, the default runner may not have the capabilities of the runner whose labels match the step.
If what you want is to run on any runner regardless of custom labels, you can simply use the built-in label 'self.hosted'; then the step will run on any self-hosted runner.
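To make the label matching concrete, here is a sketch of a `bitbucket-pipelines.yml` using `runs-on`. The step names, the `linux` label, and the scripts are my own examples; only `self.hosted` is the built-in label mentioned above.

```yaml
pipelines:
  default:
    - step:
        name: Build on any self-hosted runner
        runs-on:
          - self.hosted        # built-in label: matches every self-hosted runner
        script:
          - echo "building"
    - step:
        name: Build on a specifically labelled runner
        runs-on:
          - self.hosted
          - linux              # example custom label assigned when registering the runner
        script:
          - echo "building on the linux-labelled runner"
```

A step with no `runs-on` at all would run on Bitbucket's cloud runners instead.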
Currently, if no `runs-on` is specified, the step runs on Bitbucket's internal cloud runners. I would like to be able to change this behaviour and choose, per project/workspace, which runner to use when there is no `runs-on` parameter.
Motivation: we have a lot of repositories and would like them all to be compiled/executed on our own runners. Right now that requires changing every step in every Bitbucket pipeline.
So that's by design: a step with no `runs-on` is backwards compatible with all existing steps and uses our cloud. This lets you mix and match cloud runners and self-hosted runners within a pipeline, based on the presence or absence of `runs-on`, and share caches, artifacts, etc. between them.
I can see your use case, but to confirm: you would like to enforce no cloud runners at all in that workspace/repository, and have the default in that case be whatever runner you have set up?
Hi @Justin Thomas , I understand that the self-hosted Bitbucket Pipeline runners will not be charged for their build minutes. Will there be a licensing system that charges users for the number of self-hosted runners they use (as Bamboo Server does at the moment)? Can you give some more insight into the expected costs of using self-hosted runners?
I managed to get a pod running with the 2 containers; however, setting the "runs-on" attribute to use the runner results in a volume mount error:
```
Status 500: {"message":"error creating aufs mount to /var/lib/docker/aufs/mnt/b443f67850b3ec86a884864e7fec319d482124fd4a7592461577d085c469286b-init: mount target=/var/lib/docker/aufs/mnt/b443f67850b3ec86a884864e7fec319d482124fd4a7592461577d085c469286b-init data=br:/var/lib/docker/aufs/diff/b443f67850b3ec86a884864e7fec319d482124fd4a7592461577d085c469286b-init=rw:/var/lib/docker/aufs/diff/d97807a81c540ec15035784f2b5bad8cefa8b889c2d008db873b18a6a7b8b1ce=ro+wh:/var/lib/docker/aufs/diff/1ebbacb2bbf0f1616505d881623badb3be59929cd7fc55f8e797b2b04fafaad2=ro+wh:/var/lib/docker/aufs/diff/76da949d3eaa7ac3aea6fc1cbc95f04d8cde204294232d521f0df9d527715b84=ro+wh,dio,xino=/dev/shm/aufs.xino: invalid argument"}
```
@Vytenis Ščiukas Thank you very much for your response. If I would like to deploy my artifact to the same server where my runner is hosted, how should I set up my deployment steps?
You can use Docker volumes, so `localdirectory` in these docs is your local path. What you need to test is what happens to the files after the runner is terminated. If the files are destroyed, you can always copy them to a location other than the working dir; you can also mount as many volumes as you want and deploy the files from the working dir to any of them.
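As a sketch of that idea (all paths and file names below are examples I made up, not from the runner docs): copy the artifact out of the working directory into another mounted volume before the step's container is torn down, so it survives on the host.

```shell
BUILD_DIR=/tmp/build     # stands in for the runner's working directory
DEPLOY_DIR=/tmp/deploy   # stands in for a host path mounted with -v

mkdir -p "$BUILD_DIR" "$DEPLOY_DIR"
echo "artifact" > "$BUILD_DIR/app.tar.gz"   # stand-in for a real build artifact

# Copy out of the working dir so the files survive container termination
cp "$BUILD_DIR"/app.tar.gz "$DEPLOY_DIR"/
ls "$DEPLOY_DIR"
```

On the host, `/tmp/deploy` (the mounted path) would still contain `app.tar.gz` after the runner removes the step container, which is exactly the behaviour worth testing first.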