Bitbucket Pipelines Runners is now in open beta

We are excited to announce the open beta program for self-hosted runners: Bitbucket Pipelines Runners is now available to everyone. Please try it out and let us know your feedback. If you run into any issues with runners, please raise a support ticket.

Runners allow you to execute Bitbucket Pipelines builds on your own infrastructure, and you won’t be charged for the build minutes used by your self-hosted runners.
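
For example, a step is routed to a self-hosted runner by adding runs-on labels to it in bitbucket-pipelines.yml. A minimal sketch (the step name and script are illustrative; self.hosted and linux are the system labels applied to Linux runners):

pipelines:
  default:
    - step:
        name: Build on a self-hosted runner
        runs-on:
          - self.hosted
          - linux
        script:
          - echo "This step runs on your own infrastructure"

Steps without runs-on labels continue to run on Atlassian's infrastructure and consume build minutes as usual.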

Over the last few months, the early access group provided a lot of great feedback. This has helped us prioritize and build the roadmap for runner features.

Runners roadmap*:

  • ECR and SSH keys: May 2021
  • Workspace runners: June 2021
  • Remove limits on runners: July 2021
    • CPU/Memory
    • Multiple runners per machine
    • Configurable docker daemon
  • Windows runners: Q3 CY21

Documentation: https://support.atlassian.com/bitbucket-cloud/docs/runners/

Thanks - I’m looking forward to shaping the future of Atlassian products with you!

* The roadmap contains forward-looking statements which involve uncertainties when providing estimated release dates and descriptions for commercial features. All information regarding forward-looking statements involves known and unknown risks and uncertainties, and is subject to change.

 

Note: for updates on runners, please follow the ticket https://jira.atlassian.com/browse/BCLOUD-16995

We're happy to announce that we have released workspace-level runner configuration as part of the self-hosted runners open beta. Please try it out and let us know any feedback. For more details, check out our public documentation.

We have also started looking into adding support for higher memory and CPU, multiple runners per machine, a configurable Docker daemon, and other improvements that will provide more flexibility and control.

113 comments

Ingmar May 11, 2021

Very nice, looking forward to using this! One question though:

On Set up and use runners for Linux it says that "Currently, we only support running one runner per host machine." - does that mean that only a single pipeline can execute concurrently?

Justin Thomas (Atlassian Team) May 11, 2021

@Ingmar Yes, a runner can execute only one step at a given time. We currently have a restriction where you cannot start multiple runners on a single machine; we will be adding support for that in July. Thanks for trying the runner.

Rochak Saini May 12, 2021

Hi @Justin Thomas, any plans to provide the ability for the runner to access a GPU? Our use cases are deep learning based, and so are our test cases.

Colin Panisset (Cevo) May 12, 2021

@Justin Thomas at the moment it appears that runners have to be set up per repository, which, combined with the limitation of one runner per machine, means that we would have to dedicate a specific machine to a specific repo. Will workspace runners address this problem?

Ovidiu Gabriel May 12, 2021

@Justin Thomas This solution finally allowed us to run end-to-end integration tests on our real devices.

Good job!

Martin Cassidy May 12, 2021

@Justin Thomas this is excellent. Is there a standard channel to send feedback/bug reports?

 

Dan Foster May 12, 2021

This is exciting! Is it possible to share a runner between multiple repositories? Ideally it would be set up as a runner at the workspace level, so it can be used by any repository owned by that workspace.

Jochen Rieger May 12, 2021

@Dan Foster Seems to be on the Roadmap for June... looking forward to this extension, too!

Jon Warden May 12, 2021

Great news - looking forward to seeing much progress on runners in the coming months.

Paul Fox May 12, 2021

We do a lot of hardware-in-the-loop testing, so Docker is a hindrance rather than a help. Will it be possible to run pipes on local runners without using Docker in the future?

Justin Thomas (Atlassian Team) May 12, 2021

@Rochak Saini As part of removing the limits from self-hosted runners we will be exploring the option to allow access to GPUs. Thanks for your feedback.

Justin Thomas (Atlassian Team) May 12, 2021

@Colin Panisset (Cevo) Yes, workspace runners will allow you to use a single runner across multiple repositories, and we will also soon remove the limitation of one runner per machine.

Justin Thomas (Atlassian Team) May 12, 2021

@Martin Cassidy Please use the community to provide your feedback, but if you run into any bugs or have issues using the runner, please raise a support ticket.

Justin Thomas (Atlassian Team) May 12, 2021

@Paul Fox Allowing shell access to the host machine is on our long-term roadmap; we will start exploring it after Windows runners. Thanks for your feedback.

Brad Vrabete May 13, 2021

Any plans for having the runner hosted in a local Kubernetes cluster? That would greatly enhance the flexibility (and still allow use cases such as using the GPU if the cluster has that capability).

Sander Mol May 17, 2021

@Justin Thomas As we are now running this on our own hardware, would it be possible to support long-running processes that exceed the 2h `max-time` limit that is in place for normal pipelines?

Justin Thomas (Atlassian Team) May 17, 2021

@san As part of removing the limits on runners, we will also be looking at removing the max-time limit.

Justin Thomas (Atlassian Team) May 17, 2021

@Brad Vrabete You can currently start a runner in a Kubernetes cluster, though auto-scaling is not yet supported. Here is an example Kubernetes spec for a self-hosted runner. Please let me know your feedback.

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: runner-oauth-credentials
      # labels:
      #   accountUuid: # Add your account uuid to optionally allow finding the secret for an account
      #   repositoryUuid: # Add your repository uuid to optionally allow finding the secret for a repository
      #   runnerUuid: # Add your runner uuid to optionally allow finding the secret for a particular runner
    data:
      oauthClientId: # add your base64 encoded oauth client id here
      oauthClientSecret: # add your base64 encoded oauth client secret here
  - apiVersion: batch/v1
    kind: Job
    metadata:
      name: runner
    spec:
      template:
        # metadata:
        #   labels:
        #     accountUuid: # Add your account uuid to optionally allow finding the pods for an account
        #     repositoryUuid: # Add your repository uuid to optionally allow finding the pods for a repository
        #     runnerUuid: # Add your runner uuid to optionally allow finding the pods for a particular runner
        spec:
          containers:
            - name: runner
              image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
              env:
                - name: ACCOUNT_UUID
                  value: # Add your account uuid here
                - name: REPOSITORY_UUID
                  value: # Add your repository uuid here
                - name: RUNNER_UUID
                  value: # Add your runner uuid here
                - name: OAUTH_CLIENT_ID
                  valueFrom:
                    secretKeyRef:
                      name: runner-oauth-credentials
                      key: oauthClientId
                - name: OAUTH_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: runner-oauth-credentials
                      key: oauthClientSecret
                - name: WORKING_DIRECTORY
                  value: "/tmp"
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true # the runner only needs to read these files, never write to them
                - name: var-run
                  mountPath: /var/run
            - name: docker-in-docker
              image: docker:20.10.5-dind
              securityContext:
                privileged: true # required to allow docker in docker to run, and assumes the namespace you're applying this to has a pod security policy that allows privilege escalation
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                - name: var-run
                  mountPath: /var/run
          restartPolicy: OnFailure # this allows the runner to restart locally if it were to crash
          volumes:
            - name: tmp # required to share a working directory between docker in docker and the runner
            - name: docker-containers # required to share the containers directory between docker in docker and the runner
            - name: var-run # required to share the docker socket between docker in docker and the runner
      # backoffLimit: 6 # this is the default and means it will retry up to 6 times, with an exponential backoff in between, if it crashes before it considers itself a failure
      # completions: 1 # this is the default; the job should ideally never complete, as the runner never shuts down successfully
      # parallelism: 1 # this is the default; there should only be one instance of this particular runner
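
To try this, save the spec to a file (for example runner.yaml, the name is only illustrative), replace the UUID and OAuth placeholders with the values shown when the runner is registered in Bitbucket, and apply it with kubectl apply -f against that file.
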
Brad Vrabete May 18, 2021

@Justin Thomas Thanks for that.

Maybe worth adding to the documentation: the labels derived from the UUIDs cannot use curly brackets, but the runner requires the environment variables linked to the UUIDs to have the brackets included in the string. It would be helpful to mention that in the template/documentation.


metadata:
  labels:
    # Add your account uuid to optionally allow finding the secret for an account
    accountUuid: xxxx-[...]-xxxx
# [...]
spec:
  containers:
    - name: runner
      # [...]
      env:
        - name: ACCOUNT_UUID
          value: "{xxx...-xxx-xxxx}"

Brad Vrabete May 18, 2021

As an update to the documentation suggestions: the environment variables defined in the secret (the OAuth client id and secret) must not be enclosed in quotes. Hopefully that helps other people trying the same.
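
For illustration, the data section of the runner-oauth-credentials secret from the spec above would then look roughly like this (the values are placeholders, not real credentials):

data:
  # base64-encoded values, written without surrounding quotes
  oauthClientId: <base64-encoded-oauth-client-id>
  oauthClientSecret: <base64-encoded-oauth-client-secret>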

I have the runner working in my local cluster now.

If there is a suggestion I could make (and some other people have said the same): requiring a runner per repository is quite excessive. A runner should be tied to the target infrastructure. Having to define a new runner for each repo makes very little sense when, beyond that, one only has to define the jobs that get executed in the Bitbucket pipeline.

Luis May 21, 2021

Hi @Justin Thomas ,

first of all, I love the new functionality!

I was wondering if there is a way to get the status of a runner via the REST API, or some other way to show it as a component in Statuspage.

Kind regards, Luis

polRk May 25, 2021

Why are runners on macOS so slow? 15 minutes for a simple Node.js project with build and test phases? In Atlassian-hosted Pipelines it completes in under 5s.

Erwan d_Orgeville May 25, 2021

I am trying to make the runner work under Docker for Desktop on Windows using the following `docker-compose.yml` file:

 

version: '2.0'

services:
  bitbucket-runner:
    image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
    container_name: runner
    volumes:
      - tmp:/tmp
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    environment:
      ACCOUNT_UUID:
      REPOSITORY_UUID:
      RUNNER_UUID:
      OAUTH_CLIENT_ID:
      OAUTH_CLIENT_SECRET:
      WORKING_DIRECTORY: /tmp

volumes:
  tmp:

I put my credentials in `.env`, and after `docker-compose up` the runner comes online.

Unfortunately, any pipeline that tries running on it results in an error:

An error occurred whilst creating container exec.
com.github.dockerjava.api.exception.ConflictException: Status 409: {"message":"Container 84fbd9c290030fc016958b93671ba431fb199d90f63e4cb60aab94fd68dd0adc is not running"}

The container ID changes on each pipeline. I get that Windows is not supported, but is there anything obvious I may have missed?

Thanks

Riccardo Trivellato May 26, 2021

Thanks @Brad Vrabete for the YAML file!
I'm using your configuration to run a Deployment instead of a Job inside my k8s (GKE) cluster, but every time I run a pipeline I get this error:
```
fatal: unable to access https://x-token-auth:$REPOSITORY_OAUTH_ACCESS_TOKEN@bitbucket.org/*/*.git/: SSL certificate problem: self signed certificate in certificate chain
```
How could I fix it? Thanks a lot!

Brad Vrabete May 26, 2021

@Riccardo Trivellato In the end I gave up on using Bitbucket local runners. I wasted a day and still it was not clear how I could run a kubectl script. I ended up using Azure local pipelines, which are not tied to a repository and have much better documentation.
Sorry Bitbucket team, you need to catch up with your competition.

