Use this tool to set up and scale Bitbucket Pipelines self-hosted runners. It allows you to:
avoid manually setting up runners in the Bitbucket UI
set up multiple runners at once
manage runners through file-based configuration
autoscale runners according to the current workload
Before you start, make sure that:
a Kubernetes cluster is running and accessible
kubectl is installed on your local machine
the operating system of the nodes is Linux/amd64
the controller has the suggested 8 GB of memory available
the Kubernetes nodes have network access to Bitbucket
You will also need:
a Bitbucket username
a Bitbucket app password (base64-encoded, with the repository:read, account:read and runner:write permissions)
Example:
echo -n myGeneratedAppPassword | base64
The -n flag ensures that a trailing newline is not included.
The following diagram gives an overview of how the tool works.
Two namespaces are needed. In this example, we call them bitbucket-runner-control-plane and runners.
Once everything is created, a pod in the bitbucket-runner-control-plane namespace runs the controller. This pod is responsible for managing runners in the runners namespace.
You need to create the namespaces:
apiVersion: v1
kind: Namespace
metadata:
  name: bitbucket-runner-control-plane
---
apiVersion: v1
kind: Namespace
metadata:
  name: runners
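Save the manifest to a file and apply it with kubectl (the file name namespaces.yaml is just an example; the remaining manifests in this guide can be applied the same way):
kubectl apply -f namespaces.yaml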
The runner controller reads its parameters from a ConfigMap. Use this template to create it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: runners-autoscaler-config
  namespace: bitbucket-runner-control-plane
data:
  runners_config.yaml: |
    constants:
      default_sleep_time_runner_setup: 10 # seconds. Time between runner creations.
      default_sleep_time_runner_delete: 5 # seconds. Time between runner deletions.
      runner_api_polling_interval: 600 # seconds. Time between requests to the Bitbucket API.
      runner_cool_down_period: 300 # seconds. Time reserved for a runner to set up.
    groups:
      - name: "Runner group 1" # Name of the runner displayed in the Bitbucket Runner UI.
        workspace: "my guid" # TODO - Replace with the workspace UUID. If repository is not specified, workspace runners are created.
        repository: "my guid" # TODO - Optional. Replace with the repository UUID. If specified, repository runners are created.
        labels: # Runners will be created with the following labels.
          - "grp1"
        namespace: "runners" # Namespace where the runners will be created.
        strategy: "percentageRunnersIdle" # More strategies may be supported in the future.
        parameters:
          min: 1 # Minimum number of runners - at least 1 is recommended.
          max: 10 # Maximum number of runners.
          scaleUpThreshold: 0.5 # Percentage of busy runners at which the desired number of runners is re-evaluated to scale up.
          scaleDownThreshold: 0.2 # Percentage of busy runners at which the desired number of runners is re-evaluated to scale down.
          scaleUpMultiplier: 1.5 # scaleUpMultiplier > 1
          scaleDownMultiplier: 0.5 # 0 < scaleDownMultiplier < 1
PS: Review the parameters and replace the workspace and repository (if applicable).
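To make the scaling parameters concrete, here is a hypothetical walkthrough with the values above (the exact rounding the controller applies may differ): suppose the group currently has 4 runners and 3 of them are busy. The busy percentage is 3/4 = 0.75, which exceeds scaleUpThreshold (0.5), so the desired count is re-evaluated to about 4 × scaleUpMultiplier = 4 × 1.5 = 6 runners, capped at max (10). If later only 1 of those 6 runners is busy, the busy percentage is about 0.17, below scaleDownThreshold (0.2), so the desired count shrinks to about 6 × scaleDownMultiplier = 6 × 0.5 = 3 runners, never dropping below min (1).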
This ConfigMap holds the template that will be used to create new runner jobs. There is no need to modify anything here.
apiVersion: v1
kind: ConfigMap
metadata:
  name: runners-autoscaler-job-template
  namespace: bitbucket-runner-control-plane
data:
  job.yaml.template: |
    apiVersion: v1
    kind: List
    items:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: runner-oauth-credentials-<%runnerUuid%> # mandatory, don't modify
          labels:
            accountUuid: <%accountUuid%> # mandatory, don't modify
            {%- if repositoryUuid %}
            repositoryUuid: <%repositoryUuid%> # mandatory, don't modify
            {%- endif %}
            runnerUuid: <%runnerUuid%> # mandatory, don't modify
            runnerNamespace: <%runnerNamespace%> # mandatory, don't modify
        data:
          oauthClientId: <%oauthClientId_base64%>
          oauthClientSecret: <%oauthClientSecret_base64%>
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: runner-<%runnerUuid%> # mandatory, don't modify
        spec:
          template:
            metadata:
              labels:
                customer: shared
                accountUuid: <%accountUuid%> # mandatory, don't modify
                runnerUuid: <%runnerUuid%> # mandatory, don't modify
                {%- if repositoryUuid %}
                repositoryUuid: <%repositoryUuid%> # mandatory, don't modify
                {%- endif %}
                runnerNamespace: <%runnerNamespace%> # mandatory, don't modify
            spec:
              containers:
                - name: runner
                  image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner # mandatory, don't modify
                  resources:
                    requests:
                      memory: "4Gi"
                      cpu: "1000m"
                    limits:
                      memory: "4Gi"
                      cpu: "1000m"
                  env:
                    - name: ACCOUNT_UUID # mandatory, don't modify
                      value: "{<%accountUuid%>}" # mandatory, don't modify
                    {%- if repositoryUuid %}
                    - name: REPOSITORY_UUID # mandatory, don't modify
                      value: "{<%repositoryUuid%>}" # mandatory, don't modify
                    {%- endif %}
                    - name: RUNNER_UUID # mandatory, don't modify
                      value: "{<%runnerUuid%>}" # mandatory, don't modify
                    - name: OAUTH_CLIENT_ID
                      valueFrom:
                        secretKeyRef:
                          name: runner-oauth-credentials-<%runnerUuid%>
                          key: oauthClientId
                    - name: OAUTH_CLIENT_SECRET
                      valueFrom:
                        secretKeyRef:
                          name: runner-oauth-credentials-<%runnerUuid%>
                          key: oauthClientSecret
                    - name: WORKING_DIRECTORY
                      value: "/tmp"
                  volumeMounts:
                    - name: tmp
                      mountPath: /tmp
                    - name: docker-containers
                      mountPath: /var/lib/docker/containers
                      readOnly: true
                    - name: var-run
                      mountPath: /var/run
                - name: docker
                  image: docker:dind
                  securityContext:
                    privileged: true
                  volumeMounts:
                    - name: tmp
                      mountPath: /tmp
                    - name: docker-containers
                      mountPath: /var/lib/docker/containers
                    - name: var-run
                      mountPath: /var/run
              restartPolicy: OnFailure
              volumes: # emptyDir volumes shared between the runner and dind containers
                - name: tmp
                  emptyDir: {}
                - name: docker-containers
                  emptyDir: {}
                - name: var-run
                  emptyDir: {}
              nodeSelector:
                customer: shared
          backoffLimit: 6
          completions: 1
          parallelism: 1
PS: Notice that there’s a nodeSelector tag above. Make sure your node group has this label.
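If your nodes don't have the label yet, you can add it manually with kubectl (replace <node-name> with one of your node names):
kubectl label nodes <node-name> customer=shared
In managed environments it's usually better to set the label on the node group itself, so new nodes get it automatically.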
Next, we need to create the ServiceAccount, ClusterRole and ClusterRoleBinding to be used by the controller:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: runners-autoscaler
  namespace: bitbucket-runner-control-plane
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: runners-autoscaler
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
      - create
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - create
      - delete
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - create
      - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: runners-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: runners-autoscaler
subjects:
  - kind: ServiceAccount
    name: runners-autoscaler
    namespace: bitbucket-runner-control-plane
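Once these are applied, you can sanity-check that the service account is allowed to create runner jobs, for example:
kubectl auth can-i create jobs --as=system:serviceaccount:bitbucket-runner-control-plane:runners-autoscaler -n runners
The command should print "yes".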
Finally, there are the Secret and the Deployment for the controller. Make sure you set the bitbucketClientSecret and the username properly.
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: runner-bitbucket-credentials
      namespace: bitbucket-runner-control-plane
    data:
      bitbucketClientSecret: "my-base64-encoded-secret" # TODO replace with the base64-encoded Bitbucket app password
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: runner-controller
      namespace: bitbucket-runner-control-plane
      labels:
        app: runner-controller
    spec:
      selector:
        matchLabels:
          app: runner-controller
      template:
        metadata:
          name: runner-controller-pod
          labels:
            app: runner-controller
        spec:
          serviceAccountName: runners-autoscaler
          containers:
            - name: runner-controller
              image: runners-autoscaller-local:latest
              volumeMounts:
                - name: runners-autoscaler-config
                  mountPath: /opt/conf/config
                  readOnly: true
                - name: runners-autoscaler-job-template
                  mountPath: /opt/conf/job_template
                  readOnly: true
              env:
                - name: BITBUCKET_USERNAME
                  value: 'my-user-name' # TODO replace with your username
                - name: BITBUCKET_APP_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: runner-bitbucket-credentials
                      key: bitbucketClientSecret
              imagePullPolicy: IfNotPresent
          volumes:
            - name: runners-autoscaler-config
              configMap:
                name: runners-autoscaler-config
                defaultMode: 0644
                items:
                  - key: runners_config.yaml
                    path: runners_config.yaml
            - name: runners-autoscaler-job-template
              configMap:
                name: runners-autoscaler-job-template
                defaultMode: 0644
                items:
                  - key: job.yaml.template
                    path: job.yaml.template
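After applying the manifest, verify that the controller comes up:
kubectl get pods -n bitbucket-runner-control-plane
kubectl rollout status deployment/runner-controller -n bitbucket-runner-control-plane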
Check if the runners are being created in the target namespace.
kubectl get pods -n runners
If the pods are having problems, describe them to analyse the probable cause.
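For example, assuming <pod-name> is one of the pods listed by the previous command:
kubectl describe pod <pod-name> -n runners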
If the runners were not created, check the controller logs.
export CONTROLLER_POD_NAME=$(kubectl get pods -n bitbucket-runner-control-plane -o jsonpath='{.items[*].metadata.name}')
kubectl logs -f $CONTROLLER_POD_NAME -n bitbucket-runner-control-plane
Things to review in case of errors:
Is the Bitbucket Username correct?
Is the Bitbucket App Password set up properly? Is it in base64 format?
Were the resources created in the right namespaces?
Does the node have network connectivity to Bitbucket?
Does the node have a label that matches the nodeSelector in the config map for runner jobs?
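The last two items can be checked quickly from the command line. To list the nodes together with the label used by the nodeSelector (customer in the template above):
kubectl get nodes -L customer
To test network connectivity to Bitbucket from inside the cluster (the pod name net-test and the curl image are just examples):
kubectl run net-test -n runners --rm -i --image=curlimages/curl --restart=Never -- -sI https://bitbucket.org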
By default, Kubernetes Secrets are stored unencrypted, so you should enable encryption at rest.
If you usually store secrets using third-party tools such as Vault or AWS Secrets Manager, consider using the External Secrets Operator.
Your cluster will need an autoscaler for the nodes. We recommend a tool optimized for large batch or job-based workloads, such as Escalator; please check its deployment docs. For the AWS provider, use the AWS deployment instead of the regular one.
You will notice there's a nodeSelector in the ConfigMap for the runner job.
Therefore, the nodes the runners will run on need to have a matching label. In AWS EKS, this can be configured via Managed Node Groups.
This label must also match the one configured in the Escalator ConfigMap.
You will notice that the resources tag is defined inside the ConfigMap for the runner job.
It might be worth tweaking the memory/CPU requests and limits according to your needs.
For example, if you use instances with 8 GB of memory, a 4Gi request may not be worthwhile: it takes slightly more than half of the node's allocatable memory, so only one runner pod would fit per instance.
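To see how much memory is actually allocatable on a node, describe it and look at the Allocatable section:
kubectl describe node <node-name>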
The Bitbucket Pipelines Runner Autoscaler for Kubernetes code is open source and can be found here.
We're very interested in any comments you may have around your experience with the Runner Autoscaler for Kubernetes! Please leave your feedback in this community group.
For further assistance on using this tool:
Post your question in this community group.