Bitbucket Pipelines - Runner Autoscaler for Kubernetes - Getting Started

Use this tool to set up and scale Bitbucket Pipelines self-hosted runners. It allows you to:

  • avoid manually setting up runners in the Bitbucket UI

  • set up multiple runners at once

  • use the provided file-based configuration

  • autoscale runners according to the current workload

 

Requirements

  • A Kubernetes cluster is running and accessible

  • kubectl is installed on your local machine

  • The operating system of the nodes is Linux/amd64

  • Suggested memory for the controller is 8 GB

  • Kubernetes nodes have network access to Bitbucket

  • A Bitbucket username

  • A Bitbucket app password (base64-encoded, with the repository:read, account:read, and runner:write permissions)

How to generate an app password

See here.

How to encode base64

Example:

echo -n myGeneratedAppPassword | base64

The -n flag ensures that a trailing newline is not included in the encoded value.
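
To sanity-check the value, you can round-trip it; the output should match the original app password exactly:

echo -n myGeneratedAppPassword | base64 | base64 --decode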

 

Creating Kubernetes Resources


The following diagram gives an overview of how it works.

[Diagram: Bitbucket Pipelines Runner Autoscaler for Kubernetes, showing the controller in the bitbucket-runner-control-plane namespace managing runner jobs in the runners namespace]

Two namespaces are needed. In this example, we call them bitbucket-runner-control-plane and runners.

Once everything is created, there will be a pod in the bitbucket-runner-control-plane namespace running the controller. This pod is responsible for managing runners in the runners namespace.

 

Setup Namespaces

You need to create the namespaces:

apiVersion: v1
kind: Namespace
metadata:
   name: bitbucket-runner-control-plane
---
apiVersion: v1
kind: Namespace
metadata:
   name: runners
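
Save the manifest to a file (namespaces.yaml is just an example name here), apply it, and confirm both namespaces exist:

kubectl apply -f namespaces.yaml
kubectl get namespace bitbucket-runner-control-plane runners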

 

Setup runner controller config map

The runner controller uses parameters present in a ConfigMap. Use this template to create it.

apiVersion: v1
kind: ConfigMap
metadata:
  name: runners-autoscaler-config
  namespace: bitbucket-runner-control-plane
data:
  runners_config.yaml: |
    constants:
      default_sleep_time_runner_setup: 10 # seconds. Time between runners creation.
      default_sleep_time_runner_delete: 5 # seconds. Time between runners deletion.
      runner_api_polling_interval: 600 # seconds. Time between requests to Bitbucket API.
      runner_cool_down_period: 300 # seconds. Time reserved for runner to set up.
    groups:
      - name: "Runner group 1" # Name of the Runner displayed in the Bitbucket Runner UI.
        workspace: "my guid" # TODO - Replace the workspace guid - if repository is Not specified, it creates workspace runners.
        repository: "my guid" # TODO - Optional. Replace the repo guid - If specified, it will create repository runners.
        labels: # runner will be created with the following labels
          - "grp1"
        namespace: "runners" # namespace where runners are going to be created
        strategy: "percentageRunnersIdle" # in the future more strategies will be supported
        parameters:
          min: 1  # min number of runners - recommended at least 1
          max: 10 # max number of runners
          scaleUpThreshold: 0.5  # The percentage of busy runners at which the number of desired runners is re-evaluated to scale up.
          scaleDownThreshold: 0.2  # The percentage of busy runners at which the number of desired runners is re-evaluated to scale down.
          scaleUpMultiplier: 1.5  #  scaleUpMultiplier > 1
          scaleDownMultiplier: 0.5  #  0 < scaleDownMultiplier < 1

PS: Review the parameters and replace the workspace and repository UUIDs (if applicable).
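
If you are unsure of these UUIDs, one way to look them up (assuming the Bitbucket Cloud 2.0 REST API; my-user-name, my-app-password, my-workspace-slug, and my-repo-slug are placeholders) is to query the API and read the uuid field in each response:

curl -s -u my-user-name:my-app-password https://api.bitbucket.org/2.0/workspaces/my-workspace-slug
curl -s -u my-user-name:my-app-password https://api.bitbucket.org/2.0/repositories/my-workspace-slug/my-repo-slug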

 

Setup runner job config map

This config map contains a template that will be used to create new runner jobs. You do not need to modify anything here.

apiVersion: v1
kind: ConfigMap
metadata:
  name: runners-autoscaler-job-template
  namespace: bitbucket-runner-control-plane
data:
  job.yaml.template: |
    apiVersion: v1
    kind: List
    items:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: runner-oauth-credentials-<%runnerUuid%>  # mandatory, don't modify
          labels:
            accountUuid: <%accountUuid%>  # mandatory, don't modify
        {%- if repositoryUuid %}
            repositoryUuid: <%repositoryUuid%>  # mandatory, don't modify
        {%- endif %}
            runnerUuid: <%runnerUuid%>  # mandatory, don't modify
            runnerNamespace: <%runnerNamespace%>  # mandatory, don't modify
        data:
          oauthClientId: <%oauthClientId_base64%>
          oauthClientSecret: <%oauthClientSecret_base64%>
      - apiVersion: batch/v1
        kind: Job
        metadata:
          name: runner-<%runnerUuid%>  # mandatory, don't modify
        spec:
          template:
            metadata:
              labels:
                customer: shared
                accountUuid: <%accountUuid%>  # mandatory, don't modify
                runnerUuid: <%runnerUuid%>  # mandatory, don't modify
            {%- if repositoryUuid %}
                repositoryUuid: <%repositoryUuid%>  # mandatory, don't modify
            {%- endif %}
                runnerNamespace: <%runnerNamespace%>  # mandatory, don't modify
            spec:
              containers:
                - name: runner
                  image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner  # mandatory, don't modify
                  resources:
                    requests:
                      memory: "4Gi"
                      cpu: "1000m"
                    limits:
                      memory: "4Gi"
                      cpu: "1000m"
                  env:
                    - name: ACCOUNT_UUID  # mandatory, don't modify
                      value: "{<%accountUuid%>}"  # mandatory, don't modify
                {%- if repositoryUuid %}
                    - name: REPOSITORY_UUID  # mandatory, don't modify
                      value: "{<%repositoryUuid%>}"  # mandatory, don't modify
                {%- endif %}
                    - name: RUNNER_UUID  # mandatory, don't modify
                      value: "{<%runnerUuid%>}"  # mandatory, don't modify
                    - name: OAUTH_CLIENT_ID
                      valueFrom:
                        secretKeyRef:
                          name: runner-oauth-credentials-<%runnerUuid%>
                          key: oauthClientId
                    - name: OAUTH_CLIENT_SECRET
                      valueFrom:
                        secretKeyRef:
                          name: runner-oauth-credentials-<%runnerUuid%>
                          key: oauthClientSecret
                    - name: WORKING_DIRECTORY
                      value: "/tmp"
                  volumeMounts:
                    - name: tmp
                      mountPath: /tmp
                    - name: docker-containers
                      mountPath: /var/lib/docker/containers
                      readOnly: true
                    - name: var-run
                      mountPath: /var/run
                - name: docker
                  image: docker:dind
                  securityContext:
                    privileged: true
                  volumeMounts:
                    - name: tmp
                      mountPath: /tmp
                    - name: docker-containers
                      mountPath: /var/lib/docker/containers
                    - name: var-run
                      mountPath: /var/run
              restartPolicy: OnFailure
              volumes:
                - name: tmp
                - name: docker-containers
                - name: var-run
              nodeSelector:
                customer: shared
          backoffLimit: 6
          completions: 1
          parallelism: 1

PS: Notice that there’s a nodeSelector field above. Make sure your node group has this label.
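
For example, assuming a node named my-node-1 (a placeholder), you can add the label and then verify which nodes carry it with:

kubectl label nodes my-node-1 customer=shared
kubectl get nodes -l customer=shared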

Setup Service Account and Role

Next, we need to create the ServiceAccount, ClusterRole and ClusterRoleBinding to be used by the controller:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: runners-autoscaler
  namespace: bitbucket-runner-control-plane
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: runners-autoscaler
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - create
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - delete
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - create
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: runners-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: runners-autoscaler
subjects:
  - kind: ServiceAccount
    name: runners-autoscaler
    namespace: bitbucket-runner-control-plane
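
After applying this manifest, you can spot-check that the permissions were granted as expected; both commands should print "yes":

kubectl auth can-i create jobs -n runners --as=system:serviceaccount:bitbucket-runner-control-plane:runners-autoscaler
kubectl auth can-i delete secrets -n runners --as=system:serviceaccount:bitbucket-runner-control-plane:runners-autoscaler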

Setup runner controller deployment

Finally, create the Secret and the Deployment for the controller. Make sure you set the bitbucketClientSecret and the username correctly.

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: runner-bitbucket-credentials
      namespace: bitbucket-runner-control-plane
    data:
      bitbucketClientSecret: "my-base64-encoded-secret" # TODO replace with the base64 encoded bitbucket app password
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: runner-controller
      namespace: bitbucket-runner-control-plane
      labels:
        app: runner-controller
    spec:
      selector:
        matchLabels:
          app: runner-controller
      template:
        metadata:
          name: runner-controller-pod
          labels:
            app: runner-controller
        spec:
          serviceAccountName: runners-autoscaler
          containers:
          - name: runner-controller
            image: runners-autoscaller-local:latest # TODO replace with the published runners-autoscaler image (see the comments below)
            volumeMounts:
            - name: runners-autoscaler-config
              mountPath: /opt/conf/config
              readOnly: true
            - name: runners-autoscaler-job-template
              mountPath: /opt/conf/job_template
              readOnly: true
            env:
              - name: BITBUCKET_USERNAME
                value: 'my-user-name' # TODO replace with username
              - name: BITBUCKET_APP_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: runner-bitbucket-credentials
                    key: bitbucketClientSecret
            imagePullPolicy: IfNotPresent
          volumes:
            - name: runners-autoscaler-config
              configMap:
                name: runners-autoscaler-config
                defaultMode: 0644
                items:
                  - key: runners_config.yaml
                    path: runners_config.yaml
            - name: runners-autoscaler-job-template
              configMap:
                name: runners-autoscaler-job-template
                defaultMode: 0644
                items:
                  - key: job.yaml.template
                    path: job.yaml.template
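
After applying this manifest, check that the controller pod starts and stays healthy:

kubectl rollout status deployment/runner-controller -n bitbucket-runner-control-plane
kubectl get pods -n bitbucket-runner-control-plane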

Troubleshooting

Check if the runners are being created in the target namespace.

kubectl get pods -n runners

If they are having problems, describe the pods to identify the probable cause.
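
For example (<runner-pod-name> is a placeholder for a name from the previous command):

kubectl describe pod <runner-pod-name> -n runners
kubectl get events -n runners --sort-by=.metadata.creationTimestamp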

If the runners were not created at all, check the controller logs:

export CONTROLLER_POD_NAME=$(kubectl get pods -n bitbucket-runner-control-plane -l app=runner-controller -o jsonpath='{.items[0].metadata.name}')
kubectl logs -f $CONTROLLER_POD_NAME -n bitbucket-runner-control-plane

 

Things to review in case of errors:

  • Is the Bitbucket Username correct?

  • Is the Bitbucket App Password set up properly? Is it in base64 format?

  • Were the resources created in the right namespaces?

  • Does the node have network connectivity to Bitbucket?

  • Does the node have a label that matches the nodeSelector in the config map for runner jobs?

Recommendations

Secrets

  • By default, Kubernetes Secrets are stored unencrypted. You need to enable Encryption at Rest (a minimal sketch follows this list). Read more.

  • If you usually store secrets using 3rd party tools such as Vault or AWS Secrets Manager, consider using the External Secrets Operator.
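
As a rough illustration of enabling Encryption at Rest: the API server reads an EncryptionConfiguration file referenced by its --encryption-provider-config flag. This is only a sketch; the key name and secret below are placeholders, and managed clusters typically expose this setting through the cloud provider instead:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>  # placeholder
      - identity: {}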

Node Auto Scaler

Your cluster will need a horizontal autoscaler for the nodes. We recommend using a tool that is optimized for large batch or job-based workloads, such as Escalator. Please check the deployment docs. For the AWS provider, use the AWS deployment instead of the regular one.

Configuring Nodes

You will notice there's a nodeSelector in the config map for the runner job.

Therefore, the nodes that the runners will run on need to have a matching label. In AWS EKS, this can be configured via Managed Node Groups.

This label must also match the one you configured in the Escalator config map.
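
If you manage EKS node groups with eksctl, the label can be set in the ClusterConfig. This is a sketch under that assumption; the cluster name, region, and instance settings are placeholders:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster   # placeholder
  region: us-east-1  # placeholder
managedNodeGroups:
  - name: runners
    instanceType: m5.large
    desiredCapacity: 2
    labels:
      customer: shared  # must match the nodeSelector in the runner job template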

Tweaking resources

You will notice that the resources tag is defined inside the config map for the runner job.

It might be worth tweaking the memory/CPU requests and limits according to your needs.

For example, if you use an 8 GB instance size, a 4Gi request may not be a good fit: it takes slightly more than half of the allocatable memory, so only one runner pod would fit per instance.
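
For instance, to fit roughly two runner pods on an 8 GB node, you could lower the values in the runner job config map to something like this (illustrative values only; adjust to what your builds actually need):

resources:
  requests:
    memory: "2Gi"
    cpu: "500m"
  limits:
    memory: "2Gi"
    cpu: "500m"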

Source code

The Bitbucket Pipelines Runner Autoscaler for Kubernetes code is open source and can be found here.

 

Leaving Feedback

We're very interested in any comments you may have around your experience with the Runner Autoscaler for Kubernetes! Please leave your feedback in this community group. 

Need Help?

For further assistance on using this tool:

  • Post your question in this community group.

5 comments

Dima Grushkin September 28, 2022

Hi here. What image should we use instead of runners-autoscaller-local?

Normal BackOff 21s kubelet Back-off pulling image "runners-autoscaller-local:latest"
Warning Failed 21s kubelet Error: ImagePullBackOff
Normal Pulling 8s (x2 over 22s) kubelet Pulling image "runners-autoscaller-local:latest"

Alex Radzishevsky October 14, 2022

I just checked their source code and it looks like it should be "bitbucketpipelines/runners-autoscaler:1.7.0"

Alex Radzishevsky October 15, 2022

My feedback: I managed to successfully set it up on GCP. All works, auto-scaling works. I have only one concern: the setup requires at least one runner pod to be running, and runners require (in my case) a relatively expensive configuration (CPU / RAM), so I will always have a large runner up while idle. Is there any alternative setup where we have something small running, just to keep the connection to Bitbucket and to control bigger runners that actually perform the builds and can be scaled down to zero?

Deleted user November 7, 2022

How does size: 2x work with this implementation?

Is it possible to have job templates for different runner sizes?

Filip Swiatczak October 20, 2023

I have the autoscaler working famously.
Next up is the Escalator to auto-scale the cluster size. I've run into a few issues with that project (atlassian/escalator: Escalator is a batch or job optimized horizontal autoscaler for Kubernetes (github.com)),
so I'm here to ask if anyone has found a similar Atlassian group/channel for the Escalator please? Thank you!
