This article is the result of extensive research and practical experience setting up and running the Runner autoscaler on an on-premises Kubernetes cluster at a mid-stage startup. It is meant as a quick-start guide to help engineers get the Runner autoscaler running quickly, so that their runners are easy to extend and operate at scale.
Recommended: Go through the official MicroK8s documentation and understand how MicroK8s works before proceeding.
On the master node, update the system and install the prerequisites:
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt install -y apt-transport-https curl gnupg lsb-release
sudo snap install microk8s --classic
sudo microk8s status --wait-ready
sudo vi /etc/netplan/00-installer-config.yaml
sudo netplan apply
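A static IP for each node makes the cluster easier to operate. The netplan file below is an illustrative sketch only; the interface name (eth0), addresses, and gateway are placeholder assumptions you must replace with your own values:

```yaml
# Example /etc/netplan/00-installer-config.yaml (illustrative values only)
network:
  version: 2
  ethernets:
    eth0:                       # replace with your interface name (see `ip link`)
      dhcp4: false
      addresses:
        - 192.168.1.10/24       # static IP for this node
      routes:
        - to: default
          via: 192.168.1.1      # your gateway
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
```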
sudo hostnamectl set-hostname kubemaster # or any other name you prefer
Generate a join command on the master:
sudo microk8s add-node
Follow the on-screen instructions to join worker nodes using the generated token.
You need Helm to install the Runner Autoscaler as it is packaged as a Helm chart.
sudo snap install helm --classic
On each worker node, repeat the base setup:
sudo apt-get update -y
sudo apt install -y vim
sudo snap install microk8s --classic
sudo microk8s status --wait-ready
sudo microk8s join <MASTER_NODE_IP>:<PORT>/<TOKEN> --worker
git clone git@bitbucket.org:bitbucketpipelines/runners-autoscaler.git
cd runners-autoscaler/kustomize
git checkout 3.7.0 # Make sure to check out the latest available version
Edit the values/runners_config.yaml file to configure your runner groups and scaling parameters, and the values/kustomization.yaml file to add your Bitbucket OAuth credentials:
sudo vi values/runners_config.yaml
sudo vi values/kustomization.yaml
runners_config.yaml

constants:
  default_sleep_time_runner_setup: 10 # value in seconds
  default_sleep_time_runner_delete: 5 # value in seconds
  runner_api_polling_interval: 600 # value in seconds
  runner_cool_down_period: 300 # value in seconds
groups:
  - name: "Linux Docker Runners"
    workspace: "YOURWORKSPACENAME" # workspace name
    labels:
      - "self.hosted"
      - "linux"
      - "runner.docker"
    namespace: "default"
    strategy: "percentageRunnersIdle"
    parameters:
      min: 4
      max: 8
      scale_up_threshold: 0.5
      scale_down_threshold: 0.2
      scale_up_multiplier: 1.5
      scale_down_multiplier: 0.5
    resources:
      requests:
        memory: "4Gi"
        cpu: "2000m"
      limits:
        memory: "4Gi"
        cpu: "2000m"
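To build intuition for the scaling parameters, here is a hypothetical sketch of how a percentageRunnersIdle decision could play out (the autoscaler's actual formula may differ): with 3 of 4 runners busy, utilisation is 0.75, which exceeds scale_up_threshold 0.5, so the pool grows by scale_up_multiplier 1.5, capped at max.

```shell
# Hypothetical illustration only -- not the autoscaler's exact algorithm.
busy=3; total=4; max=8
desired=$(awk -v b="$busy" -v t="$total" -v max="$max" \
  'BEGIN { util = b / t
           d = (util > 0.5) ? int(t * 1.5 + 0.999) : t   # round up after multiplying
           if (d > max) d = max
           print d }')
echo "$desired"   # → 6
```

With the values above, 4 runners would be scaled to 6; a subsequent poll with utilisation below scale_down_threshold 0.2 would shrink the pool again, but never below min.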
kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
configMapGenerator:
  - name: runners-autoscaler-config
    files:
      - runners_config.yaml
    options:
      disableNameSuffixHash: true
namespace: bitbucket-runner-control-plane
commonLabels:
  app.kubernetes.io/part-of: runners-autoscaler
images:
  - name: bitbucketpipelines/runners-autoscaler
    newTag: 3.7.0 # Ensure this matches the version you checked out earlier.
patches:
  - target:
      version: v1
      kind: Secret
      name: runner-bitbucket-credentials
    patch: |-
      ### Option 1 ###
      - op: add
        path: /data/bitbucketOauthClientId
        value: "ENTER_BITBUCKET_OAUTH_CLIENT_ID_HERE_WITHIN_QUOTES"
      - op: add
        path: /data/bitbucketOauthClientSecret
        value: "ENTER_BITBUCKET_OAUTH_CLIENT_SECRET_HERE_WITHIN_QUOTES"
      ### Option 2 ###
      # - op: add
      #   path: /data/bitbucketUsername
      #   value: ""
      # - op: add
      #   path: /data/bitbucketAppPassword
      #   value: ""
  - target:
      version: v1
      kind: Deployment
      labelSelector: "inject=runners-autoscaler-envs"
    patch: |-
      ### Option 1 ###
      - op: replace
        path: /spec/template/spec/containers/0/env
        value:
          - name: BITBUCKET_OAUTH_CLIENT_ID
            valueFrom:
              secretKeyRef:
                key: bitbucketOauthClientId
                name: runner-bitbucket-credentials
          - name: BITBUCKET_OAUTH_CLIENT_SECRET
            valueFrom:
              secretKeyRef:
                key: bitbucketOauthClientSecret
                name: runner-bitbucket-credentials
      ### Option 2 ###
      # - op: replace
      #   path: /spec/template/spec/containers/0/env
      #   value:
      #     - name: BITBUCKET_USERNAME
      #       valueFrom:
      #         secretKeyRef:
      #           key: bitbucketUsername
      #           name: runner-bitbucket-credentials
      #     - name: BITBUCKET_APP_PASSWORD
      #       valueFrom:
      #         secretKeyRef:
      #           key: bitbucketAppPassword
      #           name: runner-bitbucket-credentials
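Note that the Secret patch writes under /data, where Kubernetes expects base64-encoded values, so encode your credentials before pasting them in. A short sketch (the apply command is an assumption based on the kustomize directory layout used earlier; adjust the path to match your checkout):

```shell
# Base64-encode the OAuth credentials before pasting them into the patch.
# Use -n so the trailing newline is not encoded into the value.
echo -n 'my-oauth-client-id' | base64       # → bXktb2F1dGgtY2xpZW50LWlk
echo -n 'my-oauth-client-secret' | base64

# Then, from the directory containing kustomization.yaml, apply the overlay:
# sudo microk8s kubectl apply -k .
```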
sudo microk8s kubectl get nodes
sudo microk8s kubectl get pods -n bitbucket-runner-control-plane --field-selector=status.phase=Running
sudo microk8s kubectl logs -f runner-controller-<pod-name> -n bitbucket-runner-control-plane
After making changes to runners_config.yaml, monitor cluster resource usage to confirm the configured requests and limits fit your nodes:
sudo microk8s kubectl top nodes
When following logs from several pods at once, also check that --max-log-requests is not limiting the logs excessively.
Use MicroCeph (or plain Ceph) to manage the storage backing the MicroK8s nodes for better flexibility and scalability. Refer to the MicroCeph Multi-Node Install Guide.
Gajesh Bhat
DevOps Engineer
Motorola Solutions
Vancouver, BC, Canada