Runner Autoscaler has no information about previous configuration

Aaron.Luo May 15, 2022

Hi team,

We've set up a group with Runner Autoscaler for workspace-a. Later we decided to move to workspace-b, so we updated the runners-autoscaler-config ConfigMap to add a group for workspace-b and removed the group for workspace-a.

After the Runner Autoscaler was restarted, it created runners for the workspace-b group but didn't delete the runners for workspace-a, which left some orphaned workspace-a runners in the k8s cluster.

This also happened when we renamed a group: a new group was created, but the old group was still in Bitbucket Cloud, and its orphaned runners were left unmanaged.

We wonder if Runner Autoscaler could automatically clean up resources that were in the previous configuration but are not in the current one. Thanks.

Kind regards,

Aaron

2 comments

Aaron.Luo May 16, 2022

One approach would be to diff the new config against the last applied config (the kubectl.kubernetes.io/last-applied-configuration annotation on the ConfigMap), then decide which groups should be deleted and which should be added. A rough sketch is below.
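A minimal sketch of that diff, assuming the ConfigMap stores the config under a runners_config.yaml data key and the config has a top-level groups list whose entries have a name field (both assumptions; adjust to the actual layout):

```python
import json
import subprocess

import yaml  # PyYAML


def group_names(config_text):
    """Return the set of runner group names defined in the config YAML."""
    config = yaml.safe_load(config_text) or {}
    return {group["name"] for group in config.get("groups", [])}


# Read the live ConfigMap; kubectl apply stores the object as of the last
# apply, as JSON, under the last-applied-configuration annotation.
cm = json.loads(subprocess.check_output(
    ["kubectl", "get", "configmap", "runners-autoscaler-config", "-o", "json"]
))
annotation = cm["metadata"].get("annotations", {}).get(
    "kubectl.kubernetes.io/last-applied-configuration"
)
old = group_names(json.loads(annotation)["data"]["runners_config.yaml"]) if annotation else set()

# The new local config we are about to apply.
with open("runners_config.yaml") as f:
    new = group_names(f.read())

print("groups to delete:", old - new)
print("groups to add:", new - old)
```

Running this before applying the new ConfigMap would tell the autoscaler (or an operator) exactly which groups disappeared and therefore which runners need cleaning up.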

Oleksandr Kyrdan (Atlassian Team) December 26, 2022

Hi @Aaron.Luo 

Thank you for your question!

Runner Autoscaler is a stateless tool. Its sources of truth are:
1) your local config, and
2) your cloud runners configuration in Bitbucket.

Runner Autoscaler doesn’t keep the previous state of the resources.

The main idea behind Runner Autoscaler is to provide a straightforward solution for scaling while keeping the tool simple.

So, in the near future, we are not planning to support the case you described.
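In the meantime, any cleanup after a config change would have to happen on your side. As a minimal sketch, assuming the runner deployments live in a runners namespace and carry a hypothetical runnerGroup label (both assumptions; substitute the namespace and labels your setup actually uses):

```python
import json
import subprocess

# Group names present in the current config; anything else is orphaned.
CURRENT_GROUPS = {"workspace-b-group"}
NAMESPACE = "runners"  # assumed namespace for the runner deployments

deployments = json.loads(subprocess.check_output(
    ["kubectl", "get", "deployments", "-n", NAMESPACE, "-o", "json"]
))["items"]

for deployment in deployments:
    meta = deployment["metadata"]
    group = meta.get("labels", {}).get("runnerGroup")  # hypothetical label
    if group is not None and group not in CURRENT_GROUPS:
        print(f"deleting orphaned runner deployment {meta['name']} ({group})")
        subprocess.run(
            ["kubectl", "delete", "deployment", meta["name"], "-n", NAMESPACE],
            check=True,
        )
```

Note that this only covers the Kubernetes side; runners that were registered in Bitbucket Cloud still need to be removed from the workspace or repository runner settings there.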

 

Best regards,
Oleksandr Kyrdan
