Hi team,
We ran a scale-out test with pending builds and observed some interesting results.
It seems the autoscaler only counts running runners, even though several runners it had already triggered were still being provisioned (their pods were in Pending status because the Kubernetes nodes did not have enough resources, and the runners showed as "UNREGISTERED" in Bitbucket Cloud). As a result, more runners were triggered than the configured max.
The same applies to scaling in: it did not clean up the "UNREGISTERED" runners in either Bitbucket Cloud or the Kubernetes cluster.
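To illustrate the over-provisioning, here is a minimal sketch (hypothetical function and state names, not the autoscaler's actual code) of a capacity check that counts still-provisioning "UNREGISTERED" runners toward the max, which is what I would have expected:

```python
# Hypothetical sketch: treat runners that are still provisioning
# ("UNREGISTERED" in Bitbucket Cloud) as occupied slots, so the
# autoscaler does not exceed max while pods are still Pending.

def runners_to_create(runners, desired, max_runners):
    """Return how many new runners may be created without exceeding max.

    `runners` is a list of dicts with a "state" key,
    e.g. {"state": "ONLINE"} or {"state": "UNREGISTERED"}.
    """
    # Count both registered and still-provisioning runners.
    occupied = sum(1 for r in runners
                   if r["state"] in ("ONLINE", "UNREGISTERED"))
    return max(0, min(desired, max_runners) - occupied)

# Example: 3 online + 2 still provisioning, max 5 -> no new runners.
print(runners_to_create(
    [{"state": "ONLINE"}] * 3 + [{"state": "UNREGISTERED"}] * 2,
    desired=6, max_runners=5,
))  # 0
```

Counting only "ONLINE" runners in `occupied` reproduces the behaviour we observed: the pending runners are invisible to the check and the max is exceeded.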
Here are the settings I used for the test:
constants:
  default_sleep_time_runner_setup: 10 # seconds. Time between runner creations.
  default_sleep_time_runner_delete: 5 # seconds. Time between runner deletions.
  runner_api_polling_interval: 120 # seconds. Time between requests to the Bitbucket API.
  runner_cool_down_period: 300 # seconds. Time reserved for a runner to set up.
groups:
  - name: "Group1" # Name of the runner displayed in the Bitbucket Runner UI.
    workspace: "my-workspace"
    # repository: "my guid" # TODO - Optional. Replace with the repo guid - if specified, repository runners will be created.
    labels: # runners will be created with the following labels
      - "self.hosted"
      - "group1"
    namespace: "my-namespace" # namespace where runners are going to be created
    strategy: "percentageRunnersIdle" # more strategies will be supported in the future
    parameters:
      min: 1 # min number of runners - at least 1 recommended
      max: 5 # max number of runners
      scaleUpThreshold: 0.5 # the percentage of busy runners at which the desired runner count is re-evaluated to scale up
      scaleDownThreshold: 0.2 # the percentage of busy runners at which the desired runner count is re-evaluated to scale down
      scaleUpMultiplier: 1.5 # scaleUpMultiplier > 1
      scaleDownMultiplier: 0.5 # 0 < scaleDownMultiplier < 1
  - name: "Group2" # Name of the runner displayed in the Bitbucket Runner UI.
    workspace: "my-workspace"
    # repository: "my guid" # TODO - Optional. Replace with the repo guid - if specified, repository runners will be created.
    labels: # runners will be created with the following labels
      - "self.hosted"
      - "group2"
    namespace: "my-namespace" # namespace where runners are going to be created
    strategy: "percentageRunnersIdle" # more strategies will be supported in the future
    parameters:
      min: 1 # min number of runners - at least 1 recommended
      max: 3 # max number of runners
      scaleUpThreshold: 0.5 # the percentage of busy runners at which the desired runner count is re-evaluated to scale up
      scaleDownThreshold: 0.2 # the percentage of busy runners at which the desired runner count is re-evaluated to scale down
      scaleUpMultiplier: 1.5 # scaleUpMultiplier > 1
      scaleDownMultiplier: 0.5 # 0 < scaleDownMultiplier < 1
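For context, here is my understanding of how the percentageRunnersIdle parameters interact, as a simplified sketch (not the autoscaler's actual code; the clamping and rounding are assumptions):

```python
import math

def desired_runners(busy, total, params):
    """Re-evaluate the desired runner count (simplified sketch).

    If the busy ratio reaches scaleUpThreshold, multiply the current
    count by scaleUpMultiplier; if it drops to scaleDownThreshold or
    below, multiply by scaleDownMultiplier. Clamp to [min, max].
    """
    busy_ratio = busy / total if total else 0.0
    desired = total
    if busy_ratio >= params["scaleUpThreshold"]:
        desired = math.ceil(total * params["scaleUpMultiplier"])
    elif busy_ratio <= params["scaleDownThreshold"]:
        desired = math.floor(total * params["scaleDownMultiplier"])
    return max(params["min"], min(params["max"], desired))

group1 = {"min": 1, "max": 5, "scaleUpThreshold": 0.5,
          "scaleDownThreshold": 0.2, "scaleUpMultiplier": 1.5,
          "scaleDownMultiplier": 0.5}

print(desired_runners(busy=2, total=3, params=group1))  # 2/3 busy -> ceil(4.5) = 5
print(desired_runners(busy=0, total=4, params=group1))  # idle -> floor(2.0) = 2
```

With these Group1 values, 2 busy runners out of 3 already pushes the desired count to the max of 5, which is why the pending-runner miscount above can overshoot so quickly.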
Here's a screenshot of the runner statuses in Bitbucket Cloud:
Another extreme case: I made a mistake when mounting secrets to the runners, so no runner could start correctly. A large number of runners were triggered in both Bitbucket Cloud and the Kubernetes cluster (same group settings as in the test above); see how many pages of runners appear in Bitbucket Cloud. I'll need a script to clean them up quickly.
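As a starting point for such a cleanup script, here is a sketch of the selection logic only. The runner payload shape (`uuid`, nested `state.status`) is an assumption based on what the runner UI shows; the actual Bitbucket runners API is internal and may differ, so verify the fields before wiring this to any destructive delete calls:

```python
# Hypothetical sketch: pick out runners stuck in the UNREGISTERED state
# so they can be deleted from Bitbucket Cloud and the cluster.
# The payload field names are assumptions, not a documented API shape.

def select_stale_runners(runners):
    """Return the UUIDs of runners whose status is UNREGISTERED."""
    return [r["uuid"] for r in runners
            if r.get("state", {}).get("status") == "UNREGISTERED"]

runners = [
    {"uuid": "{aaa}", "state": {"status": "ONLINE"}},
    {"uuid": "{bbb}", "state": {"status": "UNREGISTERED"}},
    {"uuid": "{ccc}", "state": {"status": "UNREGISTERED"}},
]
print(select_stale_runners(runners))  # ['{bbb}', '{ccc}']
```

Each returned UUID would then need two deletions: one against the workspace runners API in Bitbucket Cloud and one `kubectl delete` for the matching pod/job in the cluster.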
@Aaron_Luo hi.
Starting from version 1.8.0 we implemented a cleaner that automatically deletes orphaned jobs and secrets. You can find more information about it in the Cleaner section of the README.
If you have any questions about the cleaner implementation, feel free to ask.
Regards, Igor