We're running self-hosted runners in our EKS clusters. When a pipeline job finishes successfully, controller-cleaner marks the unneeded pods for termination as expected, and the runner container shuts down properly:
[2025-12-15 13:06:54,451] {"traceId":"694000c3ac4db07c4bc8edace67f1095","parentId":"4bc8edace67f1095","id":"197ff9fa27bc06f2","kind":"CLIENT","
[2025-12-15 13:06:54,454] Runner complete.
[2025-12-15 13:06:54,455] Shutdown completed
stream closed: EOF for infra-bitbucket-runners/runner-2ec9bed7-ff5c-5d15-b987-db96cf3129a8-hb4qq (runner)
But the DinD container keeps lingering around, so the runner pods never get killed and keep piling up in our clusters.
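For context, the relevant shape of the pod spec is roughly this (paraphrased from the stock template linked below; container names and images are approximate, not verbatim):

```yaml
# Approximate shape of the runner job pod spec (paraphrased, not the exact template)
spec:
  template:
    spec:
      containers:
        - name: runner              # Bitbucket Pipelines runner; exits cleanly when the job finishes
          image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
        - name: docker-in-docker    # DinD sidecar; this is the container that keeps running
          image: docker:dind
          securityContext:
            privileged: true        # DinD needs privileged mode
      restartPolicy: OnFailure
```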
I've tried adding a lifecycle rule to the job template:
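(Reconstructing this from memory, it was a preStop hook on the DinD container; the exact command may have differed:)

```yaml
# preStop hook on the docker-in-docker container (approximate reconstruction)
lifecycle:
  preStop:
    exec:
      # try to stop dockerd gracefully when the container is told to terminate
      command: ["/bin/sh", "-c", "kill -TERM $(pidof dockerd)"]
```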
But it didn't help at all 😕
Does anyone else have this problem?
I'm using the stock project template from here: https://bitbucket.org/bitbucketpipelines/runners-autoscaler/src/master/config/runners-autoscaler-cm-job.template.yaml
Any help is appreciated