Hi,
When you use Docker-in-Docker (DinD) in a Kubernetes Pod:
The Pod (runner) starts a container (dind) that runs dockerd.
That dockerd daemon then starts child containers from inside the Pod.
These child containers are not managed by Kubernetes or containerd.
As a result, they:
Do not appear in kubectl
Are not visible to the scheduler or metrics server
Bypass CPU/memory limits set by Kubernetes
Are not throttled or OOM-killed properly
This is especially problematic with cgroup v2, as Kubernetes (since v1.29) mounts /sys/fs/cgroup
as read-only, preventing DinD from delegating cgroups.
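For context, the runner Pod topology is roughly the following. This is a simplified sketch, not our exact manifest; the image references and the plain-TCP socket wiring are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: bitbucket-runner
  namespace: "bitbucket-runners"
spec:
  containers:
    - name: runner                        # pipelines runner; talks to the sibling dockerd
      image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
      env:
        - name: DOCKER_HOST
          value: "tcp://localhost:2375"   # containers in a Pod share localhost
    - name: dind                          # runs dockerd; the build containers are its children
      image: docker:dind
      securityContext:
        privileged: true                  # required for dockerd inside a container
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""                       # disable TLS so dockerd listens on plain 2375 (sketch only)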
Public issue:
DinD cgroupv2 problem inside K8s · Issue #45378 · moby/moby
We can set memory and CPU limits on the runner:
namespace: "bitbucket-runners"
strategy: "percentageRunnersIdle"
parameters:
  min: 1
  max: 25
  scale_up_threshold: 0.5
  scale_down_threshold: 0.2
  scale_up_multiplier: 1.5
  scale_down_multiplier: 0.5
resources:
  requests:
    memory: "2Gi"
    cpu: "0.2"
  limits:
    memory: "5Gi"
    cpu: "0.5"
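These requests and limits bind only the runner Pod itself, not the containers that dockerd starts. Even something like the following sketch (assuming the Pod limit should cover one size: 4x build of 16 GiB) would only work if the dockerd children were actually charged to the Pod's cgroup, which is exactly what is not happening:

resources:
  requests:
    memory: "16Gi"   # assumption: cover one size: 4x build container (16 GiB)
    cpu: "2"
  limits:
    memory: "20Gi"   # headroom for dockerd, the runner, and the auth proxy
    cpu: "4"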
But the build container starts with size: 4x, and it consumes memory on the Kubernetes worker node outside the resource controls of Kubernetes or containerd (note the 16 GiB limit on the _build container below, set by dockerd for size: 4x, versus the 5 Gi Pod limit above):
CONTAINER ID   NAME                                                                                          CPU %    MEM USAGE / LIMIT   MEM %    NET I/O          BLOCK I/O        PIDS
06c8a0b10c12   4ce5ef69-752e-5964-ab01-bc3a1f26c76b_41441486-e6b8-4ec3-bbe6-368f6af8a087_system_auth-proxy   1.74%    14.05MiB / 1GiB     1.37%    811MB / 9.13MB   0B / 16.4kB      2
ede4c885239c   4ce5ef69-752e-5964-ab01-bc3a1f26c76b_41441486-e6b8-4ec3-bbe6-368f6af8a087_pause               0.00%    304KiB / 30.64GiB   0.00%    811MB / 9.13MB   0B / 0B          1
a645810f4539   4ce5ef69-752e-5964-ab01-bc3a1f26c76b_41441486-e6b8-4ec3-bbe6-368f6af8a087_build               11.54%   10.47GiB / 16GiB    65.43%   811MB / 9.13MB   109MB / 4.14GB   459
We need to launch several memory-heavy steps in parallel, up to a maximum of 16. At size: 4x, that could mean up to 16 × 16 GiB = 256 GiB of memory on a single node, entirely outside Kubernetes' control.
Is there any solution for this?
Thanks.