Hi there,
Expected result: pipelines assume the IAM role associated through EKS Pod Identity and can access authenticated AWS resources, for example to publish a Docker image to an ECR repository.
I'm configuring self-hosted bitbucket runners on AWS EKS with autoscaling using the following guide:
https://support.atlassian.com/bitbucket-cloud/docs/autoscaler-for-runners-on-kubernetes/
The runner pods were created successfully.
I was able to configure a Kubernetes service account and create its pod identity association with an IAM role whose policies allow the runners to access other AWS resources while executing pipelines, for example permission to push an image to an ECR repository.
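For reference, the association was created along these lines (cluster, namespace, service account, and role names below are placeholders, not my actual values):

```shell
# Create the Kubernetes service account used by the runner pods
kubectl create serviceaccount runner-sa -n bitbucket-runners

# Associate it with an IAM role via EKS Pod Identity
# (requires the eks-pod-identity-agent add-on on the cluster)
aws eks create-pod-identity-association \
  --cluster-name my-eks-cluster \
  --namespace bitbucket-runners \
  --service-account runner-sa \
  --role-arn arn:aws:iam::123456789012:role/bitbucket-runner-role
```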
The runner-<uuid> pod has two containers running: the runner itself and the docker-in-docker (dind) sidecar.
I can confirm that the service account credentials are present in both containers: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token exists and contains the token.
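For context on why this matters: EKS Pod Identity works by injecting two environment variables into containers of an associated service account, and the AWS SDKs read them to fetch credentials from the node-local agent. A minimal sketch of the check I'm doing (the variable names are the standard EKS ones; the helper function itself is just illustrative):

```python
import os

# EKS Pod Identity injects these two env vars into containers whose
# service account has a pod identity association; the AWS SDKs use them
# to fetch temporary credentials from the pod identity agent.
POD_IDENTITY_VARS = (
    "AWS_CONTAINER_CREDENTIALS_FULL_URI",
    "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE",
)

def pod_identity_available(env=os.environ):
    """Return True if the EKS Pod Identity env vars are present."""
    return all(var in env for var in POD_IDENTITY_VARS)
```

Both runner containers pass this check; the build containers spawned by dind do not, which is the core of the problem below.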
Runners are correctly registered with the Bitbucket account, and pipelines trigger as expected.
The main issue is that pipelines cannot use the Pod Identity authentication: the build container that runs the pipeline is created on the fly by the dind sidecar, and there is no mechanism to mount or pass along the pod identity token so that this on-the-fly container can read it from the runner or the dind container.
Specifically, there's no docker_in_docker_args env var (a horrible idea, admittedly) that I could set in values.yaml to forward anything to the spawned container.
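If it were possible to influence the docker run invocation, the fix would amount to forwarding the token mount and the two Pod Identity env vars into the build container, something like the following (the mount path and variable names are the standard EKS ones; the invocation itself is hypothetical, since no such hook exists today):

```shell
docker run \
  -v /var/run/secrets/pods.eks.amazonaws.com/serviceaccount:/var/run/secrets/pods.eks.amazonaws.com/serviceaccount:ro \
  -e AWS_CONTAINER_CREDENTIALS_FULL_URI \
  -e AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE \
  <build-image>
```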