I am attempting to run a Bitbucket Pipeline Runner as an AWS ECS Task hosted on ECS-EC2 and bound to an IAM Role, with the goal of eliminating the storage of AWS credentials of any kind within Bitbucket. I feel I am very close to a solution but am hung up on what appears to be the inability to pass a run-time variable from the outer Pipeline Runner container to the internal container executing a "Step".
The fundamental problem is that the "inner" container executing a Step does not run under the auspices of the IAM role the way the outer container orchestrating the Pipeline does. A solution could take shape, however, by having the preamble of a Step's Script retrieve such credentials itself: curl the standard ECS credentials endpoint using the "task credential ID", which is exposed as an environment variable within the outer ECS Task.
I have POC'd this by doing three things...
Firstly, by making the Entrypoint and Command of the ECS Task be respectively ["bash", "-c"] and ["echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && /opt/atlassian/pipelines/runner/entrypoint.sh"]
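For concreteness, here is a minimal sketch of registering such a task definition with the AWS CLI; the family name, task role ARN, memory value, and awslogs settings are placeholders (and the environment variables the runner needs to register with Bitbucket are omitted), so treat it as an illustration rather than a drop-in definition:
# Sketch only: names/ARNs below are placeholders, and the runner's own
# registration environment variables are not shown.
# The quoted heredoc keeps $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI literal so
# bash expands it inside the container at run time, not here.
cat > runner-task-def.json <<'EOF'
{
  "family": "bitbucket-runner",
  "taskRoleArn": "arn:aws:iam::123456789012:role/runner-task-role",
  "containerDefinitions": [
    {
      "name": "runner",
      "image": "docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1",
      "memory": 1024,
      "entryPoint": ["bash", "-c"],
      "command": ["echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && /opt/atlassian/pipelines/runner/entrypoint.sh"],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/bitbucket-runner",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "runner"
        }
      }
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://runner-task-def.json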
Secondly, by capturing the echo'd value of AWS_CONTAINER_CREDENTIALS_RELATIVE_URI from CloudWatch Logs and plugging it into my Bitbucket Pipeline as an environment variable...
And lastly by retrieving short-lived credentials for the bound IAM Role in the following way inside a Step's Script...
- CREDS=$(curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
- export AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r .AccessKeyId)
- export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r .SecretAccessKey)
- export AWS_SESSION_TOKEN=$(echo $CREDS | jq -r .Token)
... after which an invocation of "aws s3 ls" validates the successful retrieval of credentials.
So I have demonstrated that it is possible for the inner container to inherit the credentialing of the IAM Role bound to the outer container if only I can somehow inject the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable present in the outer container into the inner container.
Sadly the entrypoint.sh included in the docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1 container image does not appear to have any invocation option that facilitates such pass-through. Furthermore, the way the pipeline runner configures the docker daemon's Access Authorization Plugin appears to preclude such solutions as either running a web server (netcat) within the outer container that the inner container can query or having the outer process inject a config/secret/whatever into the docker daemon that the inner container can then read.
Is there _any_ way that I can contrive to pass just a single string value from the outer container to the inner container? That is all I need to accomplish my goal -- having pipeline steps run with IAM Role credentials instead of needing the static credentials of an IAM User.
I note that others are generally eager for an environment variable pass-through feature...
... and that a ticket has been opened for this feature request...
https://jira.atlassian.com/browse/BCLOUD-21523
I furthermore note that others are specifically interested in solving for my use case of leveraging IAM Role derived credentials and seemingly suffering the lack of a solution...
Can anyone offer advice/updates on whether the environment variable pass-through feature may soon be implemented, whether there is any way at all to inject a single string value into the inner Step container that I could use for the credential retrieval I demoed above, or whether there is some other way altogether to crack the nut of getting the inner container access to IAM Role credentials?
I have to imagine that solving this problem would be extremely valuable for a variety of Atlassian's customers.
For reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
@Andrew Gibbs We faced the same situation; one approach is to create a privileged ECS cluster on EC2 instances where the Bitbucket runners inherit the EC2 instance's role via the IMDSv2 metadata endpoint.
It is important to note that, security-wise, this approach means any Bitbucket runner on the instance will inherit the role. Because of this, a separate runner ECS stack should be allocated, with the EC2 IAM permissions granted only to the level these runners require.
To set up, deploy an ECS cluster and grant the auto scaling role the IAM permissions you wish the runner to inherit. In the examples below, an auto scaling role named ecs-asg-role is used.
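For illustration, granting those permissions can be as simple as attaching a policy to that role; the account ID and policy name here are placeholders:
# Illustrative only: attach whatever permissions the runners should inherit
# to the auto scaling role referenced in the examples below.
aws iam attach-role-policy \
  --role-name ecs-asg-role \
  --policy-arn arn:aws:iam::123456789012:policy/runner-permissions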
The following CDK snippet adds the user data commands that install Apache, write the proxy configuration, and restart the service:
multipartUserData.addCommands(
'yum install -y httpd.x86_64',
'echo IwojIGltZHMtcHJveHk6IHRoaXMgY29uZmlndXJhdGlvbiBhbGxvd3MgZm9yIGEgY29udGFpbmVyIGluIHRoZSBkb2NrZXIKIyBuZXR3b3JrIHRvIHF1ZXJ5IHRoZSBkb2NrZXIgbmV0d29yayBob3N0IGFkZHJlc3MgMTcyLjE3LjAuMSB3aGljaCB3aWxsIHByb3h5CiMgdGhyb3VnaCB0byB0aGUgYXdzIGltZHMgbWV0YWRhdGEgc2VydmljZSBmb3IgdGhlIGNvbnRhaW5lciB0byByZXRyaWV2ZQojIGNyZWRlbnRpYWxzIHRvIGluaGVyaXQgdGhlIGVjMiByb2xlLgojIAo8TG9jYXRpb24gLz4KICBEZW55IGZyb20gYWxsIAogIEFsbG93IGZyb20gMTcyLjE3CiAgIyBSZWRpcmVjdCBhbGwgcmVxdWVzdHMgdG8gdGhlIEFXUyBtZXRhZGF0YSBlbmRwb2ludAogIFByb3h5QWRkSGVhZGVycyBPZmYKICBQcm94eVBhc3MgaHR0cDovLzE2OS4yNTQuMTY5LjI1NC8KICBQcm94eVBhc3NSZXZlcnNlIGh0dHA6Ly8xNjkuMjU0LjE2OS4yNTQvCjwvTG9jYXRpb24+Cg== | base64 -d > /etc/httpd/conf.d/imds-proxy.conf',
'systemctl restart httpd',
);
Base64-decoded, the imds-proxy.conf contains the following:
#
# imds-proxy: this configuration allows for a container in the docker
# network to query the docker network host address 172.17.0.1 which will proxy
# through to the aws imds metadata service for the container to retrieve
# credentials to inherit the ec2 role.
#
<Location />
  Deny from all
  Allow from 172.17
  # Redirect all requests to the AWS metadata endpoint
  ProxyAddHeaders Off
  ProxyPass http://169.254.169.254/
  ProxyPassReverse http://169.254.169.254/
</Location>
Once the ECS cluster is online, containers can query this endpoint (via the Docker host address 172.17.0.1) to retrieve the credentials granted to the instance.
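As a quick sanity check (assuming the default docker0 bridge, hence the 172.17.0.1 gateway address), the proxy can be exercised from any container on the host:
# Request an IMDSv2 session token through the proxy, then list the role(s)
# exposed by the instance metadata endpoint.
TOKEN=$(curl -s -X PUT "http://172.17.0.1/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://172.17.0.1/latest/meta-data/iam/security-credentials/"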
Within the bitbucket-pipelines.yml create a new step create-aws-credentials:
- step: &create-aws-credentials
    name: create aws credentials
    runs-on:
      - 'self.hosted'
    script:
      - apt-get update && apt-get install -y curl jq
      # Retrieve the ec2 credentials from the ecs host which is running an imds proxy to 169.254.169.254
      - export TOKEN=$(curl -X PUT http://172.17.0.1/latest/api/token -H X-aws-ec2-metadata-token-ttl-seconds:21600)
      - export OUTPUT=$(curl -H "X-aws-ec2-metadata-token:$TOKEN" http://172.17.0.1/latest/meta-data/iam/security-credentials/ecs-asg-role)
      - echo "export AWS_ACCESS_KEY_ID=$(echo $OUTPUT | jq -r '.AccessKeyId')" >> aws-credentials.sh
      - echo "export AWS_SECRET_ACCESS_KEY=$(echo $OUTPUT | jq -r '.SecretAccessKey')" >> aws-credentials.sh
      - echo "export AWS_SESSION_TOKEN=$(echo $OUTPUT | jq -r '.Token')" >> aws-credentials.sh
    artifacts:
      download: false
      paths:
        - aws-credentials.sh
This step calls the Docker host endpoint (the EC2 instance within the ECS cluster), which proxies through to IMDSv2; it retrieves the credentials and writes them to an aws-credentials.sh file that is made available to later steps as an artifact.
In subsequent steps, add the following to the script:
script:
  # Set the AWS credentials
  - source aws-credentials.sh
This sets the AWS credentials in the shell, ready for calls to AWS with the EC2 role's IAM permissions.
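As a quick verification (assuming the step's image ships the AWS CLI), an identity call right after sourcing should report an assumed-role ARN for ecs-asg-role:
# Source the credentials written by the create-aws-credentials step, then
# confirm which identity the AWS CLI is now using.
source aws-credentials.sh
aws sts get-caller-identity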
Hi Andrew,
As you have figured out, it is not possible at the moment for the runner to access the host's environment variables. I don't have an update for an ETA, but if you haven't done so already, I would recommend upvoting the feature request https://jira.atlassian.com/browse/BCLOUD-21523 (by selecting the Vote for this issue link) as the number of votes helps the product managers to better understand the demand for new features. You can also add yourself as a watcher (by selecting the Start watching this issue link) if you'd like to get notified via email on updates.
Can you store the value of AWS_CONTAINER_CREDENTIALS_RELATIVE_URI as a repository or workspace variable (which are available in Pipelines runners builds)? Or is there a reason why this is not a solution for you?
Considering that the runners cannot access the host's files or variables, the two workarounds I can think of would be:
- using a Repository or Workspace variable
- storing this value in a file on the host system and using sftp to access that file
Kind regards,
Theodora
Hi Theodora,
Voting up the referenced issue was one of the first things I did and I note that it now has several votes on it. Could you please provide an update on the likelihood we may see this implemented soon?
FWIW -- Neither the "repository or workspace variable" nor the "sftp" approaches are workable here and I will elaborate on why.
The execution environment lifecycle involves...
1) an EC2 running as part of an ECS cluster that...
2) spins up an instance of an ECS Task whose underpinning container image is the stock Bitbucket Runner container and whose entrypoint involves running the standard /opt/atlassian/pipelines/runner/entrypoint.sh that accepts build jobs who are in turn run under...
3) yet another internal container (whose image is specified by the pipeline's Steps) that, if it knows the value of an environment variable available in and unique to (2), is able to obtain credentials corresponding to the IAM Role bound to (2)
We can't use either of your recommendations because:
A) this environment variable is unique to the ECS Task instance, which means it is not only volatile but also specific to an _instance_, of which there may be a great many in a cluster, ruling out a single, static variable
B) given the way Atlassian built the code running in (2), the container running in (3) is too locked down to access network resources running on (2); furthermore, the EC2 instance of (1) is not a great place to host such an SFTP server, because there may in turn be multiple instances of it managed by an auto-scaling group, and an instance of (3) would have no way to find it apart from perhaps slogging through a chaotic many-to-many-to-many set of relationships with an ever-growing collection of dead entries
As an incredibly kludgy solution to create something workable, I created an S3 bucket whose only access policy constraint is attachment to the VPC where our runners are executing, and then have every instance of (2) write its copy of the environment variable there, which creates a junk drawer full of possible values that could be used. Of course, although this demonstrates a possible automated solution to the problem, the solution grows increasingly intractable the more tasks run, I'm not yet sure it is particularly secure, and I furthermore have not figured out a way to _remove_ the environment variable value from said "junk drawer" when the runner terminates. In essence it is barely functionally better than the unworkable "sftp" solution.
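For completeness, the write side of that kludge is roughly the following, run from the outer container before entrypoint.sh starts; the bucket name is made up, and it assumes the AWS CLI is available in (or baked into) the runner image:
# Stream the task-unique relative URI into the VPC-restricted bucket, keyed by
# container hostname -- which is exactly why the "junk drawer" accumulates
# entries that never get cleaned up.
echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" | \
  aws s3 cp - "s3://runner-credential-uris/$(hostname)"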
All because we don't have a way to tell the "entrypoint.sh" script simply to always pass the current value of an environment variable to the internal-most container...
Which feels like it would be a ~10 line code patch...
But as far as I can tell the source code is not public otherwise I might have already written such a patch...
How do we break this impasse? This seemingly very tiny issue is having huge implications that potentially threaten the long term viability of BitBucket usage at the company on whose behalf I have submitted this issue. And given the ticket traffic I have seen from other folks on this site I have to imagine that the same is true at many of your customers who are trying to harden their security posture but bumping up against this limitation.
Hi Andrew,
Thank you for taking the time to provide such detailed feedback. I am not aware of any other workaround, but I have passed along your use case and feedback to our developers and product managers; they will check if there is another way to achieve what you want. I will let you know as soon as I have an update.
Kind regards,
Theodora