Use IAM role credentials from a private runner executing inside an AWS ECS-EC2 hosted task

I am attempting to run a Bitbucket Pipeline Runner as an AWS ECS Task hosted within ECS-EC2 and bound to an IAM Role, for the purpose of eliminating the storage of AWS credentials of any kind within Bitbucket. I feel I am very close to a solution, but I am stuck on seemingly not being able to pass a run-time variable from the outer Pipeline Runner container to the inner container that executes a "Step".

The fundamental problem is that the "inner" container executing a Step is not running under the auspices of the IAM role the way the outer container orchestrating the Pipeline is. The basis of a solution, however, could take this shape: the preamble of a Step's Script retrieves such credentials by curl'ing a standard endpoint using the "task credential ID", which is exposed as an environment variable within the outer ECS Task.

I have POC'd this by doing three things...

Firstly, by making the Entrypoint and Command of the ECS Task be respectively ["bash", "-c"] and ["echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && /opt/atlassian/pipelines/runner/entrypoint.sh"]
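For reference, the relevant fragment of the ECS task definition would look roughly like the following (a sketch; the container name is an assumption, the image and entrypoint path are as described above):

```json
{
  "containerDefinitions": [
    {
      "name": "bitbucket-runner",
      "image": "docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1",
      "entryPoint": ["bash", "-c"],
      "command": ["echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && /opt/atlassian/pipelines/runner/entrypoint.sh"]
    }
  ]
}
```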

Secondly, by capturing the echo'd value of AWS_CONTAINER_CREDENTIALS_RELATIVE_URI from CloudWatch Logs and plugging it into my Bitbucket Pipeline as an environment variable...

And lastly by retrieving short-lived credentials for the bound IAM Role in the following way inside a Step's Script...

- CREDS=$(curl -s "169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}")

- export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)

- export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)

- export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Token)

... after which an invocation of "aws s3 ls" validates the successful retrieval of credentials.
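For context, here is roughly how that POC step sits in a bitbucket-pipelines.yml (a sketch: the step image is an assumption and must have curl and jq available, and AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is the value I copied out of CloudWatch Logs into a pipeline variable):

```yaml
pipelines:
  default:
    - step:
        name: Assume the task's IAM role
        runs-on:
          - self.hosted
          - linux
        image: amazon/aws-cli   # assumption: any image with curl + jq works
        script:
          - CREDS=$(curl -s "169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}")
          - export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
          - export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
          - export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Token)
          - aws s3 ls
```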

So I have demonstrated that it is possible for the inner container to inherit the credentialing of the IAM Role bound to the outer container if only I can somehow inject the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable present in the outer container into the inner container.

Sadly the entrypoint.sh included in the docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1 container image does not appear to have any invocation option that facilitates such pass-through. Furthermore, the way the pipeline runner configures the docker daemon's Access Authorization Plugin appears to preclude workarounds such as running a web server (netcat) within the outer container that the inner container could query, or having the outer process inject a config/secret/whatever into the docker daemon that the inner container could then read.

Is there _any_ way that I can contrive to pass just a single string value from the outer container to the inner container?  That is all I need to accomplish my goal -- having pipeline steps run with IAM Role credentials instead of needing the static credentials of an IAM User.

I note that others are generally eager for an environment variable pass-through feature...

https://community.atlassian.com/t5/Bitbucket-questions/Access-private-runner-host-environment-variables/qaq-p/1859496

... and that a ticket has been opened for this feature request...

https://jira.atlassian.com/browse/BCLOUD-21523

I furthermore note that others are specifically interested in solving for my use case of leveraging IAM Role derived credentials and seemingly suffering the lack of a solution...

https://community.atlassian.com/t5/Bitbucket-questions/Bitbucket-runner-does-not-work-with-AWS-EC2-Instance-Iam-Role/qaq-p/1822489

Can anyone offer advice or updates on any of the following: whether the environment variable pass-through facility may soon be implemented; whether there is any way at all to inject a single string value into the inner Step container that I could use for the credential retrieval I demo'd above; or whether there is some other way altogether to crack the nut of getting the inner container access to IAM Role credentials?

I have to imagine that solving this problem would be extremely valuable for a variety of Atlassian's customers.

For reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

 

1 answer

0 votes

Hi Andrew,

As you have figured out, it is not possible at the moment for the runner to access the host's environment variables. I don't have an update for an ETA, but if you haven't done so already, I would recommend upvoting the feature request https://jira.atlassian.com/browse/BCLOUD-21523 (by selecting the Vote for this issue link) as the number of votes helps the product managers to better understand the demand for new features. You can also add yourself as a watcher (by selecting the Start watching this issue link) if you'd like to get notified via email on updates.

Can you store the value of AWS_CONTAINER_CREDENTIALS_RELATIVE_URI as a repository or workspace variable (which are available in Pipelines runners builds)? Or is there a reason why this is not a solution for you?

Considering that the runners cannot access the host's files or variables, the two workarounds I can think of would be:
- using a Repository or Workspace variable
- storing this value in a file on the host system and using sftp to access that file

Kind regards,
Theodora

Hi Theodora,

Voting up the referenced issue was one of the first things I did and I note that it now has several votes on it.  Could you please provide an update on the likelihood we may see this implemented soon?

FWIW -- Neither the "repository or workspace variable" nor the "sftp" approaches are workable here and I will elaborate on why.

The execution environment lifecycle involves...

1) an EC2 running as part of an ECS cluster that...

2) spins up an instance of an ECS Task whose underpinning container image is the stock Bitbucket Runner container, and whose entrypoint runs the standard /opt/atlassian/pipelines/runner/entrypoint.sh, which accepts build jobs that are in turn run under...

3) yet another internal container (whose image is specified by pipeline Steps) that, if it knows the value of an environment variable available in and unique to (2), is able to obtain credentials corresponding to the IAM Role bound to (2)

We can't use either of your recommendations because:

A) this environment variable is unique to the ECS Task instance, which means it is not only volatile but also unique to an _instance_, of which there may be a great many in a cluster, ruling out a single and static variable

B) given the way that Atlassian built its code running in (2), the container running in (3) is too locked down to access network resources running on (2). Furthermore, the EC2 instance of (1) is not a great place to host such an SFTP server, because there may in turn be multiple instances of it managed by an auto-scaling group, and thus an instance of (3) would have no way to find it, apart from perhaps slogging through a chaotic many-to-many-to-many set of relationships with an ever-growing collection of dead entries

As an incredibly kludgy way to create something workable, I created an S3 bucket whose only access policy constraint is attachment to the VPC where our runners are executing, and I have every instance of (2) write its copy of the environment variable there. This creates a junk drawer full of possible values that could be used. Although this demonstrates a possible automated solution to the problem, it grows increasingly intractable the more tasks run; I am not yet sure it is particularly secure; and I have not figured out a way to _remove_ an environment variable value from said "junk drawer" when its runner terminates. In essence it is barely functionally better than the unworkable "sftp" solution.
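To make the junk-drawer write at least collision-free, each task can derive a unique S3 key from its own task ARN. This is a hypothetical sketch: the ARN value below is an illustrative sample (in the real task it would come from the ECS task metadata endpoint), and the bucket name is an assumption.

```shell
# Derive a per-task S3 key from the ECS task ARN (sample value for illustration)
TASK_ARN="arn:aws:ecs:us-east-1:123456789012:task/runner-cluster/0123abcd"
TASK_ID="${TASK_ARN##*/}"          # last path segment of the ARN -> task ID
KEY="runner-creds/${TASK_ID}"      # one object per task instance
echo "$KEY"

# In the real outer container (2), the write would then be something like:
# echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" | aws s3 cp - "s3://my-runner-bucket/${KEY}"
```

This still leaves the cleanup-on-termination problem described above unsolved; an S3 lifecycle expiration rule on the prefix might at least bound how long stale entries linger.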

All because we don't have a way to tell the "entrypoint.sh" script simply to always pass the current value of an environment variable to the internal-most container...

Which feels like it would be a ~10 line code patch...

But as far as I can tell the source code is not public otherwise I might have already written such a patch...

How do we break this impasse? This seemingly very tiny issue is having huge implications that potentially threaten the long-term viability of Bitbucket usage at the company on whose behalf I have submitted this issue. And given the ticket traffic I have seen from other folks on this site, I have to imagine the same is true at many of your customers who are trying to harden their security posture but bumping up against this limitation.

Hi Andrew,

Thank you for taking the time to provide such detailed feedback. I am not aware of any other workaround, but I have passed along your use case and feedback to our developers and product managers; they will check if there is another way to achieve what you want. I will let you know as soon as I have an update.

Kind regards,
Theodora
