CI/CD Automation with Pipelines

I am looking for some advice on best practices for automating the movement of code through the build process, into storage, and out to the different environments.

I currently have a pipeline set up to sync the dev, stage, and prod branches to AWS S3, into their respective buckets, but I have run into several kinks along the way.

My goal is to go from Bitbucket -> S3 -> versioned artifact -> EC2 server instance.

Problems I am encountering:

Creating a zip fails in the different build images, even after running apt-get update and install, so for now I am just sending all of the files across with sync. I am 99% sure this is not the best practice here.

========

development:
  - step:
      script:
        # --recursive is needed to delete everything under the prefix
        - aws s3 rm s3://artefacts/development/ --recursive --region "us-west-2"
        - aws s3 sync --delete . s3://artefacts/development/

========
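
For reference, this is roughly the zip-based step I was trying to get working before falling back to sync. Just a sketch: it assumes a Debian/Ubuntu-based build image where apt-get is available, and the artifact name is something I made up.

========

development:
  - step:
      script:
        # zip is not preinstalled in most build images
        - apt-get update && apt-get install -y zip
        # Bundle the workspace into one artifact named after the commit
        - zip -r "app-${BITBUCKET_COMMIT}.zip" . -x ".git/*"
        - aws s3 cp "app-${BITBUCKET_COMMIT}.zip" s3://artefacts/development/

========

A single zip per commit would presumably also give me the "versioned artifact" part of the flow, since each build would become one immutable object in S3.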

While the sync approach gets everything to the S3 bucket, in my own janky way, I am struggling with getting the files from there to the server.

Do I create a step in the pipeline for this? If so, how does it securely talk to the server and kick off the copy? This interaction seems logical to me, but I am not sure how to go about it. The AWS CLI is installed on the servers (Windows).
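
One pattern I have come across (not sure whether it is the recommended one) is to have the final pipeline step ask the instance to pull the artifact itself via AWS Systems Manager, so Pipelines never needs inbound access to the server. A rough sketch, assuming the SSM agent is running on the Windows instance and the instance ID is stored in a repository variable I named INSTANCE_ID; the file paths are placeholders:

========

- step:
    script:
      # Ask the instance to fetch the artifact from S3 itself via SSM
      - >
        aws ssm send-command
        --region "us-west-2"
        --instance-ids "$INSTANCE_ID"
        --document-name "AWS-RunPowerShellScript"
        --parameters commands='aws s3 cp s3://artefacts/development/app.zip C:\deploy\app.zip'

========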

As far as the pipeline branches are concerned, I am using one IAM role and user for all three branches/buckets. I feel that I should be using multiple IAM roles, each specific to a branch, plus additional roles specific to each bucket, so that if someone with elevated privileges changes the YAML file, the deployment will break even if the branches are modified. (This one troubles me, since I do not see branch-specific credentials anywhere in the "Environment Variables" settings. I just added the single set of variables and it works on all three branches. So even if I do create multiple roles, how does one distinguish credentials per step, given that I am not referencing them in the script at all right now?)
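
From what I can tell in the docs, Bitbucket's deployment environments might be the mechanism for this: a step can declare a deployment, and variables defined for that environment (for example per-environment AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, which the AWS CLI picks up automatically, which would also explain why nothing appears in my script today) are only exposed to that step. A sketch of how I understand it, assuming the environments are configured under the repository's Deployments settings:

========

pipelines:
  branches:
    development:
      - step:
          deployment: test         # gets the Test environment's AWS variables
          script:
            - aws s3 sync --delete . s3://artefacts/development/
    stage:
      - step:
          deployment: staging      # Staging environment's variables
          script:
            - aws s3 sync --delete . s3://artefacts/stage/
    prod:
      - step:
          deployment: production   # Production environment's variables
          script:
            - aws s3 sync --delete . s3://artefacts/production/

========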

I am brand new to this, so I am sure I am overlooking the basics, but I would love help in finding the best practices for this particular scenario.

Thanks in advance for any help!
