I am looking for advice on best practices for automating the movement of code through the pipeline: into storage, then out to the different environments.
I currently have a pipeline set up to sync the dev, stage, and prod branches to AWS S3, each into its respective bucket, but I have run into several kinks along the way.
My goal is to go from Bitbucket -> S3 (versioned artifacts) -> EC2 server instance.
Problems I am encountering:
Creating a zip fails in the different build images (even after running apt-get update and apt-get install), so I am just sending all of the files across with sync. I am 99% sure this is not the best practice here:
- aws s3 --region "us-west-2" rm s3://artefacts/development/
- aws s3 sync --delete . s3://artefacts/development/
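For what it's worth, a common pattern is to build one versioned artifact per commit and upload that single file instead of syncing the working tree. Below is a minimal sketch, not a drop-in config: it assumes a yum-based image such as amazon/aws-cli (which ships with the AWS CLI, so no apt-get needed); the bucket name and region are carried over from the commands above, and the artifact naming is my own.

```yaml
# Hypothetical bitbucket-pipelines.yml fragment -- a sketch under the assumptions above.
image: amazon/aws-cli          # Amazon Linux base, so yum rather than apt-get
pipelines:
  branches:
    development:
      - step:
          name: Package and upload versioned artifact
          script:
            - yum install -y zip                                   # add zip if the image lacks it
            - zip -r "app-${BITBUCKET_COMMIT}.zip" . -x '.git/*'   # one immutable zip per commit
            - aws s3 cp "app-${BITBUCKET_COMMIT}.zip" "s3://artefacts/development/app-${BITBUCKET_COMMIT}.zip" --region us-west-2
```

Uploading one immutable zip per commit also gives you the "versioned" step of the flow for free: each build is addressable by its commit hash, and a deploy becomes copying a single known object down to the server.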
While this gets everything to the S3 bucket, in my own janky way, I am struggling with getting the files onto the server.
Do I create a step in the pipeline for this? If so, how does it securely talk to the server and kick off the copy? This interaction seems logical to me, but I am not sure how to go about it. The AWS CLI is installed on the servers (Windows).
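One approach that avoids opening inbound ports or storing server passwords in the pipeline is AWS Systems Manager (SSM) Run Command: the pipeline asks SSM to run a script, and the agent already on the instance pulls the files down itself. A sketch of such a step, assuming the Windows instances run the SSM agent, have an instance profile that allows reading the bucket, and are tagged Environment=development (the tag, the paths, and the zip name here are my own illustrations, not from your setup):

```yaml
# Hypothetical deploy step -- requires the pipeline's IAM user to have ssm:SendCommand.
- step:
    name: Deploy to EC2 via SSM
    script:
      - >
        aws ssm send-command
        --region us-west-2
        --document-name "AWS-RunPowerShellScript"
        --targets "Key=tag:Environment,Values=development"
        --parameters '{"commands":["aws s3 cp s3://artefacts/development/app.zip C:\\deploy\\app.zip","Expand-Archive -Force C:\\deploy\\app.zip C:\\deploy\\site"]}'
```

With this shape, the instance never exposes credentials or ports to Bitbucket; the pipeline only needs permission to call SSM, and the copy itself runs under the instance's own role.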
As far as the pipeline branches are concerned, I am using one IAM role and user for all three branches/buckets. I feel that I should be using multiple IAM roles, one per branch, plus additional roles scoped to each bucket, so that if someone with elevated privileges changes the YAML file (or the branches are modified), the deploy breaks instead of writing to the wrong bucket. This one troubles me, since I do not see branch-specific credentials under "Environment Variables": I just added the single set of variables and it works on all three. Even if I do create multiple sets, how does one distinguish them per step, given that I am not referencing them in the script at all right now?
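On the "works without appearing in the script" part: the AWS CLI automatically reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION from the environment, which is why one set of repository variables covers every step. To scope credentials per branch, Bitbucket Pipelines lets you mark a step with a deployment environment and define variables per environment (Repository settings → Deployments), so each branch's step picks up a different IAM user. A sketch, assuming you create environments named Development and Production and give each one its own AWS_* variables (buckets carried over from above):

```yaml
# Hypothetical bitbucket-pipelines.yml fragment showing per-environment credentials.
pipelines:
  branches:
    development:
      - step:
          deployment: Development   # step sees the Development-scoped AWS_* variables
          script:
            - aws s3 sync --delete . s3://artefacts/development/
    production:
      - step:
          deployment: Production    # a separate IAM user with write access to prod only
          script:
            - aws s3 sync --delete . s3://artefacts/production/
```

If each environment's IAM user can only write to its own bucket, then editing the YAML to point the dev step at the prod bucket fails with AccessDenied, which is exactly the "break" you are after.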
I am brand new to this, so I am sure I am overlooking the basics, but I would love help in finding the best practices for this particular scenario.
Thanks in advance for any help!