Hello, I have configured a Bitbucket pipeline to install the AWS CLI, log in to ECR, build an image, and push it to ECR. This pipeline will be almost identical for another 20 or so repositories, so it would be nice to refactor all those steps into our own container. What I'm not sure about is whether the docker image build step is possible. Basically we would have:
```yaml
image:
  name: private-image:tag
  # - login stuff
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - VAR_A=...
          - run script from private-image
```
and the script in private-image would do everything:
- login to ECR
- build image
- push image to ECR
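A rough sketch of what such a baked-in script could look like. The script name, registry variables (`AWS_REGION`, `ECR_REGISTRY`, `IMAGE_NAME`), and the assumption that AWS credentials are available as pipeline variables are all placeholders to adapt, not a definitive implementation:

```shell
#!/usr/bin/env sh
# build-and-push.sh - sketch of a script baked into private-image.
# All variable names here are assumptions; only BITBUCKET_COMMIT is a
# variable Bitbucket Pipelines provides itself.
set -eu

# 1. Log in to ECR (AWS CLI v2 style)
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$ECR_REGISTRY"

# 2. Build the image from the cloned repository (the pipeline's working directory)
docker build -t "$ECR_REGISTRY/$IMAGE_NAME:$BITBUCKET_COMMIT" .

# 3. Push it to ECR
docker push "$ECR_REGISTRY/$IMAGE_NAME:$BITBUCKET_COMMIT"
```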
It is not clear to me, based on the documentation, whether the script is run inside private-image or in yet another separate image (I saw docs saying that each "step" is run in a separate docker image).
Any help much appreciated!
TLDR: The step uses the image of the step (step-image). Period.
There is some preparation work done before a pipeline runs (~2016), like cloning the source code repository so that it can be mounted into the container (of the step-image). But the part you're specifically interested in is whether or not the step script is run inside "your container", the container you intend to use (as outlined in your question).
That is the case. Go for it and build it!
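If it helps, a minimal sketch of what such a shared private-image could look like. The base image, package name, and script path are all assumptions for illustration:

```dockerfile
# Sketch only - base image, package, and paths are assumptions.
FROM docker:24-cli

# AWS CLI for the ECR login, available as an Alpine package
RUN apk add --no-cache aws-cli

# The shared build-and-push logic, baked in once for all ~20 repositories
COPY build-and-push.sh /usr/local/bin/build-and-push.sh
RUN chmod +x /usr/local/bin/build-and-push.sh
```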
There are also some mix-ins, like when you put a pipe (~2019) into a step script, which will run in a different image. But the base remains this:
> I saw docs saying that each "step" is run in a separate docker image
It's not a separate image from your perspective (I'm guessing a bit here), in the sense of being separate from your image. Rather (at least as I read it, even before you asked five days ago), it means each step runs in exactly the image you reference for that step (the "<pipeline>.<step>.image" property). If that property is not set, the default image is used instead - either the one named at the top of the file, or the default pipelines image provided by Atlassian, which contains a plethora of utilities - ref.
I hope this is not too much text. In the end, try it out and see if it works for you - that is what matters most.
Indeed, in case it is useful to others:
Each step starts a new container; this is why steps can currently only share data via artifacts or (for static data) via YAML re-use mechanisms (aliases and anchors).
Note also that if you override the image at the top of the pipelines file, it is used in every step unless the step itself overrides the Docker image.
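A minimal sketch of that override behaviour (image names are placeholders):

```yaml
image: top-level-image:tag          # used by every step by default
pipelines:
  default:
    - step:
        script:
          - echo "runs in top-level-image"
    - step:
        image: other-image:tag      # this step overrides the default
        script:
          - echo "runs in other-image"
```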