
How to use a docker build in multiple steps?

Bibin V Joseph July 6, 2023

In a pipeline YAML I need a total of two steps. In the first step I want to build a Docker image and push it to an ECR repo in AWS account 1. In the second step I need the same image built in the first step to be pushed to another ECR repo in AWS account 2.

2 answers

1 accepted

2 votes
Answer accepted
Oleksandr Kyrdan
Atlassian Team
July 7, 2023

Hi @Bibin V Joseph 

Thank you for your question!

It's a good DevOps practice to follow "build once and deploy many".

So, the solution could be:

- step:
    script:
      # build the image
      - docker build -t $IMAGE_NAME:$IMAGE_VERSION -t $IMAGE_NAME:latest .
      - docker save --output my-docker-image.tar.gz $IMAGE_NAME
    services:
      - docker
    artifacts:
      - my-docker-image.tar.gz
- step:
    script:
      # load the previously saved image that is available as an artifact
      - docker load --input my-docker-image.tar.gz
      # use the pipe to push the image to AWS ECR in account 1
      - pipe: atlassian/aws-ecr-push-image:2.0.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_1
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_1
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION_1
          IMAGE_NAME: $IMAGE_NAME
    services:
      - docker
- step:
    script:
      # load the previously saved image that is available as an artifact
      - docker load --input my-docker-image.tar.gz
      # use the pipe to push the image to AWS ECR in account 2
      - pipe: atlassian/aws-ecr-push-image:2.0.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_2
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_2
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION_2
          IMAGE_NAME: $IMAGE_NAME
    services:
      - docker
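
For reference, this is the save/load round trip the artifact relies on. A minimal local sketch, assuming an image called my-app (the name is only an example):

# build and save the image to a tarball (what the first step does)
docker build -t my-app:latest .
docker save --output my-docker-image.tar.gz my-app
# simulate a fresh build container that has no local images
docker rmi my-app:latest
# restore the image from the tarball (what the later steps do), then list it
docker load --input my-docker-image.tar.gz
docker image ls my-app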

Best regards,
Oleksandr Kyrdan

Bibin V Joseph July 7, 2023

Thank You

0 votes
Aron Gombas _Midori_
Community Leader
July 6, 2023

Because you want to store the build artifact (a Docker image) externally, there is no real difficulty here.

Notes:

  1. Because you need to work with 2 separate AWS accounts, you will need a separate set of credentials configured for each. That can be managed with Pipelines variables (secrets).
  2. Use the AWS client tooling to upload the Docker image to ECR in step 1.
  3. Use the AWS client tooling to download the Docker image from ECR and then upload it to the other AWS account in step 2 (see the sketch after this list).
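
A rough shell sketch of notes 2 and 3, assuming the AWS CLI v2 is available in the build image; the account IDs, region, repository names and the my-app image name below are placeholders:

SRC_REPO=111111111111.dkr.ecr.ca-central-1.amazonaws.com/my-repo
DST_REPO=222222222222.dkr.ecr.ca-central-1.amazonaws.com/my-repo

# step 1: log in to the registry in account 1 and push the freshly built image
aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin "${SRC_REPO%%/*}"
docker tag my-app:latest "$SRC_REPO:latest"
docker push "$SRC_REPO:latest"

# step 2: pull the image back from account 1, retag it, log in to account 2 and push
docker pull "$SRC_REPO:latest"
aws ecr get-login-password --region ca-central-1 | docker login --username AWS --password-stdin "${DST_REPO%%/*}"
docker tag "$SRC_REPO:latest" "$DST_REPO:latest"
docker push "$DST_REPO:latest"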

Alternatively, you could store the Docker image as a temporary Bitbucket download. (We use this technique.)
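
A sketch of that alternative using the Bitbucket Cloud Downloads REST endpoint, assuming credentials in a secured variable BB_AUTH_STRING (user:app_password); the image and file names are placeholders:

# upload the saved image tarball as a download on the repository
docker save --output my-docker-image.tar.gz my-app
curl -u "$BB_AUTH_STRING" -X POST "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/downloads" --form files=@my-docker-image.tar.gz

# in a later step (or pipeline), fetch it back and load it
curl -u "$BB_AUTH_STRING" -L -o my-docker-image.tar.gz "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/downloads/my-docker-image.tar.gz"
docker load --input my-docker-image.tar.gz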

Bibin V Joseph July 7, 2023

image: atlassian/default-image:2

definitions:
  services:
    docker:
      memory: 7128
  caches:
    docker: /root/.docker
  steps:
    - step: &build-and-push-STG
        size: 2x
        name: "Build and push to STG"
        script:
          - VERSION="${BITBUCKET_BUILD_NUMBER}"
          - docker build -t name .
          - pipe: atlassian/aws-ecr-push-image:1.6.2
            variables:
              AWS_DEFAULT_REGION: 'Region'
              AWS_ACCESS_KEY_ID: '${EKS_NP_ACCESS_KEY_ID}'
              AWS_SECRET_ACCESS_KEY: '${EKS_NP_SECRET_ACCESS_KEY}'
              IMAGE_NAME: "Name"
              TAGS: '${VERSION} latest'
              ECR_REPOSITORY_URL: 'xxxxx.dkr.ecr.ca-central-1.amazonaws.com/repo'
        services:
          - docker

    - step: &push-to-PRD
        name: "Push to ECR 2"
        script:
          - echo "Pushing to ECR 2"
          - pipe: atlassian/aws-ecr-push-image:1.6.2
            variables:
              AWS_DEFAULT_REGION: 'Region'
              AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
              AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
              IMAGE_NAME: "Name"
              TAGS: '${BITBUCKET_BUILD_NUMBER} latest'
              ECR_REPOSITORY_URL: 'xxxxxx.dkr.ecr.ca-central-1.amazonaws.com/repo'
        services:
          - docker

pipelines:
  custom:
    Deploy-Pipeline:
      - step: *build-and-push-STG
        caches:
          - docker
      - step: *push-to-PRD
        trigger: manual

Bibin V Joseph July 7, 2023

The first step works fine for me, but the second step gives the error:
unable to find the image Name.
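
That error matches the accepted answer above: each step runs with its own Docker daemon, so the image built in &build-and-push-STG no longer exists when &push-to-PRD runs. A minimal sketch of the change that would carry it across, following the save/load approach (only the added lines are shown, names as in the config above):

- step: &build-and-push-STG
    script:
      # ... existing build and aws-ecr-push-image pipe ...
      - docker save --output my-docker-image.tar.gz Name
    artifacts:
      - my-docker-image.tar.gz

- step: &push-to-PRD
    script:
      - docker load --input my-docker-image.tar.gz
      # ... existing aws-ecr-push-image pipe ...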
