Can I keep Docker images between Pipeline steps?

Hi, I am using Bitbucket Pipelines to automate our builds and I've been struggling with the use of steps and images. I am aware that I can keep generated files between steps using the artifacts option, but I can't figure out how to keep Docker images between steps.

My setup is currently like this:

pipelines:
  custom:
    dev:
      - step:
          script:
            - # docker build
            - # push to GCR
            - # push to AWS ECR

What I want is something like:

pipelines:
  custom:
    dev:
      - step:
          script:
            - # docker build
          artifacts:
            - dist/**
      - step:
          script:
            - # push to GCR
      - step:
          script:
            - # push to AWS ECR

Assume my credential configs are correct.

The problem is that tags created during the first step are not available in later steps. So if I run docker build -t ${aws_url}:${BITBUCKET_COMMIT} in the first step and then docker push ${aws_url} in the last step, the image does not exist (same for the second step).

Am I doing something wrong (maybe the artifacts folder is wrong) or is there a way to do this?

Thanks in advance.

4 answers

For our project we were able to use docker save/load to share an image between steps.

- step:
    name: Build docker image
    script:
      - docker build -t "repo/imagename" .
      - docker save --output tmp-image.docker repo/imagename
    artifacts:
      - tmp-image.docker
- step:
    name: Deploy to Test
    deployment: test
    script:
      - docker load --input ./tmp-image.docker
      - docker images
      # repo/imagename should be available now
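Putting this together with the three steps from the original question, a full bitbucket-pipelines.yml might look roughly like the sketch below. ${AWS_URL} and ${GCR_URL} are placeholder repository variables, and the services entries assume the default Docker service is what makes the docker CLI available in each step:

```yaml
pipelines:
  custom:
    dev:
      - step:
          name: Build image
          services:
            - docker
          script:
            - docker build -t "${AWS_URL}:${BITBUCKET_COMMIT}" .
            # Persist the built image as a file so later steps can load it
            - docker save --output tmp-image.docker "${AWS_URL}:${BITBUCKET_COMMIT}"
          artifacts:
            - tmp-image.docker
      - step:
          name: Push to GCR
          services:
            - docker
          script:
            - docker load --input tmp-image.docker
            # Retag the loaded image for the second registry before pushing
            - docker tag "${AWS_URL}:${BITBUCKET_COMMIT}" "${GCR_URL}:${BITBUCKET_COMMIT}"
            - docker push "${GCR_URL}:${BITBUCKET_COMMIT}"
      - step:
          name: Push to AWS ECR
          services:
            - docker
          script:
            - docker load --input tmp-image.docker
            - docker push "${AWS_URL}:${BITBUCKET_COMMIT}"
```

Note that the saved image file counts against the artifact size limits, so this is only practical for images small enough to pass between steps.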

Syntax correction:

- docker save --output


doesn't work anymore :(

worked for us, thanks


Hey guys.

This solution seems to work, but I face some issues once I try to use the loaded image in a following step inside another docker build, via

COPY --from=<imageTag>...

I always get following error:

Error processing tar file(exit status 1): Container ID 166537 cannot be mapped to a host ID

I already tried the --chown=root:root option on COPY, and a chown root:root in the first docker build of the <imageTag> tagged image, both without success. Does anybody have an idea? Is it just not possible to share images between different docker build steps?
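That error usually means the saved image contains files owned by a UID the build daemon cannot map under user-namespace remapping (the 166537 in the message is the unmappable owner). One way to hunt for such files is to export the image with docker create/docker export and filter the tar listing for owners above the mappable range. The filter itself is shown here against a synthetic listing; the docker commands in the comment are the assumed real-world usage:

```shell
# Filter a `tar -tv`-style listing for numeric owner UIDs above 65535,
# the default upper bound mappable under user-namespace remapping.
# In practice the listing would come from something like:
#   docker create --name tmp repo/imagename && docker export tmp | tar -tv
listing='-rw-r--r-- 0/0        120 2022-01-01 00:00 etc/passwd
-rw-r--r-- 166537/166537 512 2022-01-01 00:00 app/data.bin'

# Field 2 of the listing is owner/group; print paths whose UID exceeds 65535
printf '%s\n' "$listing" | awk '{ split($2, o, "/"); if (o[1] + 0 > 65535) print $NF }'
```

If files like that show up, rebuilding the base image without the high UIDs (or chown-ing those specific paths, not just the COPY target) may be what's needed.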

I figured it has to be the image size (images over 1GB are not cacheable). I'd expect Bitbucket to cache all the layers it can until the 1GB limit is reached. For instance, if I'm extending a PHP Docker image to install a few extensions and libs, I'd expect the base PHP image to be cached if it is under 1GB. That does not seem to be the case: only the image being built is considered, and mine easily exceeds 1GB since it also contains the codebase, tests and vendors after the build.

It would be nice to see Bitbucket cache as many layers as possible up to the 1GB limit for ANY image, not only the ones being built (for most projects it's far more common to need the BASE images we're extending cached).
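For reference, the layer caching being discussed is the predefined docker cache, enabled per step as sketched below; whether your layers fit under the 1GB cap (the cache is skipped entirely once exceeded) is exactly the open question here:

```yaml
- step:
    name: Build
    services:
      - docker
    caches:
      - docker   # predefined Pipelines cache for Docker layers, subject to the 1GB limit
    script:
      - docker build -t repo/imagename .
```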

Bump. Need an answer on this, or pipelines are useless/dangerous for dockerception builds.

Bump. I'd love an answer on this one too :)
