We are using Docker in our pipelines. Usually we build an image (image A) with BuildKit, using the BuildKit inline cache.
Afterwards we run commands using the built image. With the default Docker cache this works very nicely, as the image is usually already cached, which is exactly what we want.
Recently we started to use an external Docker image (image B) which is very big, far too big to ever fit into the cache, and that is where the problem starts.
The question is: how can we keep using the Docker cache while preventing image B from being sent to it?
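To make this concrete, here is a rough sketch of the kind of single-step pipeline we mean (image names such as myorg/image-a and external-registry/image-b are placeholders, and the exact commands differ in our real config):

pipelines:
  default:
    - step:
        name: Build and run everything
        caches:
          - docker                      # default Docker cache: layers of every image built or pulled in this step get saved
        script:
          - export DOCKER_BUILDKIT=1
          - docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t myorg/image-a:latest .
          - docker run myorg/image-a:latest command-to-run
          - docker pull external-registry/image-b:latest    # image B also ends up in the Docker cache and blows its size limit
          - docker run external-registry/image-b:latest another-command-to-run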
G'day, @xmoex
Welcome to the community!
If I understand this correctly, you don't want the pipeline to cache image B because it's too big, and you wanted to see if it's possible to skip it. If so, I believe it's not possible to exclude a single image from the Docker cache. One thing you could consider is splitting the work into two steps, so that only the step that builds image A declares the Docker cache; the step that pulls image B then never writes anything back to the cache. For example:
pipelines:
  default:
    - step:
        name: Build and Cache Image A
        caches:
          - docker                      # only this step uses the Docker cache
        script:
          - export DOCKER_BUILDKIT=1
          - docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t myorg/image-a:latest .
          - docker push myorg/image-a:latest
    - step:
        name: Use Image A and Image B   # no "caches" section here, so image B is never sent to the cache
        script:
          - docker pull myorg/image-a:latest
          - docker pull external-registry/image-b:latest
          - docker run myorg/image-a:latest command-to-run
          - docker run external-registry/image-b:latest another-command-to-run
Regards,
Syahrul