I want to do a docker compose build, and the related Dockerfile is a multi-stage Dockerfile.
But I'm stuck here with the limits of Bitbucket Pipelines.
To use docker compose build, I have to disable Docker BuildKit (according to this thread), but to use multi-stage Dockerfiles I have to enable Docker BuildKit (according to this thread).
Is there something I'm missing here? This is really a state-of-the-art way to do it, and it would be great if we could use state-of-the-art processes in our CI pipeline.
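For reference, this is roughly how I toggle it in a pipeline step (the step layout below is illustrative, not my exact bitbucket-pipelines.yml):

- step:
    services:
      - docker
    script:
      # enable BuildKit so the multi-stage COPY --from works
      - export DOCKER_BUILDKIT=1
      # ...or disable it, as the other thread suggests for docker compose build
      # - export DOCKER_BUILDKIT=0
      - docker compose build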
Hi Marcus,
For Pipelines builds running on Atlassian infrastructure, we've had to restrict certain Docker commands and options for security reasons:
Certain Docker BuildKit features have also been disabled:
These restrictions don't apply to builds running with a self-hosted runner, so if you'd like to use certain restricted commands you can use a self-hosted runner for a specific step. If you use a self-hosted Linux Docker runner, you will need to use a custom dind image:
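As a minimal sketch (the service name and image tag here are just examples), such a custom dind service is declared like this in bitbucket-pipelines.yml:

definitions:
  services:
    docker-dind:
      image: docker:dind
      type: docker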
If you'd like advice specific to your use case, I'll need to see the content of the Dockerfile, the docker-compose.yml file, what Docker commands you are running in your bitbucket-pipelines.yml file and what is the output of any failed commands during the build.
If you don't feel comfortable sharing that here, you can create a ticket with the support team and share the URL of a failed Pipelines build so we can look into it. You can create a ticket via https://support.atlassian.com/contact/#/, in "What can we help you with?" select "Technical issues and bugs" and then Bitbucket Cloud as product. When you are asked to provide the workspace URL, please make sure you enter the URL of the workspace that is on a paid billing plan to proceed with ticket creation.
Kind regards,
Theodora
I have to correct my message. Even on self-hosted runners we have this problem:
Here is the Dockerfile:
ARG NODE_VERSION=22.11.0
FROM node:${NODE_VERSION}-alpine AS build
WORKDIR /home/node
COPY . .
RUN npx nx build aicobe --configuration=production
FROM node:${NODE_VERSION}-alpine
ARG NODE_OPTIONS
ENV NODE_OPTIONS="${NODE_OPTIONS}"
ENV NODE_ENV=production
ARG APP_VERSION="dev"
ENV APP_VERSION="${APP_VERSION}"
WORKDIR /home/node
COPY --from=build /home/node/dist/apps/aicobe .
RUN npm ci --legacy-peer-deps \
&& rm package*.json
USER node
EXPOSE 3000
ENTRYPOINT ["node", "main"]
and here is the related part of the docker-compose file:
aicobe:
  image: ${DOCKER_REG_URL}/${DOCKER_PROJECT}/aicobe:latest
  build:
    dockerfile: apps/aicobe/Dockerfile
    context: ../../../
So when we activate Docker BuildKit, we get this error:
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
and when we deactivate it, we get this error:
failed to copy files: failed to copy directory: Container ID 166536 cannot be mapped to a host ID
for this line in the Dockerfile:
Step 13/18 : COPY --from=build /home/node/dist/apps/aicobe .
So you're saying the only solution for this is to use the dind image within the self-hosted runner, right?
Hi @Marcus,
Indeed, you need to use a custom dind image for the docker service with the self-hosted Linux Docker runner; otherwise you will run into the same restrictions as cloud-based Pipelines builds.
If you use a docker service only on the self-hosted step in your bitbucket-pipelines.yml file, you can check the configuration here:
If you want to use a docker service both on the self-hosted step and also on another step that runs on Atlassian's infrastructure, you can use a configuration like the one below:
definitions:
  services:
    docker-dind:
      image: docker:dind
      type: docker

pipelines:
  default:
    - step:
        runs-on:
          - 'self.hosted'
          - 'my.custom.label'
        services:
          - docker-dind
        script:
          - docker version
    - step:
        services:
          - docker
        script:
          - docker info
Instead of using the default docker service, I define a docker service named docker-dind that is used by the self-hosted step, while the cloud-based step uses the default docker service. The reason is that if we used a custom dind image for the default docker service, the cloud-based step would fail, as it cannot use a custom dind image.
About the second error you mentioned, the following knowledge base article has possible causes and solutions:
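As a rough sketch only (please defer to the article for your exact case), that class of error usually means some file in the image or build context is owned by a UID/GID outside the range the Docker daemon can map, and the usual workaround is to normalize ownership, for example:

# Hypothetical mitigations, not taken from the article:
# 1) force a low, mappable owner when copying out of the build stage
COPY --chown=node:node --from=build /home/node/dist/apps/aicobe .
# 2) or chown the offending files to an in-range ID earlier in the build
# RUN chown -R node:node /home/node/dist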
Please feel free to let me know if you have any questions or if you're still running into any issues.
Kind regards,
Theodora