I'm attempting to expose test results for automated test reporting of a Node.js app built into a Docker image. Here's the yml:
options:
  docker: true
pipelines:
  default:
    - step:
        script:
          - export IMAGE_NAME=$DOCKER_HUB_USERNAME/currency-updater:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
    - step:
        script:
          - export IMAGE_NAME=$DOCKER_HUB_USERNAME/currency-updater:$BITBUCKET_COMMIT
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker run -v="$BITBUCKET_CLONE_DIR/test-results:/app/test-results" -w="./app" $IMAGE_NAME run-script test-junit
When step 2 executes, it fails with what looks like a permissions limitation:
docker run -v="$BITBUCKET_CLONE_DIR/test-results:/app/test-results" -w="./app" $IMAGE_NAME run-script test-junit
docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported.
See 'docker run --help'.
even though the volume is located inside the clone dir, as required by the guidelines. If I run the same commands locally (with the needed adaptations), it obviously works.
I incorrectly posted a discussion when I meant to post a question. Could an admin please remove the discussion and mark this one as not spam?
What if I need to share the Composer cache from a Docker container?
Pipelines can cache Composer files on its own, but if I use Docker, I have to create volume mappings.
So I can't use the normal Bitbucket Pipelines cache (https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html) as-is.
Is there no other way besides using a custom cache and putting it in a subfolder?
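For what it's worth, a custom cache on a subfolder of the clone dir can be combined with mounting the whole clone dir into the container. This is only a sketch, not from the thread: the cache name `composer-cache`, its path, and the `composer:1` image are my assumptions; `COMPOSER_CACHE_DIR` is Composer's standard environment variable for relocating its cache.

```yaml
# Sketch: custom Pipelines cache on a subfolder, shared into the container.
definitions:
  caches:
    composer-cache: composer-cache   # hypothetical subfolder of $BITBUCKET_CLONE_DIR
pipelines:
  default:
    - step:
        caches:
          - composer-cache
        script:
          # Mount the whole clone dir (subdirectory mounts are not allowed),
          # and point Composer's cache at the cached subfolder inside it.
          - docker run -v "$BITBUCKET_CLONE_DIR:/app" -w /app -e COMPOSER_CACHE_DIR=/app/composer-cache composer:1 composer install
```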
Does the "test-results" directory already exist when you use the "docker run" command?
The docker run command is unable to create directories in Bitbucket Pipelines, as that requires escalated privileges we cannot expose for security reasons. (Locally it can do so without much hassle.)
To resolve this, you should be able to just do the following:
- mkdir $BITBUCKET_CLONE_DIR/test-results
- docker run -v="$BITBUCKET_CLONE_DIR/test-results:/app/test-results" -w="./app" $IMAGE_NAME run-script test-junit
Does that work for you?
Thank you for your support.
Unfortunately, creating the directory does not solve the problem.
- mkdir -p $BITBUCKET_CLONE_DIR/test-results
- docker run --name mysql -v="$BITBUCKET_CLONE_DIR/test-results:/app/test-results" -d -p 3327:3306 mysql:5.7.18
Using this script, I receive this error:
+ docker run --name mysql -v="$BITBUCKET_CLONE_DIR/test-results:/app/test-results" -d -p 3327:3306 mysql:5.7.18
Error response from daemon: authorization denied by plugin pipelines: Command not supported.
Could you please advise how to solve it?
Thank you in advance!
I had a quick chat with my teammates. Right now you can *only* mount the $BITBUCKET_CLONE_DIR. None of the subdirectories.
So instead you will need to run something like this:
- docker run --name mysql -v="$BITBUCKET_CLONE_DIR:/app" -d -p 3327:3306 mysql:5.7.18
Let me know if that works.
I've opened an internal issue for allowing the subdirectories as well.
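For the original test-reporting case, the mount-the-top-level workaround might look roughly like this (a sketch under the assumption that `run-script test-junit` writes its output under the container's working directory, so it lands in a subfolder of `$BITBUCKET_CLONE_DIR` on the host):

```yaml
- step:
    script:
      - export IMAGE_NAME=$DOCKER_HUB_USERNAME/currency-updater:$BITBUCKET_COMMIT
      # Mount the whole clone dir instead of the test-results subdirectory;
      # JUnit output written to /app/test-results inside the container then
      # appears in $BITBUCKET_CLONE_DIR/test-results on the host.
      - docker run -v "$BITBUCKET_CLONE_DIR:/app" -w /app $IMAGE_NAME run-script test-junit
```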
I have the same issue:
docker: Run command: docker run -v /opt/atlassian/pipelines/agent/build/packer-tmp/packer-docker516303980:/packer-files -d -i -t alpine:3.6 /bin/sh
docker: Error running container: Docker exited with a non-zero exit status.
docker: Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported.
Is the generation of the tool done in the Docker container, or in your regular build container? If it's the build container, you should be fine; if it's the Docker container, then not so good.
As far as I know there's no immediate plans to remove the current restrictions.
I'll raise this with the PMs, just so they know how frequently this comes up as an issue.
The only real workaround for getting additional auth on your Docker daemon right now is to run your own EC2 instance (or equivalent virtual compute from another cloud provider) with a regular Docker daemon, and hook Pipelines up to that daemon with docker-machine (https://docs.docker.com/machine/overview/).
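The docker-machine hookup described above might look roughly like this inside a pipeline step. The machine name `remote-builder` and the `amazonec2` driver are assumptions, and the machine must already be provisioned; `docker-machine env` itself just prints `export` statements for the Docker CLI, which is simulated below so the pattern is visible without a real remote machine:

```shell
#!/bin/sh
# On a real setup you would first provision a machine once, e.g.:
#   docker-machine create --driver amazonec2 remote-builder
# and then, inside the pipeline step, point the Docker CLI at it:
#   eval "$(docker-machine env remote-builder)"
#
# Simulated output of `docker-machine env` (address is a placeholder):
simulated_env='export DOCKER_HOST="tcp://203.0.113.5:2376"
export DOCKER_TLS_VERIFY="1"'
eval "$simulated_env"
# Subsequent docker commands in this shell would now target the remote,
# unrestricted daemon instead of the restricted Pipelines one.
echo "Docker CLI now targets: $DOCKER_HOST"
```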
This works half the time lol.
I tried using this feature today, and my pipeline works the first time, then fails the next time. As ridiculous as that sounds, you'd have to see my pipeline logs to believe it. I can't believe it myself.
Firstly, why does Bitbucket Pipelines allow us to create named volumes, but when we try to use those volumes in the pipeline, we are presented with meaningless errors?