
Changes to make your containers more secure on Bitbucket Pipelines.

As part of the ongoing security hardening of Bitbucket Pipelines, we have enabled Docker userns remap to eliminate potential security risks that might affect our users.

This change affects a small percentage of people using Docker in their builds, so we want to share the four limitations it introduces and the actions required to make your builds work in the more secure system.

Issues caused by limitations of userns remap

1. Docker: network=host no longer supported

As described in the Docker documentation on user namespaces, enabling userns remap introduces the following limitations:

The following standard Docker features are incompatible with running a Docker daemon with user namespaces enabled:

- Sharing PID or NET namespaces with the host (--pid=host or --network=host).

- External (volume or storage) drivers which are unaware of, or incapable of using, daemon user mappings.

- Using the --privileged mode flag on docker run without also specifying --userns=host.

In our previous security updates we disallowed --pid=host and --privileged containers; --network=host, however, is a new limitation.

Since --network=host is no longer allowed, here is guidance on the other ways your code and containers can communicate.

If you need to communicate with a service running in Docker from your build container, start the service with a port mapping using -p <host-port>:<container-port>; you can then access the service at localhost:<host-port>.

If you need to communicate from a service running in Docker to a service running in your build container, start the service with the host entry --add-host host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL; you can then access the service at host.docker.internal:<port>.

If you need services running in Docker to talk to each other, attach them to the same Docker network so you can address them as <container-name>:<container-port>, or consider using Docker Compose to run the services, which creates a custom network for the compose stack.
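Sketches of the three patterns as concrete commands (image names, ports, and container names below are placeholders, not from the original post):

```shell
# 1. Build container -> service in Docker: publish a port.
docker run -d --name db -p 5432:5432 postgres:11
# ...then reach it at localhost:5432 from the build container.

# 2. Service in Docker -> build container: add the special host entry.
docker run -d --add-host host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL \
  myservice:latest
# ...then reach the build container at host.docker.internal:<port>.

# 3. Service in Docker -> service in Docker: share a user-defined network.
docker network create testnet
docker run -d --name api --network testnet myapi:latest
docker run --rm --network testnet curlimages/curl:latest http://api:8080/health
```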

2. Build fails with error: “Container ID cannot be mapped to a host ID”

Due to the way userns remap works (by remapping container UID/GIDs to less-privileged UID/GIDs in the host's namespace), UID/GIDs placed on files in an image must be in the range 0-65535.
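A quick sketch of the arithmetic behind this limit, assuming the host's subordinate ID range starts at 165536 (a common default; check /etc/subuid on the host) and spans 65536 IDs:

```shell
# Container IDs 0..65535 map onto host IDs SUBUID_BASE..SUBUID_BASE+65535.
# An on-disk UID/GID of 65536 or more has no host mapping, so the daemon
# fails with "cannot be mapped to a host ID".
SUBUID_BASE=165536   # assumed start of the host's subordinate ID range
RANGE=65536

map_to_host_id() {
  if [ "$1" -lt "$RANGE" ]; then
    echo $((SUBUID_BASE + $1))
  else
    echo "cannot be mapped to a host ID"
  fi
}

map_to_host_id 50       # in range: prints host ID 165586
map_to_host_id 165586   # out of range: prints the error
```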

If you receive this error, you will need to fix the image that contains the invalid UID/GID.

If you received this error for a file that you created and placed into an image, the fix is to change the UID/GID of the file to one within the range.

To find the file, first get the invalid UID/GID from the error message, then run the following command on the files within the container:

find / \( -uid <invalid-id> -o -gid <invalid-id> \) -ls

To fix the ownership of the file, use the following command:

chown -R <uid-in-range>:<gid-in-range> filename

For a multi-stage build, you may need to modify the command above slightly, as shown below:

chown -R root:root filename

If you received this error against a file that was in the base image you depend on, you will need to raise a support case and work with the image maintainers of the base image to get this resolved.

3. Testcontainers

Testcontainers recently released a fix in version 1.10.6 that lets you disable Ryuk by setting the environment variable TESTCONTAINERS_RYUK_DISABLED to true, as documented in the Testcontainers docs. To keep using Testcontainers in Pipelines, you will need to upgrade your Testcontainers dependency to this version or greater and set the environment variable in your pipelines (either directly in the step, or via an account/repository variable).
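For example, a minimal bitbucket-pipelines.yml sketch (the step name and build command are illustrative assumptions, not from the post):

```yaml
pipelines:
  default:
    - step:
        name: Tests with Testcontainers   # illustrative step name
        services:
          - docker
        script:
          # Requires a Testcontainers dependency >= 1.10.6.
          - export TESTCONTAINERS_RYUK_DISABLED=true
          - mvn test   # assumed build command
```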

4. Docker build: removing files doesn’t remove them

Due to bugs in Docker, when files were modified (removed or altered) during a docker build via RUN/COPY commands, the changes weren't reflected in subsequent layers.

We have upgraded docker to pull in the fix for these issues.

However, some failures require user intervention, as some commands modify the UID/GID of files (similar to the error below). For these situations Docker now supports --chown as an argument to some Dockerfile commands.
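For instance, a hypothetical Dockerfile fragment (the image and paths are placeholders) that sets ownership at COPY time rather than with a later RUN chown:

```dockerfile
FROM alpine:3.10
# --chown stamps the copied files with an in-range owner directly in the
# layer, so no separate chown step is needed.
COPY --chown=root:root app/ /opt/app/
```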


My build fails with error: Container ID 165586 cannot be mapped to a host ID

So, I guess I am falling into the second case. Although I tried your suggestion, I couldn't find any file with id `165586`. 


Any clue?


I'm having exactly the same issue @pmatsinopoulos .

Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID


If you are using pip to install Python packages, try using the --target flag to keep pip from writing into core OS filesystem locations with problematic ownership:


RUN mkdir -p /src
RUN pip install --no-cache-dir --target=/src -r requirements.txt

What finally worked for me was the following:

I removed the pip upgrade from my Dockerfile.

So, instead of:

RUN pip install --upgrade pip pipenv

I did:

RUN pip install pipenv

and it worked. Maybe the pip version I was trying to upgrade to had a problem with file permissions, but I didn't have time to look at it in more detail.


@pmatsinopoulos Your solution is fair and is consistent with my hypothesis: if the base pip of the Docker image is compliant with the user-namespace mapping, then everything is fine, whereas upgrading pip in the new image rewrites its directories with an underprivileged user outside the legal UID range for userns. I haven't done a deep enough dive to prove this is absolutely the case, but if it is, your solution is vulnerable to:

1. Changes in the base image.  Docker tags do not imply immutability.  For instance, the official python image with tag 3.6 was updated 2 days ago.  

2. Never being able to update pip to a desired version without first modifying or raising an issue related to the base image.

The first problem can lead to another automagical failure of pipelines without warning.  I would consider this critical.  I don't anticipate docker image maintainers to prioritize bitbucket pipelines, so raising an issue will probably yield no results.


I consider both of our solutions workarounds.  But, I do not see a clean solution being provided by Bitbucket/moby, Docker, or pip anytime soon.  At least in the --target solution, minimal changes to the Dockerfile can provide stronger confidence that the pipeline is resilient to uncontrollable external factors. (Again assuming my hypothesis is correct)

I would prefer that multistage builds in the same Dockerfile use the same user namespace. This change broke my multistage build, which should have implicit trust within the one Dockerfile.


Hey guys,

I am quite new to the Docker world, so any help or pointers to resolve this would be appreciated. It's been a few days that I've been trying to find a workaround, with no luck yet.

I had a Docker pipeline already set up which tests the code after each git push; if the tests are successful, the deploy pipeline runs. The pipelines are currently failing at the testing stage after Bitbucket disabled --network=host, and I am not sure what the solution should be in my case.

I am getting this error:

docker run --name=myapache_container -p 443:443 -p 80:80 -d -v "`pwd`:/var/www/mycodepackage" --add-host=lti-api.mydomain.test: --add-host=mysql: --add-host=mailhog: --network=host $DOCKER_HUB_USERNAME/mycodepackage:latest
docker: Error response from daemon: cannot share the host's network namespace when user namespaces are enabled.


To counter this, if I remove the --network=host parameter from my docker run command, it fails with permission issues in the container when I do a chown in the next steps of the pipeline, which wasn't happening before:

docker exec myapache_container chown -Rv www-data:www-data /var/www/mycodepackage/storage
failed to change ownership of '/var/www/mycodepackage/storage/logs/.gitignore' from nobody:nogroup to www-data:www-data

Looking forward to your responses.

Thank you.

Answering myself here, for anyone looking for a solution to a similar issue (with support from an Atlassian support team ninja):

Based on our observation, the issue that you're facing is due to how the userns feature works in Docker.
If I understand what you're trying to achieve, you're trying to modify the ownership of the directory to "www-data:www-data".

The reason you're having an issue changing the ownership is that the volume mounted into the container is still owned by root on the host.
Since the container is mapped to a non-root user, it's expected that you get this error while trying to modify the directory's ownership.

What we can suggest is to run "chown -R 165536:165536 $BITBUCKET_CLONE_DIR" to give the directory the user ID that the remapped container user maps to.
This should allow you to make any changes to the volumes which are mounted to your container.

Please run the command before the "docker run" command.

Hope this helps.

Same issue here. It goes away when I set the Python image to 3.6 rather than 3.7.
However, our project does not support Python 3.6.
Our pip version is pinned: pip==19.0.3.

I am still confused by all this. For me, on my local dev environment, I am happily using

docker run --rm --network=host curlimages/curl:latest http://localhost:5001/pingtest

 This is for a service in my container to interact with a service in the host. Since this is not supported in the CI Pipeline, I am trying the alternative suggested (using --add-host)

docker run --rm --add-host host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL curlimages/curl:latest http://host.docker.internal:5001/pingtest

However, this fails miserably with 'Connection refused'

Question: How can I receive the pingtest result in my CI build?

