
Using Docker in Docker in a pipe

I'm trying to create a pipe for customers that builds software for a niche platform, but I'm running into an issue.

One of the inputs for the pipe is the version of the software they're using. That translates to a Docker image tag hosted on GitHub Container Registry. So, at runtime, the pipe needs to read the version the user requested in the inputs, pull down the Docker image for that version, and then run it.
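A minimal sketch of what the pipe needs to do (the image path ghcr.io/example-org/builder and the VERSION input name are hypothetical placeholders, not the real ones):

```shell
#!/bin/sh
# Sketch: derive the image reference from the version the user requested.
# "ghcr.io/example-org/builder" and the VERSION input name are hypothetical.
image_for_version() {
    printf 'ghcr.io/example-org/builder:%s' "$1"
}

VERSION="${VERSION:-1.2.3}"   # supplied by the pipe input at runtime
IMAGE="$(image_for_version "$VERSION")"
echo "$IMAGE"

# The part that fails under Pipelines' Docker restrictions:
#   docker pull "$IMAGE"
#   docker run --rm "$IMAGE"
```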

The problem I'm running into is that Bitbucket doesn't seem to allow Docker-in-Docker, so my pipe is unable to pull down or run the image. I've tried mounting /var/run/docker.sock when running my unit tests, running the container as privileged, and setting the DOCKER_BUILDKIT environment variable (to both 1 and 0), and I get errors such as:

"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"

"Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed."

"dial unix /var/run/docker.sock: connect: permission denied"

Has anyone built a custom pipe for BitBucket Pipelines that dynamically pulls down Docker images based on inputs to the pipe? What am I missing here?

2 answers

0 votes
Igor Stoyanov
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
Oct 19, 2023 • edited

@Paul Bromwell Jr_ hi. Maybe some additional info: the variable BITBUCKET_DOCKER_HOST_INTERNAL is available by default (and, I think, DOCKER_HOST is too). Below are shell examples of the parameters the pipe's docker container is started with:
internal host example in pipeline -> --add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL"
docker-host env example in pipeline -> --env=DOCKER_HOST="tcp://host.docker.internal:2375"

If you are using python then try this inside your pipe:

# first install docker via pip: pip install docker
import os

import docker

# DOCKER_HOST should be set to tcp://host.docker.internal:2375 by default,
# but you can set it explicitly before creating the client:
os.environ.setdefault("DOCKER_HOST", "tcp://host.docker.internal:2375")
docker_client = docker.from_env()

# when starting containers, map host.docker.internal to the host's IP:
# docker_client.containers.run(
#     image,
#     extra_hosts={"host.docker.internal": os.getenv("BITBUCKET_DOCKER_HOST_INTERNAL")},
# )

If you use bash then try this inside your pipe:

docker run ... \
    --env=DOCKER_HOST="tcp://host.docker.internal:2375" \
    --add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
    ...

Regards, Igor

0 votes
Patrik S
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
Oct 17, 2023

Hello @Paul Bromwell Jr_ and welcome to the Community!

The docker daemon should indeed be accessible from the pipe's container. However, the Pipelines docker daemon does not listen on the UNIX socket /var/run/docker.sock; it uses TCP on port 2375 instead. You can confirm this by checking the start command in the logs of any pipe (pipes are essentially docker containers):

pipe: <pipe image>
+ docker container run \
   --env=DOCKER_HOST="tcp://host.docker.internal:2375" \

This means the docker daemon should be available at tcp://host.docker.internal:2375.

With that in mind, if you would like to execute docker commands in your pipe, you need to either:

  • Leave the DOCKER_HOST environment variable set to tcp://host.docker.internal:2375, as the docker CLI reads this variable to determine which daemon socket to connect to.
  • Provide the daemon address in the docker command itself with the CLI's -H flag:
    docker -H tcp://host.docker.internal:2375 pull ubuntu:latest
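As a minimal sketch of the first option (ubuntu:latest stands in for any image; this assumes the pipe's shell environment):

```shell
#!/bin/sh
# Point the docker CLI at the Pipelines daemon via DOCKER_HOST
# instead of the (unavailable) /var/run/docker.sock socket.
export DOCKER_HOST="tcp://host.docker.internal:2375"

# Every subsequent docker command now talks to that daemon, e.g.:
#   docker pull ubuntu:latest
echo "DOCKER_HOST is $DOCKER_HOST"
```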

Hope that helps! Let me know in case you have any questions.

Thank you, @Paul Bromwell Jr_ !

Patrik S

Hey Patrik!

Thanks so much for the info! Unfortunately, it doesn't seem like that's working. I added this line to the top of my Dockerfile that's being used to generate the pipe:

ENV DOCKER_HOST="tcp://host.docker.internal:2375"

And I made sure all of my subprocess.Popen calls contained:


So all environment variables provided to the Python code should be passed to the docker CLI, but I'm still getting this error:

error during connect: Post "http://host.docker.internal:2375/v1.24/images/create?fromImage=<REDACTED>&tag=<REDACTED>": dial tcp: lookup host.docker.internal on no such host

Any ideas on that one?

Patrik S
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
Oct 19, 2023

Hello @Paul Bromwell Jr_ ,

It seems that, in addition to

"DOCKER_HOST": "tcp://host.docker.internal:2375"

you should also provide:

 extra_hosts={"host.docker.internal": os.getenv('BITBUCKET_DOCKER_HOST_INTERNAL')}

as in the following example from one of the Atlassian-developed pipes: 

and as referenced in the docker-py documentation: 

Thank you!

Patrik S

Hey Patrik!

No dice, I'm afraid. I still get this error even when adding the entry to the /etc/hosts file (which is what extra_hosts does):

dial tcp: lookup host.docker.internal on no such host

I did notice that if I have my Docker image run a shell script that calls printenv (omitting the values I set), this is what it comes up with:


So I don't think the BITBUCKET_DOCKER_HOST_INTERNAL environment variable is getting passed to the containers.

I'm just going to rework the code to not have to use Docker-in-Docker.

Thanks anyway!
