Pipeline Runner: Communicate with Docker on host

We deploy from BitBucket Pipelines into a Docker Swarm Mode cluster. This requires setting up a remote SSH context to one of the cluster manager nodes, and executing the docker commands in that context. Essentially the following:

# Preserve the docker host, then unset it so it doesn't interfere 
# with the context
export PREVIOUS_DOCKER_HOST=$DOCKER_HOST
unset DOCKER_HOST

# Create and use the remote context
docker context create remote --docker "host=ssh://$DEPLOYMENT_USER@$DEPLOYMENT_HOST"
docker context use remote

# Log into the registry
echo $DOCKER_HUB_PASSWORD | docker login --username $DOCKER_HUB_USER --password-stdin

# Deploy the service
docker stack deploy \
  --with-registry-auth \
  --prune \
  --compose-file docker-compose.production.yaml \
  $BITBUCKET_REPO_SLUG

# Restore the pipeline docker host, in case we need it later on
export DOCKER_HOST=$PREVIOUS_DOCKER_HOST

This has several drawbacks:

  • Without jumping through some annoying hoops, $DEPLOYMENT_HOST must be a single, static host that provides access to the cluster. This is inflexible.
  • What's more, it needs to be a manager node. So every time we reshape the cluster, we'll need to update BitBucket configuration, too.
  • The node needs to expose SSH and provide access to a pipeline user. This means manually copy-pasting public keys to that node for every service, and it adds another potential attack vector.
  • The whole context switching means lots of boilerplate for all pipelines.

A better way

Instead, we'd like to have a self-hosted runner, running in the cluster, so the deployment step can run within the cluster and deploy services to the local Docker instance directly.
This solves every drawback:

  • As runners connect to BitBucket by themselves, the static host address is gone. 
  • We can have several runners on several manager nodes, with the load spread evenly among them. If a runner moves to another node, it takes its build steps with it.
  • SSH does not need to be exposed externally, as communication happens via the runner web socket.
  • Docker context switching would be replaced by simply pointing the Docker host at the local Docker daemon on the cluster node.
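As a sketch of what that setup could look like (hypothetical step and label names; assuming a runner registered with the standard self.hosted and linux labels), the deployment step in bitbucket-pipelines.yml would shrink to something like:

```yaml
pipelines:
  branches:
    production:
      - step:
          name: Deploy to swarm
          # Route this step to our self-hosted runner inside the cluster
          runs-on:
            - self.hosted
            - linux
          script:
            # No SSH context needed: talk to the node's local daemon directly
            - export DOCKER_HOST=unix:///var/run/docker.sock
            - echo $DOCKER_HUB_PASSWORD | docker login --username $DOCKER_HUB_USER --password-stdin
            - docker stack deploy --with-registry-auth --prune --compose-file docker-compose.production.yaml $BITBUCKET_REPO_SLUG
```

Of course, this assumes the step container can actually reach the node's Docker daemon, which is exactly the sticking point described next.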

 

The problem

What sounds so neat in theory doesn't work, because I can't get the step executed inside the runner to talk to the Docker daemon on my cluster node. I would need to mount the Docker socket from the host into the step container, or resolve the host IP to communicate with the TCP socket.

Is there a way to talk to the docker daemon on the runner host from within a build step?
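To illustrate the second option: if the step container sits on a bridge network whose default gateway is the runner host (an assumption, not something I've confirmed for the runner's network setup), the host IP could be derived from the routing table and DOCKER_HOST pointed at the daemon's TCP socket. A minimal sketch:

```shell
#!/bin/sh
# host_ip_from_routes: pick the default gateway out of `ip route` output.
# On a plain bridge network, that gateway is the container host.
host_ip_from_routes() {
  awk '/^default/ { print $3; exit }'
}

# Intended use inside the build step (untested sketch):
#   export DOCKER_HOST="tcp://$(ip route | host_ip_from_routes):2375"
#   docker info

# Demonstration with canned routing output:
host_ip_from_routes <<'EOF'
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link
EOF
```

Whether the daemon is reachable on that address from inside the step container is precisely what seems not to work, as the docker info output in the edit below suggests.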

 

Edit: For reference, here's the output of docker info inside the build step (emphasis mine):

+ DOCKER_HOST=$BITBUCKET_DOCKER_HOST_INTERNAL docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.15
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Authorization: pipelines
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc version: v1.1.1-0-g52de29d7
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
  userns
 Kernel Version: 5.4.0-105-generic
 Operating System: Alpine Linux v3.15 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.746GiB
 Name: 086d6e21a117
 ID: WVBT:EBC5:HCCO:SKNB:P664:IUDG:AU2M:XDQE:BXR2:22E4:AXMH:HT3G
 Docker Root Dir: /var/lib/docker/165536.165536
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  http://localhost:5000/
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/go/attack-surface/
WARNING: No swap limit support
