scp-deploy from a Self Hosted runner to Internal Server

Bao Le May 4, 2022

Hi,

I am trying to get a fairly simple pipeline going on a self-hosted runner. The idea is simply to copy the contents of the repo into a dedicated folder on a server that is in the same network as the runner but not accessible via the internet.

The issue I am having is that it is not documented at all how to add an internal host to the known_hosts file, or even "who" initiates the connection. If it is the runner, then it can reach the target host from a network standpoint, yet the logs only show "no route to host", which confuses me.

 

pipelines:
  custom:
    customPipelineWithRunnerStep:
      - step:
          deployment: Test
          name: Step 1
          services:
            - docker
          runs-on:
            - 'self.hosted'
          script:
            - echo "$USER;$SERVER";
            - pipe: atlassian/scp-deploy:1.2.1
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: '/var/www/'
                LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
                DEBUG: 'true'
          caches:
            - docker

The pipe produces the following log output (with DEBUG enabled):

pipe: atlassian/scp-deploy:1.2.1
+ docker container run \
--volume=/opt/atlassian/pipelines/agent/build:/opt/atlassian/pipelines/agent/build \
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
--volume=/opt/atlassian/pipelines/agent/ssh:/opt/atlassian/pipelines/agent/ssh:ro \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes \
--volume=/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/scp-deploy:/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/scp-deploy \
--workdir=$(pwd) \
--label=org.bitbucket.pipelines.system=true \
--env=BITBUCKET_STEP_TRIGGERER_UUID="$BITBUCKET_STEP_TRIGGERER_UUID" \
--env=BITBUCKET_REPO_FULL_NAME="$BITBUCKET_REPO_FULL_NAME" \
--env=BITBUCKET_GIT_HTTP_ORIGIN="$BITBUCKET_GIT_HTTP_ORIGIN" \
--env=BITBUCKET_PROJECT_UUID="$BITBUCKET_PROJECT_UUID" \
--env=BITBUCKET_REPO_IS_PRIVATE="$BITBUCKET_REPO_IS_PRIVATE" \
--env=BITBUCKET_WORKSPACE="$BITBUCKET_WORKSPACE" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID="$BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID" \
--env=BITBUCKET_REPO_OWNER_UUID="$BITBUCKET_REPO_OWNER_UUID" \
--env=BITBUCKET_BRANCH="$BITBUCKET_BRANCH" \
--env=BITBUCKET_REPO_UUID="$BITBUCKET_REPO_UUID" \
--env=BITBUCKET_PROJECT_KEY="$BITBUCKET_PROJECT_KEY" \
--env=BITBUCKET_DEPLOYMENT_ENVIRONMENT="$BITBUCKET_DEPLOYMENT_ENVIRONMENT" \
--env=BITBUCKET_REPO_SLUG="$BITBUCKET_REPO_SLUG" \
--env=CI="$CI" \
--env=BITBUCKET_REPO_OWNER="$BITBUCKET_REPO_OWNER" \
--env=BITBUCKET_STEP_RUN_NUMBER="$BITBUCKET_STEP_RUN_NUMBER" \
--env=BITBUCKET_BUILD_NUMBER="$BITBUCKET_BUILD_NUMBER" \
--env=BITBUCKET_GIT_SSH_ORIGIN="$BITBUCKET_GIT_SSH_ORIGIN" \
--env=BITBUCKET_PIPELINE_UUID="$BITBUCKET_PIPELINE_UUID" \
--env=BITBUCKET_COMMIT="$BITBUCKET_COMMIT" \
--env=PIPELINES_JWT_TOKEN="$PIPELINES_JWT_TOKEN" \
--env=BITBUCKET_STEP_UUID="$BITBUCKET_STEP_UUID" \
--env=BITBUCKET_CLONE_DIR="$BITBUCKET_CLONE_DIR" \
--env=BITBUCKET_DOCKER_HOST_INTERNAL="$BITBUCKET_DOCKER_HOST_INTERNAL" \
--env=DOCKER_HOST="tcp://host.docker.internal:2375" \
--env=BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes" \
--env=BITBUCKET_PIPE_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/atlassian/scp-deploy" \
--env=LOCAL_PATH="${BITBUCKET_CLONE_DIR}/*" \
--env=REMOTE_PATH="/var/www/" \
--env=SERVER="$SERVER" \
--env=USER="$USER" \
--add-host="host.docker.internal:$BITBUCKET_DOCKER_HOST_INTERNAL" \
bitbucketpipelines/scp-deploy:1.2.1
Unable to find image 'bitbucketpipelines/scp-deploy:1.2.1' locally
1.2.1: Pulling from bitbucketpipelines/scp-deploy
(... layer-by-layer pull progress trimmed ...)
Digest: sha256:b9111f61b5824ca7ed1cb63689a6da55ca6d6e8985eb778c36a5dfc2ffe776a8
Status: Downloaded newer image for bitbucketpipelines/scp-deploy:1.2.1
INFO: Configuring ssh with default ssh key.
INFO: Adding known hosts...
INFO: Appending to ssh config file private key path
INFO: Applied file permissions to ssh directory.
ssh: connect to host 172.18.1.28 port 22: No route to host
lost connection
✖ Deployment failed.

Could someone help me out here?

1 answer, accepted

Patrik S (Atlassian Team) May 5, 2022

Hello @Bao Le ,

Welcome to Atlassian Community!

The error message "connect to host 172.18.1.28 port 22: No route to host" usually indicates that a firewall on the destination, such as iptables, is blocking the SSH traffic.
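
If you have shell access to the destination host, one quick way to look for such a rule (a sketch, assuming iptables is in use there) would be:

# On the destination host: list the INPUT chain and look for any rule
# that drops or rejects traffic to port 22.
sudo iptables -L INPUT -n -v | grep -i -E 'drop|reject'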

To check whether the destination IP is listening on port 22, could you please execute the command below on the host where the runner is running and share the output with us?

nc -vz 172.18.1.28 22

Also, if you run the same scp command outside of the runner, directly on the host machine and against the same destination IP, does it succeed?

Let me know if you have any questions.

Thank you, @Bao Le .

Kind regards,

Patrik S

Bao Le May 5, 2022

Hi @Patrik S ,

thank you for your response.
Basically, I have already tried to SSH directly to the target host, and that succeeds.

Running your command results in the following:


nc -vz 172.18.1.28 22
172.18.1.28: inverse host lookup failed: Unknown host
(UNKNOWN) [172.18.1.28] 22 (ssh) open

When running ssh 172.18.1.28 directly from the runner, I get this:

ssh 172.18.1.28
The authenticity of host '172.18.1.28 (172.18.1.28)' can't be established.
ED25519 key fingerprint is SHA256:is3d7QUtkEBf3ATPgGmhRy4a1zsTUNZzGYEaOZE3Nes.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

This is expected, because I haven't actually added the target host to the known_hosts list yet.

Plain scp also works fine.

How can i proceed here?

Patrik S (Atlassian Team) May 5, 2022

Hello @Bao Le ,

Thank you for getting back with the results.

The nc command shows that indeed the destination server is listening to port 22.

I would like to ask you to try connecting over SSH with verbose logging, and to run scp without using the pipe, by adding the commands below to your bitbucket-pipelines.yml file before the pipe:

ssh -Tvvv $SERVER -o StrictHostKeyChecking=no
scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r -v $BITBUCKET_CLONE_DIR $USER@$SERVER:/var/www/

The above commands will temporarily disable host key checking, so you are not prompted to confirm the fingerprint.
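
For reference, a rough sketch of how the step's script could look with these debug commands placed before the pipe (same variables as in your pipeline; illustrative layout only):

          script:
            - ssh -Tvvv $SERVER -o StrictHostKeyChecking=no
            - scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r -v $BITBUCKET_CLONE_DIR $USER@$SERVER:/var/www/
            - pipe: atlassian/scp-deploy:1.2.1
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: '/var/www/'
                LOCAL_PATH: '${BITBUCKET_CLONE_DIR}/*'
                DEBUG: 'true'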

Once you add the above commands and run your pipeline again, please share the log output with us, making sure to sanitize any sensitive information.

Thank you, @Bao Le .

Kind regards,

Patrik S

Bao Le May 5, 2022

Hi @Patrik S ,

 

this is my fourth attempt, in another browser, because the editor seems to find links in my text... even though there are none.

Anyway, here is the result.

ssh -Tvvv $SERVER -o StrictHostKeyChecking=no

+ ssh -Tvvv $SERVER -o StrictHostKeyChecking=no
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 172.18.1.28 [172.18.1.28] port 22.
debug1: connect to address 172.18.1.28 port 22: No route to host
ssh: connect to host 172.18.1.28 port 22: No route to host

And here is the pipelines file:

(screenshot attachment: 01.PNG)

PS: This editor is the worst I have seen in a long time.


BR

Bao Le May 9, 2022

Hi @Patrik S 

do you have any other suggestions for how I could analyse this issue?

 

BR

Patrik S (Atlassian Team) May 9, 2022

Hello @Bao Le ,

I was able to successfully connect to a local SSH endpoint using runners in my own environment, which means runners are indeed capable of talking to the local network.

That said, the issue here seems to lie in how the docker network or firewall is configured on the host and destination machines, which appears to be blocking traffic coming from the runner container.

With that in mind, I would like to understand the following :

  • Could you try including ping and traceroute commands to the local IP in your build, to check whether at least ICMP is getting through?
    ping 172.18.1.28 -c 3
    traceroute 172.18.1.28
    If traceroute is not available in the docker image you are currently using, you can install it in the build container by adding the following line to your build script before the traceroute command:
    apt-get update && apt-get install traceroute -y
  • Is the destination address you are trying to SSH into also a docker container?
  • Can you try starting the runner with the --net host option and running the build again?
    docker container run -it --net host [ ... rest of runner's params ...] docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
  • After starting the runner on your host machine, can you share the docker inspect output of the runner's container?
    docker container ls    # list the active containers to find the ID of the runner container
    docker inspect <ID of the runner's container>

Also, I found this Stackoverflow thread with an error similar to the one you are currently receiving. I would recommend looking through the suggested solutions there to see whether any of them apply to your scenario and help fix the issue.

Thank you, @Bao Le .

Kind regards,

Patrik S

Bao Le May 9, 2022

Hi @Patrik S ,

thank you for all the effort you have already put into this topic, I really appreciate it.

Ping results:

ping 172.18.1.28 -c 3
+ ping 172.18.1.28 -c 3
PING 172.18.1.28 (172.18.1.28) 56(84) bytes of data.
From 172.18.0.1 icmp_seq=1 Destination Host Unreachable
From 172.18.0.1 icmp_seq=2 Destination Host Unreachable
From 172.18.0.1 icmp_seq=3 Destination Host Unreachable
--- 172.18.1.28 ping statistics ---
3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2048ms
pipe 3

Traceroute results:

traceroute $SERVER
+ traceroute $SERVER
traceroute to 172.18.1.28 (172.18.1.28), 30 hops max, 60 byte packets
1 172.18.0.1 (172.18.0.1) 3051.951 ms !H 3051.909 ms !H 3051.890 ms !H

 

I find it odd that it (seemingly) chooses the 172.18.0.1 IP for itself. Where exactly does that come from? That would also explain the !H, i.e. that the host can't be reached.
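
One way to check which docker network that gateway address belongs to (a sketch, assuming docker CLI access on the runner host):

# Print the name and subnet of every docker network; 172.18.0.0/16 would
# typically belong to a bridge network whose gateway is 172.18.0.1.
for net in $(docker network ls -q); do
  docker network inspect $net --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
done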


The target host is not a docker container; in fact, it is the host of the runner's docker container. Meaning: the docker runner is trying to connect to its own host. I have already tried 127.0.0.1, which only results in connection refused:


Status: Downloaded newer image for bitbucketpipelines/scp-deploy:1.2.1
INFO: Configuring ssh with default ssh key.
INFO: Adding known hosts...
INFO: Appending to ssh config file private key path
INFO: Applied file permissions to ssh directory.
ssh: connect to host 127.0.0.1 port 22: Connection refused
lost connection
 

 

Here is the docker inspect output:


[
{
"Id": "f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64",
"Created": "2022-05-09T00:05:48.10808791Z",
"Path": "/bin/sh",
"Args": [
"-c",
"-x",
"./entrypoint.sh"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 94229,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-05-09T00:05:48.651389748Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:77088f9c20f7d196f167e9b7567c5840a0836bdeed93f5371aaea157e1c20a2f",
"ResolvConfPath": "/var/lib/docker/containers/f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64/hostname",
"HostsPath": "/var/lib/docker/containers/f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64/hosts",
"LogPath": "/var/lib/docker/containers/f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64/f700af392f60be262a46d6a87a58b1e83571c01ee546b1f4719da4ba0481eb64-json.log",
"Name": "/runner-040fb2c2-8da2-53bf-a8bb-e5b0c4684230",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/tmp:/tmp",
"/var/run/docker.sock:/var/run/docker.sock",
"/var/lib/docker/containers:/var/lib/docker/containers:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/71e2e833cba6440eef5863d7682658430a1892d6fae1b98d01b9068778e6d17f-init/diff:/var/lib/docker/overlay2/7afb535c0a0be01ff952577832247aefb9e8a7eeefc1b4e8ebc4f605e0111ab1/diff:/var/lib/dock
er/overlay2/6019bafb28c5eec364ecfb8347aab05da31f41d988b463663ef2e7b3fc26099d/diff:/var/lib/docker/overlay2/8af1b7cdb473dd8094c813002719c78c06626c848136472d3d667f7ac97feb4f/diff:/var/lib/docker/overlay2/b36c53129b37b1e23209f3ab83a1eadd461
14a4f30a923be9117c94d360fc31b/diff:/var/lib/docker/overlay2/652c567bfb352fc4b9600c72682ab1bf90c962db23bcf4c50982de37a5438ab9/diff:/var/lib/docker/overlay2/052ba41489235995862f2cd153ad9ca2255fc4fd4ef2478604c5a5df71f41b7a/diff:/var/lib/doc
ker/overlay2/ed094570502cb3ed8e9394e32db2863ad7785fae64d394d0ac66f9609880fb77/diff:/var/lib/docker/overlay2/b1b950781f38186d5d5f1d998a969c4b9292cd129e29e4f83d6df749fa78f019/diff",
"MergedDir": "/var/lib/docker/overlay2/71e2e833cba6440eef5863d7682658430a1892d6fae1b98d01b9068778e6d17f/merged",
"UpperDir": "/var/lib/docker/overlay2/71e2e833cba6440eef5863d7682658430a1892d6fae1b98d01b9068778e6d17f/diff",
"WorkDir": "/var/lib/docker/overlay2/71e2e833cba6440eef5863d7682658430a1892d6fae1b98d01b9068778e6d17f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run/docker.sock",
"Destination": "/var/run/docker.sock",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/var/lib/docker/containers",
"Destination": "/var/lib/docker/containers",
"Mode": "ro",
"RW": false,
"Propagation": "rslave"
},
{
"Type": "bind",
"Source": "/tmp",
"Destination": "/tmp",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "f700af392f60",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"ACCOUNT_UUID=<Hidden sensitive information>",
"REPOSITORY_UUID=<Hidden sensitive information>",
"RUNNER_UUID=<Hidden sensitive information>",
"RUNTIME_PREREQUISITES_ENABLED=true",
"OAUTH_CLIENT_ID=<Hidden sensitive information>",
"OAUTH_CLIENT_SECRET=<Hidden sensitive information>",
"WORKING_DIRECTORY=/tmp",
"PATH=/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"JAVA_HOME=/usr/local/openjdk-11",
"LANG=C.UTF-8",
"JAVA_VERSION=11.0.15",
"ENVIRONMENT=PRODUCTION",
"RUNTIME=linux-docker",
"DOCKER_URI=unix:///var/run/docker.sock",
"SCHEDULED_STATE_UPDATE_INITIAL_DELAY_SECONDS=0",
"SCHEDULED_STATE_UPDATE_PERIOD_SECONDS=30"
],
"Cmd": null,
"Image": "docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1",
"Volumes": null,
"WorkingDir": "/opt/atlassian/pipelines/runner",
"Entrypoint": [
"/bin/sh",
"-c",
"-x",
"./entrypoint.sh"
],
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "0454fd958b3bb99d74e46b0a23a8ea609b100bd523bc574db913f6d42860c1d5",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/0454fd958b3b",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "78efe9f79b47918d04ff36773cc0da1f8e5925bf1ad26de756ecb6c59f864437",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "a9aabd3a6294a13311335d79224f59915812a9a6d32883b242255614fa5e5679",
"EndpointID": "78efe9f79b47918d04ff36773cc0da1f8e5925bf1ad26de756ecb6c59f864437",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]

 Thanks a lot.

Patrik S (Atlassian Team) May 10, 2022

Hello @Bao Le ,

Thank you for providing the log outputs.

From the details you have shared, I think what is happening is that your host's IP address falls inside the range of your docker bridge network addresses, which causes a routing conflict, so you are not able to reach the host's IP.

When you run a new container, docker will by default attach it to an isolated bridge network. This is a private network in which containers can reach each other, and the default IP range for bridge networks starts at 172.17.*.* (subnet 172.17.0.0/16, gateway 172.17.0.1), with additional bridges taking the following /16 blocks (172.18.*.*, 172.19.*.*, and so on).

What seems to be happening here is that your host's IP address (172.18.1.28, where you are executing the runner) falls inside the 172.18.*.* subnet that docker assigned to one of these bridge networks (its gateway is the 172.18.0.1 you saw in the traceroute). So when you try to connect to the host's IP, docker treats it as a container-internal address and does not route it through the default gateway, causing the No route to host error.
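
To see how this plays out inside the build container (a sketch; the output lines in the comments are illustrative, not from your environment):

# Inside the build container, the route table claims the whole /16 as
# directly attached, so 172.18.1.28 is never sent to a real gateway:
ip route
# default via 172.18.0.1 dev eth0
# 172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2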

To fix this, I would suggest changing the default bridge IP range (docker subnet) of your docker environment to a range that does not conflict with the host's, for example 172.26.*.*.

This can be done by editing the daemon.json file and adding the field "bip": "172.26.0.1/16". This file is in a different location depending on your operating system:

  • Linux: /etc/docker/daemon.json
  • Windows: C:\ProgramData\docker\config\daemon.json
  • macOS: go to the whale in the taskbar > Preferences > Daemon > Advanced.

Source: https://docs.docker.com/config/daemon/#configure-the-docker-daemon

The daemon.json file would look like this after editing:

{
  ...
  "bip": "172.26.0.1/16"
}

After including the bip attribute in the file, you will need to restart the docker service:

sudo systemctl restart docker

Reference: https://medium.com/codebrace/understanding-docker-networks-and-resolving-conflict-with-docker-subnet-ip-range-bfaad092a7ea
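
After the restart, one quick way to verify that the new range took effect (a sketch, assuming Linux with the default docker0 bridge):

# docker0 should now carry the gateway address from the new bip range,
# e.g. 172.26.0.1/16 instead of an address in 172.17/172.18.
ip -4 addr show docker0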

Then, you can try running your build again and check if the issue is fixed and if the runner is able to connect to the host's IP address.

Hope that helps! Let me know if you run into any issues.

Thank you @Bao Le !

Kind regards,

Patrik S

Bao Le May 10, 2022

Hi @Patrik S 

thank you a lot, that was it! Is there a way I can buy you a coffee? :)

BR

Patrik S (Atlassian Team) May 10, 2022

Hello @Bao Le ,

Awesome! Happy to hear that it worked and glad I was able to help :)

Please feel free to reach out to our community whenever you need any help!

Kind regards,

Patrik S
