Hi,
We're trying to run our Bitbucket pipeline on a self-hosted Linux Docker runner.
One step builds a docker image and tags it.
However, we're hitting a weird issue where the image won't build because it's trying to fetch from localhost:5000 instead of docker.io.
When running docker info, for some reason there's a registry mirror listed:
Registry Mirrors:
http://localhost:5000/
We don't have registry mirrors set up on the Linux host that runs the runner, and Docker can pull images just fine using `docker pull`.
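For anyone checking the same thing, a quick way to see what the in-pipeline daemon reports is a diagnostic step along these lines (just a sketch; the step name is arbitrary):

- step:
    name: Inspect docker daemon
    services:
      - docker
    script:
      # registry mirrors the in-pipeline dockerd is configured with
      - docker info --format '{{json .RegistryConfig.Mirrors}}'
      # proxy settings the daemon has picked up, if any
      - docker info --format '{{.HTTPProxy}} {{.HTTPSProxy}}'

The failing build output: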
sha256:c67816a2442a3a7424f92eeee71dbde11a3dbaa1eb14a88aeb59a2917d9b3c39
transferring dockerfile: 391B done
DONE 0.0s
[internal] load metadata for docker.io/library/eclipse-temurin:21-alpine
sha256:acab80a4d00debad41eec0ff713dca2abaf410473edfa3322be8e7f25132c5fc
ERROR: failed to do request: Head "https://localhost:5000/v2/library/eclipse-temurin/manifests/21-alpine?ns=docker.io": dial tcp [::1]:5000: connect: connection refused
------
> [internal] load metadata for docker.io/library/eclipse-temurin:21-alpine:
------
eclipse-temurin:21-alpine: failed to do request: Head "https://localhost:5000/v2/library/eclipse-temurin/manifests/21-alpine?ns=docker.io": dial tcp [::1]:5000: connect: connection refused
Any help or ideas?
Hello @Michal Czuper,
and welcome to the Community!
The default docker daemon used by pipelines comes with a localhost registry mirror configured. However, the default registry is still docker.io (Docker Hub), as we can confirm from the output of docker system info:
Registry: https://index.docker.io/v1/
Registry Mirrors:
 http://localhost:5000/
So docker should still try to pull the images from docker.io.
I tried reproducing the same issue on my end, but even with a registry mirror configured, docker falls back to docker.io when nothing is listening at the mirror's address. In this case, I would like to ask a few more questions to investigate further. For example, are you using a custom docker service definition in your bitbucket-pipelines.yml, similar to the below?
definitions:
  services:
    docker:
      type: docker
      image: docker:dind
Let us know in case you have any questions.
Thank you, @Michal Czuper!
Patrik S
This turned out to be a proxy issue!
After a lot of testing and using "export HTTPS_PROXY" (and all its other variations), I managed to get it to work.
The Docker message is a slight red herring. The only reason it tries to go to localhost is that it fails to reach docker.io.
What still confuses me is that docker info correctly showed the proxies, and docker pull (from docker.io) worked just fine. What's even funnier is that if I did a docker pull of the failing image (in this case eclipse-temurin:21-alpine) and then ran the docker build, Docker was happy to use the previously pulled image and continue the build. (This is all without using the pipelines docker caching mechanism.)
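For anyone else stuck on this, the pre-pull workaround looked roughly like the step below (the my-app tag is just a placeholder):

- step:
    name: Build image
    services:
      - docker
    script:
      # pre-pulling first makes the subsequent build use the local copy
      - docker pull eclipse-temurin:21-alpine
      # "my-app" is a placeholder tag, not our real image name
      - docker build -t my-app:latest .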
After passing the proxies to docker run and configuring the host's config.json and daemon.json, it still wouldn't work. The only way was to use a custom docker service and specify the proxy environment variables there. Which again is confusing, since docker info showed the proxies applied with only config.json set up.
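Roughly what the custom docker service ended up looking like (the proxy address here is a placeholder, not our real one):

definitions:
  services:
    docker:
      type: docker
      image: docker:dind
      variables:
        # placeholder proxy address; substitute your own
        HTTP_PROXY: "http://proxy.example.internal:3128"
        HTTPS_PROXY: "http://proxy.example.internal:3128"
        NO_PROXY: "localhost,127.0.0.1"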
That article specifies a limitation:
- HTTP_PROXY and HTTPS_PROXY variables passed to the runner on start up are not passed through to the build container, service containers, or pipes. You can configure variables, such as repository level variables, if required.
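If I'm reading that right, the workaround for the build container itself would be to export the proxy from a repository-level variable inside the step script, something like this sketch (PROXY_URL is a hypothetical variable name):

- step:
    script:
      # PROXY_URL would be a repository-level variable
      - export HTTP_PROXY="$PROXY_URL"
      - export HTTPS_PROXY="$PROXY_URL"
      - export NO_PROXY="localhost,127.0.0.1"
      - docker build -t my-app:latest .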
My question(s) are:
- Is a pipeline step that uses the "services" tag inside it a "service container"?
- What is a "build container"?
Hey @Michal Czuper,
happy to hear you were able to narrow down the issue to a proxy configuration.
As for your latest questions, when you trigger a build in pipelines, each step in your pipeline is executed in its own container. When a step starts, it spins up a new container inside which the step's script is executed. This container is called the build container.
It's also possible to define services, such as the docker service, in the definition of your step. The services run in a separate container that shares a network adapter with the build container. This means you can communicate with the service containers from the build container. So, for example, you may have a step that executes unit tests, and the tests involve connecting to a database and executing some operations. You can include a service in the step (which is essentially a new docker container) that spins up a MySQL instance, and you will have access to it from the build container.
You can define multiple services in a step. Each service runs in its own container, sharing the network with the build container.
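To make this concrete, here's a sketch of such a step; the MySQL image, credentials, and test script below are only examples:

definitions:
  services:
    mysql:
      image: mysql:8
      variables:
        MYSQL_DATABASE: pipelines
        MYSQL_ROOT_PASSWORD: let_me_in

pipelines:
  default:
    - step:
        name: Unit tests
        services:
          - mysql
        script:
          # the mysql service shares the build container's network,
          # so it is reachable on 127.0.0.1:3306
          - ./run-tests.sh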
For more details about services and databases, the following documentation may also be of help:
I hope that information helps!
Thank you, @Michal Czuper!
Patrik S