pipeline: build and push docker image with custom HTTPS root-ca-certificate?

Alexander Christian
Contributor
July 15, 2024

Hi there,

I have a hopefully not-so-uncommon issue, so hopefully someone can easily point me to a working solution (googling did not help so far...):

We have a Bitbucket repository that needs to build and push a Docker image to a Docker registry.

As the Docker registry we want to use lives on the company-local LAN, we configured a self-hosted runner in Docker style.

Pipeline yml looks like this:

image: atlassian/default-image:3

pipelines:
  custom:
    dockerbuild:
      - step:
          name: "docker build"
          runs-on:
            - self.hosted
            - linux
          script:
            # Prepare Docker image with ZScaler certificates etc.
            - |
              bash <( curl --header "Authorization: Bearer $PIPELINE_HELPER_BEARERTOKEN" -sL --url "https://api.bitbucket.org/2.0/repositories/mycompany/pipelinehelperscripts/src/HEAD/PrepareDockerImage" )
            # do docker stuff
            - docker login dockerreg.mycompany.com -u $ISE_DOCKER_USER -p $ISE_DOCKER_PASS
            - docker build -t dockerreg.mycompany.com/translationservice .
            - docker push dockerreg.mycompany.com/translationservice
          services:
            - docker

 

For security reasons, the company runs the Z-Scaler service, which hooks into every HTTPS connection with its own root CA certificate.

For "normal linux commands" like wget or curl, we need to put the Z-Scaler root CA certificate into /usr/local/share/ca-certificates/ and run "update-ca-certificates". That's what the bash/curl command in the pipeline does...
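The PrepareDockerImage helper itself isn't shown in the thread, but the standard Debian/Ubuntu approach it presumably implements looks roughly like this (the certificate filename is an assumption):

```shell
#!/bin/sh
# Sketch of a cert-install helper for Debian/Ubuntu-based images.
# "zscaler-root-ca.crt" is an assumed filename for the exported
# Z-Scaler root CA (PEM format, .crt extension required).
set -e

# Copy the CA into the directory scanned by update-ca-certificates
cp zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt

# Rebuild /etc/ssl/certs so curl, wget, git, etc. trust the new CA
update-ca-certificates
```

Note this only fixes the trust store *inside the step container*; as described below, the docker service runs in a separate container with its own trust store.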

But the docker commands totally ignore the certificates I added.
I also tried to apply the certificates to the docker-host system. 
To make it clear:
On both systems (docker host and docker container) I can use the certificates with wget, curl, ...

The docker-host system can successfully login to our registry without certificate issues. 

But at the docker-container level (run by the pipeline runner), the applied Z-Scaler root CA certificate is *NOT* recognized by any docker command.


I'm pretty sure that other people with self-signed certificates do face the same problem.
So, is there someone out there who can give me a hint?

br,

Alex

[update]
What I found out so far:
If I run "docker version" inside the pipeline, I see that docker is not using the docker server from the docker host. That would explain why my docker container does not know the root CA like my docker host does.
I found this documentation: https://support.atlassian.com/bitbucket-cloud/docs/run-docker-commands-in-bitbucket-pipelines/#Using-an-external-Docker-daemon

But: I have no clue how I can tell my pipeline to use the docker daemon from my docker-host system... 
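From the linked documentation, pointing the docker CLI at an external daemon comes down to the DOCKER_HOST environment variable. A hedged sketch of what that might look like in the step (the address and port are assumptions, and the host's dockerd would have to be configured to listen on that TCP socket, which is unencrypted and a security trade-off):

```yaml
- step:
    name: "docker build"
    runs-on:
      - self.hosted
      - linux
    script:
      # Assumed address of the docker daemon on the runner host;
      # requires dockerd to listen on tcp://0.0.0.0:2375 (no TLS!)
      - export DOCKER_HOST=tcp://172.17.0.1:2375
      - docker version   # the "Server" section should now show the host daemon
```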

1 answer

Syahrul
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
July 16, 2024

G'day, @Alexander Christian 

I believe we have had similar questions on this before: Self Hosted Pipeline Runners with self signed certificate firewall inspection

Basically, what you need to do is build a new Docker image based on the image the runners are using and modify its certificate store. This way, you can make the docker runner use your self-signed certificate.

Please check the community post where members of our team recommended a way to achieve this.

Regards
Syahrul

Alexander Christian
Contributor
July 17, 2024

Sorry, but this is not the same.
The linked issue you are talking about prepares a docker image with the certificate and installs it into java's keystore. 

I'm already able to tell my docker image to install the certificate for linux commands like curl and wget, as well as for Java. This also works at runtime, so I don't need to prepare a certificate-preinstalled docker image.

The issue I have is with "docker in docker": if I enable the docker service in the pipeline script, another docker image/container/whatever is started that provides the docker binaries as well as a docker service. And those docker binaries and that service do not know about the certificate. It seems to be out of my control.

So, how do I fix the "own certificate" issue in combination with "docker in docker" (while building/pushing a docker image within the pipeline)?

br,

Alex

Syahrul
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
July 17, 2024

Hey @Alexander Christian 

Thanks for the update.

I believe we utilize the default DIND image for the docker services. If that's the case, you can use a custom DIND image with your CA, for example:

definitions:
  services:
    docker:
      image: <your docker dind image with ca>
Note that a custom DIND image is not supported in the Linux shell runner, but that shouldn't be an issue if you are using a Linux Docker runner.
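Such a custom DIND image can be very small. A sketch of what it might look like (the certificate filename is an assumption; docker:dind is Alpine-based, so update-ca-certificates comes from the ca-certificates package):

```dockerfile
FROM docker:dind

# Assumed filename of the exported Z-Scaler root CA (PEM format, .crt extension)
COPY zscaler-root-ca.crt /usr/local/share/ca-certificates/zscaler-root-ca.crt

# ca-certificates may already be present in the base image; install to be safe,
# then rebuild the system trust store so dockerd trusts the corporate CA
RUN apk add --no-cache ca-certificates && update-ca-certificates
```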

Regards,
Syahrul

Alexander Christian
Contributor
July 18, 2024

Looks like an option. What I found out regarding the used DIND image:

The pipeline says on startup:

[Screenshot 2024-07-18 094942.png: pipeline startup log showing the DIND service image]

So for DIND, a kind of custom DIND image is used. Would be great to know what its Dockerfile looks like so that I know how it is built. But I did not find the source for it.

Would be great if you have some more details on it.

In the meantime I'll try to put my own DIND image together.

Alexander Christian
Contributor
July 18, 2024

One further question: doesn't DIND need the "--privileged" flag when started? See step 3 in: Docker Inside Docker | by Shivam kushwah | Medium

So, even if I create my own DIND image with my required CA, how can I tell the pipeline that this image needs to run in privileged mode? Or is "privileged" mode automatically applied when the "services: - docker" entry is in the yaml?

According to the pipeline documentation (Run Docker commands in Bitbucket Pipelines | Bitbucket Cloud | Atlassian Support), this flag is not allowed for security reasons. But the documentation also says:

 

These restrictions, including the restricted commands listed below, only apply to the pipelines executed on our cloud infrastructure. These restrictions don't apply to the self-hosted pipeline Runners.

 

But the documentation does not tell me how to set the privileged flag when doing DIND (which is obviously required when running docker commands in docker) on my own docker runner (i.e. not in the cloud).



Alexander Christian
Contributor
July 18, 2024

I built my own image based on the docker:dind image. I stopped playing with certificates and just configured docker's daemon.json to skip certificate checking (= insecure registry).
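For reference, the insecure-registry setting in the custom image's /etc/docker/daemon.json would look like this (this trades away TLS verification for that one registry, so it is a weaker setup than installing the CA):

```json
{
  "insecure-registries": ["dockerreg.mycompany.com"]
}
```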

Pipeline yaml starts like this:

 

image: atlassian/default-image:3 

definitions:
  services:
    docker:
      image:
        name: dockerreg.mycompany.com/mydind:latest
        username: $DOCKER_USER
        password: $DOCKER_PASS

Now I can use the docker service in the pipeline with MY docker variant... Works quite well so far. Now I'm fighting with certificate issues on a different level :-)

