Hello,
I am trying to understand how customized pipelines work. The problem I am facing is that I am unable to set the access credentials for my private Docker registry in the pipe.yml file.
I have my pipe built in a `pipeline-alfa` repository. The pipe.yml contains:
```yaml
name: My pipeline-alfa
image:
  name: registry.my-registry.com/pipeline-alfa:latest
  username: basic-auth-user
  password: ***basic-auth-pass****
  email: user@foo.com
category: Utilities
description: Showing how easy it is to make pipes for Bitbucket Pipelines.
variables:
  - name: ENV_NAME
    default: 'myProduction'
repository: https://bitbucket.org/foo/pipeline-alfa
maintainer:
  name: user
  website: https://foo.com
  email: contact@foo.com
vendor:
  name: Demo
  website: https://example.com/
  email: contact@example.com
tags:
  - helloworld
  - example
```
I am running this pipeline from my `repository-main` as follows:
```yaml
...
'**':
  - step:
      script:
        - pipe: foo/pipeline-alfa:1.0.6
```
I can see that this works as expected. The pipe.yml from the pipeline-alfa repository is picked up, and on my private Docker registry host I can see (nginx access log):
```
xx.xx.xx.xx - - [27/Jul/2022:19:15:02 -0400] "GET /v2/ HTTP/1.1" 401 188 "-" "docker/20.10.15 go/go1.17.9 git-commit/4122ba1 kernel/5.10.101 os/linux arch/amd64 UpstreamClient(Docker-Client/20.10.15 \x5C(linux\x5C))"
```
The Bitbucket pipeline run log says:
```
...
Unable to find image 'registry.my-registry.com/pipeline-alfa:latest' locally
docker: Error response from daemon: Head "https://registry.my-registry.com/v2/pipeline-alfa/manifests/latest": no basic auth credentials.
See 'docker run --help'.
```
- Why are the username and password not provided to the Docker registry?
- Are private registries not supported?
Related documents:
https://support.atlassian.com/bitbucket-cloud/docs/use-docker-images-as-build-environments/
https://support.atlassian.com/bitbucket-cloud/docs/write-a-pipe-for-bitbucket-pipelines/
@szumak hi. Thanks for your question.
Please check this guide on how to use private Docker images for steps in bitbucket-pipelines.yml. As a workaround, you can unwrap your logic from the pipe and put it explicitly into the yml config.
On our side, we will check whether it's possible to use private Docker images in pipes and will notify you.
Regards, Igor
Hello Igor,
Thank you for your answer. The document you are suggesting is the one I used to build my configuration. The section 'Images hosted on other registries' shows the following format for Docker registry authentication:
```yaml
image:
  name: docker.your-company-name.com/account-name/openjdk:8
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL
```
But it doesn't work in my case.
@szumak I think in your case it's not working because you are trying to run a pipe in bitbucket-pipelines.yml:
```yaml
script:
  - pipe: foo/pipeline-alfa:1.0.6
```
What I suggest is to not use a pipe, because for now pipes are not officially supported with private Docker images, but to unwrap your logic from the pipe and use it directly in the bitbucket-pipelines.yml config file where you run your pipeline:
```yaml
image:
  name: docker.your-company-name.com/account-name/openjdk:8
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL

test: &test
  step:
    name: test
    script:
      - <your logic here. Not the pipe. For example: echo test>

pipelines:
  default:
    - <<: *test
```
Please, check if this works for you.
Regards, Igor.
Thank you Igor,
Here is the solution I found:
0. Variables
We need to set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION in the repository or workspace variables.
1. Pipeline image
We need to prepare an image that will log us into ECR, i.e.:
```dockerfile
...
COPY ecr_login.sh /usr/local/bin/ecr_login.sh
RUN echo "/usr/local/bin/ecr_login.sh" >> ~/.bashrc
```
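For reference, a minimal sketch of what such a Dockerfile could look like in full. The base image here is an assumption (any image with the AWS CLI, bash, and your build tooling would do), not something taken from the thread:

```dockerfile
# Hypothetical base image; anything with the AWS CLI and bash works.
FROM amazon/aws-cli:latest

# Copy the login script and source it from ~/.bashrc, as described above,
# so every login shell in the build container authenticates against ECR.
COPY ecr_login.sh /usr/local/bin/ecr_login.sh
RUN chmod +x /usr/local/bin/ecr_login.sh \
    && echo "/usr/local/bin/ecr_login.sh" >> ~/.bashrc

ENTRYPOINT ["/bin/bash"]
```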
The ecr_login.sh script will contain something like:
```shell
PASSWORD=$(aws ecr get-login-password)
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
echo "$PASSWORD" | docker login --password-stdin -u AWS ${ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com
```
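As a sanity check, the registry hostname that ecr_login.sh logs into is just composed from the account ID and region. A minimal sketch with made-up placeholder values (the account ID and region below are hypothetical, not from the thread):

```shell
#!/bin/sh
# Hypothetical stand-ins for the `aws sts get-caller-identity` output and
# the AWS_DEFAULT_REGION repository variable.
ACCOUNT_ID="123456789012"
AWS_DEFAULT_REGION="us-east-1"

# Same hostname construction as in ecr_login.sh
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
echo "$REGISTRY"
```

This is the hostname `docker login` must target; if the region variable is unset, the hostname is malformed and the login silently targets the wrong host.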
When Bitbucket runs the container it provides:

```
--volume=/usr/local/bin/docker:/usr/local/bin/docker:ro \
```

which is why you don't have to worry about the docker command being available.
Build it and push it into ECR. I am going to call it YOUR_PRIVATE_IMAGE_FROM_PT1:latest.
2. Main repository
This is our repository where the pipe is called:
```yaml
image:
  name: YOUR_PRIVATE_IMAGE_FROM_PT1:latest
  aws:
    access-key: $AWS_ACCESS_KEY_ID
    secret-key: $AWS_SECRET_ACCESS_KEY
```
YOUR_PRIVATE_IMAGE_FROM_PT1:latest will be pulled from ECR successfully; there is no problem accessing ECR here.
Next, in that same bitbucket-pipelines.yml file we have:
```yaml
branches:
  dev:
    - step:
        script:
          - pipe: my/pipe-repository:master
            variables:
              <<: *some-pipe-variables
```
3. Pipe repository
Here we will have our pipe.yml file with the following code:
```yaml
name: My pipeline-alfa
image:
  name: YOUR_PRIVATE_PIPE_IMAGE_ON_ECR:latest
```
Here there is no need for AWS credentials because, first, they would be ignored, and second, we are already logged into the Docker registry by the image we built in step 1 during the bitbucket-pipelines.yml execution from the main repository (pt. 2).