pipe: atlassian/kubectl-run:1.1.2 doesn't recognize oci

My k8s cluster is hosted in Oracle Cloud. I have base64-encoded my k8s config file, but it looks like the Bitbucket pipe doesn't recognize my OCI k8s cluster.

Can you please advise?

Thank you. 

 

WARNING: "/" is not allowed in kubernetes labels. Slashes will be replaced by a dash "-" in the "bitbucket.org/bitbucket_commit" label value.
For more information you can check the official kubernetes docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
Unable to connect to the server: getting credentials: exec: exec: "oci": executable file not found in $PATH
✖ kubectl apply failed.

2 answers

@Srikanth Mamidala It looks like the kubectl CLI itself does not recognize your cluster: this error comes from the `kubectl apply` command, and we just log it in the pipe.

If you could share an example of the config file where the oci cluster is defined (not the live one; use non-existent names, but keep the structure similar to what you have), we could help you and investigate further.

For now, I suspect kubectl cannot find the `oci` executable on the PATH. This is because the OCI CLI is not in the pipe's Dockerfile, so it is simply not installed.

If you can share part of your k8s config, we can think about how to help. Perhaps we will discuss a change to install the OCI CLI so kubectl can use it.

 

However, if we don't accept such a change, you can always create a custom pipe by forking the kubectl-run pipe and installing oci in your own Dockerfile.

We have a doc for creating custom pipes: https://support.atlassian.com/bitbucket-cloud/docs/write-a-pipe-for-bitbucket-pipelines/.
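For illustration, the Dockerfile change in such a fork could be as small as one extra install line. This is a sketch, assuming the pipe's existing Python-based image is kept as-is; `oci-cli` is the pip package name Oracle publishes for its CLI:

```dockerfile
# Existing lines of the forked kubectl-run Dockerfile stay unchanged;
# the line below is an assumed addition that installs the OCI CLI,
# so kubectl's exec-based authentication can find the `oci` binary.
RUN pip install --no-cache-dir oci-cli
```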

 

Regards, Galyna

My k8s config is something like the one below, but base64-encoded and saved as the KUBE_CONFIG variable in the Bitbucket repo.

 

---
apiVersion: v1
kind: ""
clusters:
- name: cluster-cpYYYYYYY
  cluster:
    server: https://xxx.XX3.X3.XXXX
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURpVENDQW5HZ0F3SUJBZ0lSQUxneEFpc3nVDgzCmdPSkFkTVdXWjBNMExiKzduL3ZjZkRpNHIzWkorZEhoTjBqdmtBdW1oMVZhMlRYVzI2VE5XaXhvZkN1VwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
users:
- name: user-cpxaw
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    command: oci
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - ocid1.cluster.oc1.iad.a
    - --region
    - us-ashburn-1
    env: []
contexts:
- name: context-cpxa
  cluster: cluster-cpxa
  user: user-c
current-context: context-cpx
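For reference, a value like the one stored in KUBE_CONFIG can be produced and round-trip checked like this. This is a sketch with a stand-in file, not the real config; `-w 0` is the GNU coreutils flag that disables base64 line wrapping, since the pipe expects a single-line string:

```shell
# Create a throwaway stand-in for the real ~/.kube/config (illustrative content).
printf 'apiVersion: v1\nkind: Config\n' > sample_kubeconfig.yaml

# Encode without line wrapping; this string is what goes into the
# KUBE_CONFIG repository variable.
ENCODED=$(base64 -w 0 sample_kubeconfig.yaml)

# Round-trip check: decoding must reproduce the file byte-for-byte.
echo "$ENCODED" | base64 -d > decoded.yaml
diff sample_kubeconfig.yaml decoded.yaml && echo "round-trip OK"
```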

@Galyna Zholtkevych Hi, do you have any update on this request?

@Srikanth Mamidala I hope the certificate is not real. We will investigate this and discuss it within the team. Thanks.

@Galyna Zholtkevych Would you have an ETA on when Atlassian could work on this? Our Bitbucket Pipelines integration with OCI isn't working because of this issue.

@Srikanth Mamidala We discussed your case within the team. A nice approach to support this would be to allow mounting additional dependency binaries into the Docker container (for example, you could install the OCI CLI and mount the oci binary into the container).

You could vote for this ticket in the Bitbucket Pipelines Jira, https://jira.atlassian.com/browse/BCLOUD-20986, or create a new suggestion request if it does not cover your needs.

However, while it is not solved yet and is still gathering interest from customers, we can also recommend creating a custom pipe.

You can refer to this doc when writing a new pipe: https://support.atlassian.com/bitbucket-cloud/docs/write-a-pipe-for-bitbucket-pipelines/

 

For this case the pipe creation process would be simple: fork the repo, follow the contributing guides for how we make new changes, change the Dockerfile to install oci-cli in the image, and push your changes to your forked repo. If you have more questions about how to do this, contact us back.

Regards, Galyna

@Srikanth Mamidala I explained the request there since it is quite common.


@Galyna Zholtkevych I have tried to fork and build the kube image locally, and it fails with the error below. I am not sure why it's not able to find requirements.txt, although it copied it to the image, and I didn't change anything in the Dockerfile except one line to add the CLI. I am running from Git Bash (Windows).

 

#4 [stage-1 2/5] COPY requirements.txt /
#4 sha256:7e90f203d74d2322680105a54a98b5ad860afeb667f6937b08b83a5725272f1c
#4 CACHED

#5 [stage-1 3/5] RUN pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*
#5 sha256:60067c3bd4bcbd35ab7b4758389973cb27f314d60a9e302bb158c4988fcecb9b
#5 2.207 Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
#5 2.410 You are using pip version 19.0.3, however version 21.1.1 is available.
#5 2.410 You should consider upgrading via the 'pip install --upgrade pip' command.
#5 ERROR: executor failed running [/bin/bash -o pipefail -c pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 1
------
> [stage-1 3/5] RUN pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*:
------
executor failed running [/bin/bash -o pipefail -c pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 1

@Srikanth Mamidala We are refactoring our pipes to use absolute paths; this is actually a good practice.

But anyway, I've just built both versions, with absolute paths and without, and it is working.

I can say more if you share your Dockerfile.

Also, a question: did you follow the structure that the initial kubectl-run pipe proposes, or did you remove some files and reformat the file structure?

 

Regards, Galyna

Hi @Galyna Zholtkevych, I could not find the structure to run the initial pipe. Where is that defined?

No, I didn't remove or change any file structure. All I did was fork and run without editing, to be able to run the image once (in Git Bash) and explore it in interactive mode. I get the errors below.

 

- docker build -t test-image .

 

#7 15.35 Successfully built bitbucket-pipes-toolkit Cerberus PyYAML
#7 15.57 Installing collected packages: smmap, six, urllib3, python-dateutil, jmespath, idna, gitdb, docutils, chardet, certifi, websocket-client, requests, pyasn1, gitdb2, docker-pycreds, botocore, s3transfer, rsa, PyYAML, GitPython, docker, colorlog, colorama, Cerberus, bitbucket-pipes-toolkit, awscli
#7 20.95 Successfully installed Cerberus-1.2 GitPython-3.0.8 PyYAML-5.1.2 awscli-1.16.279 bitbucket-pipes-toolkit-1.14.2 botocore-1.13.15 certifi-2020.12.5 chardet-4.0.0 colorama-0.4.1 colorlog-4.0.2 docker-3.7.0 docker-pycreds-0.4.0 docutils-0.15.2 gitdb-4.0.7 gitdb2-4.0.2 idna-2.10 jmespath-0.10.0 pyasn1-0.4.8 python-dateutil-2.8.0 requests-2.25.1 rsa-3.4.2 s3transfer-0.2.1 six-1.16.0 smmap-4.0.0 urllib3-1.25.11 websocket-client-0.59.0
#7 20.95 WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
#7 21.76 Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
#7 21.77 Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
#7 21.95 Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
#7 22.24 Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
#7 27.75 Reading package lists...
#7 28.49 E: Release file for http://security.debian.org/debian-security/dists/buster/updates/InRelease is not valid yet (invalid for another 7h 45min 8s). Updates for this repository will not be applied.
#7 28.49 E: Release file for http://deb.debian.org/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 7h 17min 27s). Updates for this repository will not be applied.
#7 ERROR: executor failed running [/bin/bash -o pipefail -c pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 100
------
> [3/5] RUN pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*:
------
executor failed running [/bin/bash -o pipefail -c pip install --no-cache-dir -r requirements.txt && apt-get update && apt-get install --no-install-recommends -y apt-transport-https=1.8.* gnupg=2.* curl=7.* && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update && apt-get install --no-install-recommends -y kubectl=1.18.* && apt-get clean && rm -rf /var/lib/apt/lists/*]: exit code: 100

@Srikanth Mamidala Now it seems to be a different error.

Could you share your Dockerfile as text, so I can test it as a pure Docker image?

BTW, whatever flow you choose to deploy with, e.g. the ServiceAccount and Role/RoleBinding you were offered, you can also build it into the pipe; it will make your pipeline look much cleaner.

I am ready to help fix this.

If your repository is public, I can look at it and see what is wrong. If not, just give me read access (gzholtkevych@atlassian.com).

Regards, Galyna

Hi @Galyna Zholtkevych, I got past that error. I am able to build the image fine, but the pipe I am using is still not able to recognize oci-cli. I am unable to log in to the container to verify the installation; the only way I can verify is by using the newly built pipe in the pipeline. I have invited you with write access to my personal repo.

If you see something wrong with the Dockerfile, please feel free to make edits. What would be the command to run the container interactively and verify the image I created?

Also, the OCI CLI needs a config file at ~/.oci which stores OCI user data. I can copy that file to the root along with the private key. I didn't get to that stage yet; once oci is recognized, I thought I could add those lines to the Dockerfile.

Thank you. 

 

@Galyna Zholtkevych Please ignore my previous message. I got past that error and was able to deploy fine using the clusterbinding custom variable.

But now I am running into a strange issue: I created 2 configs for 2 clusters. One deploys fine, but the other throws the error below. It's the same build, just different variables for a different cluster. Thoughts? YAML validation is fine, as I am able to deploy it with the other config, and manually as well.

 

Status: Downloaded newer image for bitbucketpipelines/kubectl-run:2.0.0
INFO: Configuring kubeconfig...
error: error loading config file "/tmp/kube_config": yaml: line 6: could not find expected ':'
✖ spec file validation failed.

@Srikanth Mamidala

I wonder if you still use our pipe or created a fork? I see you downloaded the newest version of our kubectl-run.

 

It says

`error loading config file "/tmp/kube_config"`

so it looks like something in your config is not right.

Remember that we write the base64-decoded string to /tmp/kube_config. So the KUBE_CONFIG repository variable must contain the base64-encoded string of your kube config file (code: https://bitbucket.org/atlassian/kubectl-run/src/b1f6411337267383ebfa5d6eed00ae80664e3282/pipe/pipe.py#lines-141).

Also, I need to know which config you mean: is it the KUBECONFIG that is needed for accessing the Oracle cluster?

 

I see in the Oracle doc that you need to configure exactly that in the KUBECONFIG file (https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengaccessingclusterkubectl.htm), so I guess this is the KUBECONFIG variable.

So recheck your config; it looks like kubectl finds issues when it tries to validate it.
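One way to recheck it locally is to decode the stored value and look at the line kubectl complains about. This is a sketch; the variable value below is a placeholder, not a real kubeconfig:

```shell
# Stand-in for the repository variable's value (illustrative only).
KUBE_CONFIG=$(printf 'apiVersion: v1\nkind: Config\n' | base64 -w 0)

# Decode to the same path the pipe uses, then print the first lines with
# numbers; "yaml: line 6" in the error refers to a line in this decoded file.
echo "$KUBE_CONFIG" | base64 -d > /tmp/kube_config
cat -n /tmp/kube_config | sed -n '1,8p'
```

Stray whitespace pasted into the variable, or tab characters in the original file (tabs are illegal in YAML indentation), are common causes of this kind of parse error.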

You can also put some debug statements into the pipe. Feel free to rewrite your pipe in whatever language is easy for you to develop in, since it is your pipe; perhaps that will make it easier to debug. You could even use Bash, or Python, or any language you like; we will be happy to help with any solution you choose.

Regards, Galyna

0 votes
vandervl I'm New Here May 17, 2021

@Srikanth Mamidala - Rather than relying on the `oci` CLI and the associated `kubectl` authentication, would you consider trying the (recommended) approach of using a ServiceAccount and Role/RoleBinding for your pipeline access? A related blog here
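For readers unfamiliar with that approach, a minimal sketch of the objects involved might look like the following. All names, the namespace, and the rule list are illustrative assumptions, not taken from the blog; grant only the verbs and resources your deployment actually needs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-deployer        # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-deployer-role
  namespace: default
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-deployer-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: pipeline-deployer
  namespace: default
roleRef:
  kind: Role
  name: pipeline-deployer-role
  apiGroup: rbac.authorization.k8s.io
```

The ServiceAccount's token then goes into a kubeconfig used as `KUBE_CONFIG`, so the pipe never needs the `oci` binary at all.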

Hi @vandervl, I'll certainly give it a try if that's the recommended approach. I quickly read through the documentation, and it looks like the integration is set up between GitHub and OKE. What would be the equivalent setup for adding the secret (step 6) in Bitbucket Pipelines?

vandervl I'm New Here Jun 15, 2021

Hi @Srikanth Mamidala, are you still stuck on this at all? Were you able to get it going using a Service Account and the associated `KUBE_CONFIG`?

No @vandervl, I am all set using a service account. Thanks for your help.

