
Runner K8s: Unable to access jarfile <OAUTH_CLIENT_ID>

Mike Clarke
I'm New Here
June 10, 2024

I'm running a self-hosted runner in Kubernetes, but it stopped working a few days ago. Today I tried to create a new runner, but it crashes with the error 'Unable to access jarfile <OAUTH_CLIENT_ID>'. I used this guide as a basis: https://support.atlassian.com/bitbucket-cloud/docs/deploying-the-docker-based-runner-on-kubernetes/

Does anyone have an idea how to solve this?

1 answer

0 votes
Patrik S
Atlassian Team
June 11, 2024

Hello @Mike Clarke ,

and welcome to the Community!

Unfortunately, I was not able to reproduce this error, but since the message seems to be thrown during the initialization of the runner container, I suspect the issue might be a corrupted deployment or image.

I was able to get a successful deployment using the following Kubernetes YAML file:

apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: runner-oauth-credentials
#      labels:
#        accountUuid: # Add your account uuid without curly braces to optionally allow finding the secret for an account
#        repositoryUuid: # Add your repository uuid without curly braces to optionally allow finding the secret for a repository
#        runnerUuid: # Add your runner uuid without curly braces to optionally allow finding the secret for a particular runner
    data:
      oauthClientId: "abcd1234abcd1234" # base64 encoded OAUTH_CLIENT_ID found in runner's setup command
      oauthClientSecret: "qwerrt==1234qwert1234" # base64 encoded OAUTH_CLIENT_SECRET found in runner's setup command
  - apiVersion: batch/v1
    kind: Job
    metadata:
      name: runner
    spec:
      template:
#        metadata:
#          labels:
#            accountUuid: # Add your account uuid without curly braces to optionally allow finding the pods for an account
#            repositoryUuid: # Add your repository uuid without curly braces to optionally allow finding the pods for a repository
#            runnerUuid: # Add your runner uuid without curly braces to optionally allow finding the pods for a particular runner
        spec:
          containers:
            - name: runner
              image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner
              imagePullPolicy: Always
              env:
                - name: ACCOUNT_UUID
                  value: "{abc1234-abcd-abc123-abcd-870c20dddbdb}" # The account UUID found in runner's setup command
                - name: RUNNER_UUID
                  value: "{abc1234-abcd-abc123-abcd-90853d2c20f1}" # The runner UUID found in the runner's setup command
                - name: OAUTH_CLIENT_ID
                  valueFrom:
                    secretKeyRef:
                      name: runner-oauth-credentials
                      key: oauthClientId
                - name: OAUTH_CLIENT_SECRET
                  valueFrom:
                    secretKeyRef:
                      name: runner-oauth-credentials
                      key: oauthClientSecret
                - name: WORKING_DIRECTORY
                  value: "/tmp"
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                  readOnly: true # the runner only needs to read these files never write to them
                - name: var-run
                  mountPath: /var/run
            - name: docker-in-docker
              image: docker:20.10.5-dind
              imagePullPolicy: Always
              securityContext:
                privileged: true # required to allow docker-in-docker to run; assumes the namespace you're applying this to has a pod security policy that allows privilege escalation
              volumeMounts:
                - name: tmp
                  mountPath: /tmp
                - name: docker-containers
                  mountPath: /var/lib/docker/containers
                - name: var-run
                  mountPath: /var/run
          restartPolicy: OnFailure # this allows the runner to restart locally if it was to crash
          volumes:
            - name: tmp # required to share a working directory between docker in docker and the runner
            - name: docker-containers # required to share the containers directory between docker in docker and the runner
            - name: var-run # required to share the docker socket between docker in docker and the runner
        # backoffLimit: 6 # this is the default and means it will retry up to 6 times, with exponential backoff, if it crashes before it considers itself a failure
        # completions: 1 # this is the default; the job should ideally never complete, as the runner never shuts down successfully
        # parallelism: 1 # this is the default; there should only be one instance of this particular runner
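Assuming the manifest above is saved as runner.yaml (a hypothetical filename), deploying and checking it could look like this:

```shell
# Apply the List manifest (Secret + Job) to the current namespace
kubectl apply -f runner.yaml

# Watch the runner pod start; it should reach Running, not CrashLoopBackOff
kubectl get pods -l job-name=runner -w

# Tail the runner container's logs to confirm it registers with Bitbucket
kubectl logs -f job/runner -c runner
```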

It's important to note that the ACCOUNT_UUID and RUNNER_UUID values must be wrapped in curly braces, while oauthClientId and oauthClientSecret must not be. Also, since oauthClientId and oauthClientSecret are stored in a Secret's data field, they must be base64 encoded.
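As a quick sketch, the base64 encoding for the Secret's data fields can be produced on the command line (the value here is the placeholder from the example manifest, not a real credential):

```shell
# Encode the raw OAUTH_CLIENT_ID for the Secret's data field.
# printf '%s' avoids appending a trailing newline, which would change the encoding.
printf '%s' 'abcd1234abcd1234' | base64
# -> YWJjZDEyMzRhYmNkMTIzNA==

# To double-check a value you already have, decode it back:
printf '%s' 'YWJjZDEyMzRhYmNkMTIzNA==' | base64 -d
# -> abcd1234abcd1234
```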

Also, you can see that I explicitly instructed Kubernetes to always pull the image on pod start by adding the property

 imagePullPolicy: Always

under each container's definition.

You can try to include that in your deployment as well. If you are using some external tool, such as Minikube, to deploy the runner, the tool may have its own image cache, so make sure to remove any cached images and then re-deploy your runner's pods so it pulls a fresh runner's image when spinning up the containers.
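For Minikube specifically, clearing the cached runner image and redeploying might look like the following sketch (the manifest filename runner.yaml is an assumption):

```shell
# Remove the cached runner image from Minikube's local image store
minikube image rm docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner

# Delete the existing Job and re-apply the manifest so the new pod
# pulls a fresh image (imagePullPolicy: Always) when it starts
kubectl delete job runner
kubectl apply -f runner.yaml
```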

Let us know in case you have any questions.

Thank you, @Mike Clarke !

Patrik S 
