Since I was not able to find much information about how to deploy Bitbucket on an OpenShift cluster, I would like to share the insights I gained during the process.
The initial setup was easy. Simply navigating the OpenShift wizard for creating a deployment configuration and referencing the Docker Hub image atlassian/bitbucket-server created most of the API objects needed for a deployment.
SSL-secured connection
OpenShift routes provide the option of edge termination. This corresponds roughly to the reverse proxy setup for Bitbucket Server, so the Bitbucket properties need to be set accordingly. This can be achieved by passing the properties as runtime environment variables. The following variables were needed to enable the HTTPS scheme with the edge termination option of the route:
- name: SERVER_PORT
- name: SERVER_SCHEME
- name: SERVER_SECURE
- name: SERVER_PROXY_PORT
- name: SERVER_PROXY_NAME
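A minimal sketch of how these variables could be set in the container spec of the deployment configuration (the values shown are illustrative assumptions; bitbucket.example.com stands in for your route's hostname):

```yaml
# Container env section of the deployment configuration (illustrative values)
env:
  - name: SERVER_PORT        # port Bitbucket listens on inside the container
    value: "7990"
  - name: SERVER_SCHEME      # scheme clients use via the route
    value: "https"
  - name: SERVER_SECURE
    value: "true"
  - name: SERVER_PROXY_PORT  # port of the edge-terminating route
    value: "443"
  - name: SERVER_PROXY_NAME  # hostname of the route
    value: "bitbucket.example.com"
```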
OpenShift Runtime User
OpenShift requires images to be written in a certain way. One particular requirement is that images need to support arbitrary user IDs (see the OpenShift documentation). The Docker image by Atlassian has a number of directories owned by the user 'bitbucket', but OpenShift starts containers with a random user ID. This causes the problem that the runtime user does not have write permission on the directories it needs to write to.
Instead of rewriting the image to change the group ownership of the directories that need to be writable by the runtime user, it is also possible to mount volumes at these directories: the arbitrary runtime user owns any directory that is mounted at runtime. So far I have identified two separate directories that need to be owned by the runtime user.
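As a sketch, mounting a persistent volume at the Bitbucket home directory could look like the fragment below (the mount path is the default home of the atlassian/bitbucket-server image; the claim name is a placeholder):

```yaml
# Pod spec fragment: mount a PVC at the Bitbucket home directory so the
# arbitrary runtime user owns it
containers:
  - name: bitbucket
    volumeMounts:
      - name: bitbucket-home
        mountPath: /var/atlassian/application-data/bitbucket
volumes:
  - name: bitbucket-home
    persistentVolumeClaim:
      claimName: bitbucket-home   # placeholder PVC name
```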
File storage type for the persistent volume
Apparently, Bitbucket and Elasticsearch have problems with the GlusterFS file system. A recommended storage type is gp2 (AWS EBS).
Java truststore for application links
1. Download the relevant certificates to a file:
openssl s_client -showcerts -connect jira.server:443 </dev/null 2>/dev/null | openssl x509 -outform PEM > jira.cert
2. Create the truststore, importing the certificate file from step 1:
keytool -import -file jira.cert -alias jira -keystore bitbucket.truststore
This command will require you to set a password, re-enter it several times, and confirm to proceed between the different import steps.
Make sure to take note of the password and add it to the environment variable JVM_SUPPORT_RECOMMENDED_ARGS as -Djavax.net.ssl.trustStorePassword=password!
3. Create a config map containing the binary:
oc create configmap bitbucket-truststore --from-file=bitbucket.truststore
4. Mount the truststore from the config map as a file. This requires adding the config map as a volume to the pod and mounting the volume into the container. Furthermore, it is necessary to provide the path to and password for the truststore via Bitbucket Server's custom Java args environment variable, JVM_SUPPORT_RECOMMENDED_ARGS.
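Putting the pieces together, the pod spec could look roughly like this (the mount path /opt/truststore is an arbitrary choice, and changeit is a placeholder for the password set in step 2):

```yaml
# Pod spec fragment: mount the truststore config map and point the JVM at it
containers:
  - name: bitbucket
    env:
      - name: JVM_SUPPORT_RECOMMENDED_ARGS
        value: >-
          -Djavax.net.ssl.trustStore=/opt/truststore/bitbucket.truststore
          -Djavax.net.ssl.trustStorePassword=changeit
    volumeMounts:
      - name: truststore
        mountPath: /opt/truststore
        readOnly: true
volumes:
  - name: truststore
    configMap:
      name: bitbucket-truststore   # created in step 3
```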
The liveness probe can use an HTTP GET request to the /status URL.
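A minimal liveness probe along those lines (the delay and period values are assumptions; tune them to your actual startup time):

```yaml
# Container spec fragment: probe Bitbucket's /status endpoint
livenessProbe:
  httpGet:
    path: /status
    port: 7990
  initialDelaySeconds: 120   # Bitbucket can take a while to start
  periodSeconds: 30
```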
Currently, I still have not managed to set up a working readiness probe. The naive probe suggested by the OpenShift UI, an HTTP GET request on container port 7990, kept the application unavailable.