We have a Kubernetes cluster in Azure (AKS), with Confluence and Bitbucket deployed using the Atlassian Helm charts.
We are security hardening the pods in our cluster.
Running the following command shows that the Bitbucket and Confluence pods violate the PodSecurity enforcement:
kubectl label --dry-run=server --overwrite ns --all pod-security.kubernetes.io/enforce=restricted
The result of the above is as follows:
Warning: bitbucket-0 (and 4 other pods): allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, seccompProfile
Warning: confluence-0 (and 1 other pod): allowPrivilegeEscalation != false, unrestricted capabilities, runAsNonRoot != true, runAsUser=0, seccompProfile
The Helm config values I have tried so far break the pod completely: it goes into an error state and will not run at all.
What Helm config values do I need to enforce pod security hardening?
Any help is highly appreciated.
Thanks in advance.
Make sure you add `runAsUser: 2002` to the securityContext.
You will also want to set https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/confluence/values.yaml#L834 to true if you are running as non-root.
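For what it's worth, a minimal sketch of the kind of values override this suggests, assuming the pod-level `confluence.securityContext` block from the Confluence chart (verify the exact keys against your chart version's values.yaml):

```yaml
# values.yaml override -- a sketch, not the complete hardening config.
confluence:
  securityContext:
    # Run as the confluence user (UID 2002 in the official images);
    # adjust if your image uses a different UID.
    runAsUser: 2002
    fsGroup: 2002
```

Applied with something like `helm upgrade confluence atlassian-data-center/confluence -f values.yaml` (release and repo names here are illustrative).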
@rughvi8784 Just keep adding the missing restrictions to securityContext. Btw, the admission controller may not like init containers either. Do you have the nfs fixer init container enabled in volumes.sharedHome? You can try setting securityContext for the entire pod (confluence.securityContext).
You need to figure out the missing parts of securityContext now that you know how to run the container as non-root.
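The warnings from the `kubectl label --dry-run` output map one-to-one onto securityContext fields required by the restricted profile. A hedged sketch of a securityContext that satisfies them (field names are standard Kubernetes; whether this goes under `confluence.containerSecurityContext` or the pod-level `confluence.securityContext` depends on your chart version):

```yaml
containerSecurityContext:
  # fixes: allowPrivilegeEscalation != false
  allowPrivilegeEscalation: false
  # fixes: unrestricted capabilities
  capabilities:
    drop:
      - ALL
  # fixes: runAsNonRoot != true / runAsUser=0
  runAsNonRoot: true
  runAsUser: 2002        # assumed UID of the confluence user in the image
  # fixes: missing seccompProfile
  seccompProfile:
    type: RuntimeDefault
```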
@Yevhen how do I set runAsNonRoot for init containers?
I can see the following config values:
- confluence.securityContext - this is set now, but the pod security violation still happens.
- confluence.containerSecurityContext - this breaks the pod.
- synchrony.securityContext - not set at the moment.
- synchrony.containerSecurityContext - not set at the moment.
Unfortunately, securityContext for init containers can't be set. However, the nfs-permission-fixer init container can be disabled (it fixes permissions on the shared home volume, and it's not always necessary).
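For reference, the fixer is toggled under the `volumes.sharedHome` section mentioned earlier; a sketch of the override (key path as in the Atlassian charts, verify against your values.yaml):

```yaml
volumes:
  sharedHome:
    nfsPermissionFixer:
      # Disable the init container that chowns the shared-home volume.
      # Only safe if the volume is already writable by the run-as user.
      enabled: false
```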
Some good news.
After applying the suggested settings, the pod security violations for Confluence are gone.
When I run the command
k label --dry-run=server --overwrite ns --all pod-security.kubernetes.io/enforce=restricted
it no longer complains about Confluence, which is what we need.
The Confluence pods are running and the application is fine; I can log into the Confluence portal and everything seems OK.
But when I look at the logs of the Confluence pod, I spotted an error.
Should I worry about this? Will it cause any issues? What does that error represent?
Thanks a lot for your answers. Jira, Confluence and Bitbucket are working as expected (the happy path is fine); we'll do more testing on these.
I am keen to understand more about nfsPermissionFixer: what is it for, and when do we need it? For now we have disabled it.
Do you think disabling it will affect pod functioning in any scenario, e.g. on EKS or an on-premise Kubernetes cluster?
Also, can you please shed some light on the performance impact of disabling nfsPermissionFixer?
@rughvi8784 this init container was added to address a permissions issue that you `might` have with NFS PVs. You can disable it, and if the products run without fatal errors (caused by an inability to write to shared home), then you're good to go.
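One hedged alternative to the fixer, if your volume plugin supports it, is to let kubelet set group ownership on the volume via `fsGroup` in the pod-level securityContext (standard Kubernetes behavior, though NFS volumes do not always honor it):

```yaml
confluence:
  securityContext:
    # kubelet applies this GID to supported volume types at mount time,
    # which can make the nfsPermissionFixer init container unnecessary.
    fsGroup: 2002
```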