Why Kubernetes?

Marek Piotrowski-Beyer September 30, 2021

Hi!

I am currently facing the question of why I should run applications such as Jira and Confluence in Kubernetes.

To be honest, I don't know much about Kubernetes, but I think I understand that the platform has advantages in terms of running microservices, fast provisioning of resources, scaling, and so on.

Which of these advantages (or others) apply to Jira and Confluence? My team and I are currently considering whether it makes sense to run these services in AWS, e.g. on EKS.

I would appreciate any opinions on this!

2 answers

3 votes
Brad Taplin September 30, 2021

/ramble on/

I have never deployed these or any apps via Kubernetes, but I have been reading up on the value and on Atlassian's young efforts to make it all work, and over a year ago I saw a sort of "Potemkin Village" demo deployment of a single JIRA instance (not Data Center, aka DC) by Praqma. That demo was impressive but left open many questions about things that might not be maintained in the same way, including database state and SSL cert integration.

Assuming that clever use of Helm plus other scripting can overcome the obstacles, such that an actual enterprise DC instance might be reliably maintained this way, my question becomes whether it is necessarily better to replace a broken car - if cars are easily and almost freely acquired - than to fix/update a slightly-borked car (the deployed app server) you already have.

The benefits of Kubernetes, in my mind, are in two areas:

First, if maintenance of many app instances can be automated this way, that should, in theory, enable subject matter experts (SMEs) to manage more instances at a time and/or do other things.

In a role before my current one I helped maintain half a dozen Atlassian apps - each with multiple prod and nonprod deployed configs - on close to a hundred servers. That was getting old and begging for some automation, though every deployment automation system - Ansible, Jenkins, Puppet, Chef, Kubernetes, homegrown scripting - brings its own issues. Back to fixing the car, automatically or hands-on, vs. replacing it with a new one whose config was tweaked and tested back in the shop. That prep and test happens somewhere.

Second, there may be less security risk if next to nobody, not even senior app admins, can easily get command-line privileges in prod - in part because with containers there may be almost no need, in theory. I assume Linux-based deployments here, which have been the primary focus of this containerization, though Microsoft has done the same with Windows, and their Azure may be a good platform for apps running on Windows or Linux. Anyhow...

It all boils down to state management and limiting access to deployed production app servers. JIRA is a finicky, very configurable beast in any large setting and may need a lot of babysitting by folks like us unless things are standardized, automated, and locked down. All those elements are prerequisites to a successful Kubernetes environment.

One wrinkle is that new JIRA and Confluence versions come out very often, and some new releases must be deployed quickly to remediate critical just-discovered vulnerabilities. So any enterprise Kubernetes ecosystem must enable not only quick deployment but also quick revisions of what gets deployed - whole new containers and/or tweaks made in Helm or the like - which means adding a skillset or some additional advanced staffing.

To me, at its heart Kubernetes + containers is mainly about drift management. If new requirements force frequent config tweaks and troubleshooting on deployed boxes, the benefits are debatable. In my experience, random drift in config files is rarely the problem; rather, external factors like changing usage, environments, and/or new vulnerabilities and requirements demand tweaking XYZ. Whether you tweak XYZ on the deployed box or upstream in a container or delivery toolset depends on skills, infrastructure, and proven methods.

My employer is really into Kubernetes now, at least for some apps, though maybe even more committed - in public - to moving apps to the public clouds of Google and Microsoft. For various reasons AWS is not one of our targets, which eliminates the most popular IaaS target for JIRA, but Azure could work. And Kubernetes orchestration of Atlassian app maintenance in Azure (or AWS) is possible. Whether that's a good IaaS model is TBD.

Atlassian themselves will strongly favor just moving your workload to their own cloud servers, and much of the new dev work is cloud-first (their cloud, not IaaS), with Data Center maybe second for those of us with huge investments, and Data Center on IaaS a growing but somewhat tertiary supported option. How Kubernetes fits either on-prem DC or DC on your cloud-based virtuals is very much TBD, but it will take a lot of elbow grease, expertise to come, and some faith. I for one prefer simpler solutions but must learn whatever we are told we will have to understand.

There are no silver bullets, only ways to improve. You pick what and how you automate.

/ramble off/

Marek Piotrowski-Beyer October 3, 2021

I also want to say thank you, Brad, for your thoughts and facts on the subject.

The point that appealed to me the most was that not just anyone with "rights" can fiddle around with the system at will, and that all changes take place only in the container. This does mean appropriate knowledge is necessary. How would I be able to lock down system changes in Jira, for example?

Brad Taplin October 4, 2021

Marek, first see my email below.

The "locking" of system changes amounts to first strictly controlling who can get command-line access to root and the account running the app. Relatively simple Linux controls. If an admin cannot even su into the account running the app, that app won't be touched in that way.

An alternative would be for an organization to enforce a strict break-glass scenario, in which rights are only granted for a limited time and purpose to an authorized admin to fix something if/when Kubernetes cannot do so, or cannot do so fast enough. But the idea would be to make that very rare.

Note that I am not advocating this as a best approach. Where everything works as designed it might be, but everywhere I have worked over three decades, systems occasionally misbehave or break. I'd expect the same with Kubernetes unless it is maintained perfectly and has ample hardware, staffing, and resources to accommodate not only initial needs but also potential growth and change.

That is, the more you rely on central services, the more reliable those had better be.

2 votes
amit raj September 30, 2021

I will take a step back and answer why we containerize applications (JIRA and Confluence in this case).

1- Containers are easy to install (actually, you just deploy a container image). The advantage is that you don't have to manually install the application at all; you just run the image provided by Atlassian.

2- Since you have a single image, you can run replicas of it very easily; there is no need to provision a VM, install the application again, and add it to the cluster. The effort of provisioning and maintaining a VM is saved.

3- Ease of scaling - you can easily scale the application, as you just need to increase the replica count in K8s (multiple copies of the same container image); see the sketch below this list.

4- Ease of upgrade - you simply need to update a single image and deploy it; you don't have to manually go to each VM and upgrade each node.

5- Infra as Code - you can probably write Terraform code for your EKS Atlassian cluster, so any changes to the infra go through a code review process.
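To make points 3 and 4 concrete, here is a minimal sketch using the official Kubernetes Python client. It assumes a Deployment named "jira" already exists in a namespace "atlassian"; the names and the image tag are made up for illustration, not taken from Atlassian's charts.

```python
# Minimal sketch: scale and upgrade an assumed Deployment "jira" in an assumed
# namespace "atlassian" with the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

# Point 3 - ease of scaling: raise the replica count and Kubernetes starts the extra pods.
apps.patch_namespaced_deployment_scale(
    name="jira",
    namespace="atlassian",
    body={"spec": {"replicas": 3}},
)

# Point 4 - ease of upgrade: change the image tag and Kubernetes rolls the pods
# over to the new version; nobody logs in to a VM. The tag below is illustrative.
apps.patch_namespaced_deployment(
    name="jira",
    namespace="atlassian",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "jira", "image": "atlassian/jira-software:9.4.0"}
    ]}}}},
)
```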

I am certainly missing a lot of other advantages; if you google the advantages of Kubernetes and of EKS you will find many more.

Thank you

Charlie Misonne
Community Leader
September 30, 2021

And the advantages only increase in a multi-node Data Center setup. With a couple of clicks/commands you can add additional nodes to your cluster, or even automate it; see the sketch below.
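As a sketch of the "automate it" part: assuming the nodes run as a Deployment named "jira" in a namespace "atlassian" (hypothetical names) and that the application tolerates nodes joining and leaving on their own, a HorizontalPodAutoscaler can add and remove nodes based on load.

```python
# Minimal sketch of automated node scaling with a HorizontalPodAutoscaler, created
# via the official Kubernetes Python client. All names and thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="jira-hpa", namespace="atlassian"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="jira"
        ),
        min_replicas=2,                        # never fewer than two nodes
        max_replicas=6,                        # cap the cluster size
        target_cpu_utilization_percentage=70,  # add a node when average CPU exceeds 70%
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="atlassian", body=hpa)
```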

Marek Piotrowski-Beyer October 3, 2021

Thank you so much for your points Amit!

I still have a few questions to make sure I understand correctly.


When I make changes, for example to Jira, do I always make these changes in the image? If so, do I just roll out the customized image and the application is updated and back live immediately?

The same use case should apply to apps, right?

Brad Taplin October 4, 2021

Marek, the idea of Kubernetes - at least as explained to me - is to eliminate work "under the hood" as fully as is possible or practical. In short, changes are made either to the image itself or as changes pushed to a deployed image; they are first fully tested/validated in nonprod (one would hope) and then delivered to the target environment with no actual touching of the target.

One still had better test upstream. For minor config changes, Helm can be used to create or modify simple code that pushes configuration changes to a previously deployed container, e.g. to modify some setting.

The devil is in the details. The mechanisms - both for allocating containers and for controlling those additional tweaks via Helm (not on targets) - must be both fast and bulletproof, or admins will find it tempting or necessary to bypass the system.
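As a hypothetical example of such an upstream tweak: if Confluence were installed from a Helm chart as a release named "confluence", a single setting could be changed and rolled out with a helm upgrade, without anyone touching the running pods by hand. The release name, chart reference, and value key below are assumptions for illustration, not taken from any particular chart.

```python
# Minimal sketch: push one configuration tweak upstream via Helm rather than
# editing anything on the deployed target. Requires helm on the PATH; the
# release name, chart reference, and value key are hypothetical.
import subprocess

subprocess.run(
    [
        "helm", "upgrade", "confluence",        # hypothetical release name
        "atlassian-data-center/confluence",     # assumed chart reference
        "--namespace", "atlassian",
        "--reuse-values",                       # keep all other deployed values as-is
        "--set", "confluence.jvmArgs=-Xmx2g",   # the single setting being changed (hypothetical key)
    ],
    check=True,
)
```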
