Why are Atlassian products so memory hungry?

Here is an example from one of my systems running Crowd, Fisheye, Jira, and Confluence:

  PID  USER      PR  NI  VIRT   RES   SHR   S  %CPU  %MEM
25779  fisheye   20   0  1888m  919m   22m  S     3   7.6
25004  confluen  20   0  1292m  774m   18m  S     0   6.4
28563  jira      20   0  1114m  691m   14m  S     0   5.7
28889  crowd     20   0  1236m  381m  9584  S     0   3.2

The total reserved memory usage here is 2.7G for 4 apps that are idle with very little content. This particular system has plenty of memory to run these apps; however, I also run them in the cloud, where the primary component you are paying for right now is memory. As an example, a Rackspace cloud server with 1G of RAM runs $43/mo, which is enough to run Jira, but adding Crowd and Confluence pushes memory usage past the 2G server ($87.60/mo) and into the 4G server ($174/mo).

Now, I love Atlassian's software, particularly how friendly the company is to open source, education, small projects, and non-profits. But it seems to me that now is the time for a company-wide initiative to get this memory usage under control, so that it isn't something preventing further adoption of Atlassian products.

Thanks!

6 answers


Java apps are memory hungry; it's one of the pain points, especially in hosted/virtualized environments.

You can monitor each application's memory usage with JConsole or a similar tool. Depending on your usage, you may well find there are individual heaps that can be pruned to optimize memory use; however, don't expect to see significant savings.
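As a rough sketch of what that monitoring looks like from inside a JVM (not specific to any Atlassian product; the class name is made up for illustration), the standard java.lang.management API exposes the same heap and non-heap figures that JConsole graphs when attached:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints the heap and non-heap (PermGen/Metaspace, code cache) usage of
// the current JVM -- the same figures JConsole shows for a remote one.
public class MemorySnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.printf("heap:     used=%dM committed=%dM max=%dM%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%dM committed=%dM%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}
```

The "committed" figure is what the OS actually sees reserved, which is what the RES column in top roughly corresponds to.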

There isn't much that can be done. If you don't want the cloud cost, deploy locally and accept the cost of managing it.

I don't believe Java apps are inherently memory hungry; I personally write many server-grade Java applications that use far less memory. I also don't know what you mean by 'individual heaps that can be pruned': if I try to forcibly constrict the heap size, I risk OOM errors that will terminate the application, or massively degraded performance.

Fundamentally a lot can be done, just not by the end users. Each of these apps has over 100MB of used PermGen, which is huge, and all except Crowd are using over 200MB of heap at idle.

If you are in control of your execution environment you will likely deploy 'the minimum', and so the footprint will be small; I've written apps that can run in 64MB :) You have to pick a sweet spot for VM size. Too small, and enough concurrent users will blow the heap, as you say; however, assigning something insane like, oh, 64GB (not kidding) would give awesome performance for days or weeks at a stretch, then grind to a halt during GC. You tune your deployment according to your expected usage.

When I say 'individual heaps can be pruned', this includes PermGen, stack and heap. It's an exercise: monitor, evaluate usage, configure. Personally I don't like allocating via -Xmx alone; I always earmark the required memory at the start, ensuring I have the footprint I require and am not subject to the vagaries of the environment. If you don't expect huge changes, you can likely trim the PermGen and the maximum heap, same for JIRA and Confluence. If you are in an unknown usage situation you may not be able to tailor exactly.
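For the record, 'earmarking at the start' corresponds to setting -Xms equal to -Xmx (and, on the pre-Java-8 JVMs of that era, -XX:PermSize equal to -XX:MaxPermSize). A minimal sketch to confirm what the running JVM was actually given (the class name is illustrative):

```java
// Run with, e.g.:  java -Xms256m -Xmx256m HeapBounds
// Setting -Xms equal to -Xmx reserves the full heap up front, giving a
// fixed, predictable footprint instead of one that grows under load.
public class HeapBounds {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() is what is committed now
        System.out.printf("max heap:  %d MB%n", rt.maxMemory() >> 20);
        System.out.printf("committed: %d MB%n", rt.totalMemory() >> 20);
    }
}
```

With -Xms equal to -Xmx the two numbers match from startup onward.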

If you are in a J2EE application server, you have a greater footprint even before your app is loaded. With a web container like Tomcat that's minimised, but for applications like JIRA and Confluence, which ship a lot of libraries, there is a spider effect as the classloader resolves every loaded class's dependencies: overall you load more, so 100MB for PermGen seems viable. Consider also that JIRA's Plugins 2 framework uses isolated classloaders, which means you could install 10 plugins bundling library X, and library X's classes would be loaded 10 times over, since to the JVM they are all 'different'. This is the price paid for stability of deployed plugins and freedom from platform conflicts.
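That duplication is easy to reproduce: define the same class bytes in two parent-less classloaders and the JVM treats the results as unrelated classes, each with its own copy of the metadata in PermGen/Metaspace (the class names below are made up for the demo):

```java
import java.io.InputStream;

// Two isolated classloaders each define the same class bytes; the JVM
// treats the results as different classes, duplicating the metadata.
public class LoaderDemo {
    static class IsolatingLoader extends ClassLoader {
        IsolatingLoader() { super(null); } // no parent: forces our own definition
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try (InputStream in = LoaderDemo.class.getResourceAsStream(
                    "/" + name.replace('.', '/') + ".class")) {
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> a = new IsolatingLoader().loadClass("LoaderDemo");
        Class<?> b = new IsolatingLoader().loadClass("LoaderDemo");
        System.out.println(a == b); // false: same bytes, two copies loaded
    }
}
```

Multiply that by every library each plugin bundles and the PermGen numbers stop looking mysterious.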

On the one hand, yes, Atlassian could roll their own cut-down libraries, but then they would have to pay staff to develop and maintain them. On the other hand, end users do have the power not to install large numbers of plugins that add to PermGen, stack and heap requirements.


Let's not forget application caches. Let's not forget Lucene indexing. There's a price to pay, and good engineering means good compromises. I do not think Atlassian did such a bad job here. Andy, +1.

I've never seen a small Tomcat app in a production environment. Just saying.

You should see our OpenGROK-instance... 112mb and NOT counting :) 

If you need to know "why", just take a heap dump and look at it with jmap and jhat (or your favorite profiler). What's more interesting is what can you do in your situation - you have 3 choices:

  1. Use Atlassian On Demand - no licenses, no setup, restricted plugins, might have compliance issues (depending on your industry)
  2. Deploy on local hardware (even a beefed-up desktop will do) and accept the cost of managing the machine
  3. Deploy on hosted hardware and accept the cost of the managed hardware.

Run your calculations and pick the one that suits you best.
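On the 'why' question: the heap dump doesn't have to come from an external jmap invocation; on a HotSpot JVM you can trigger one from inside the process and then browse it with jhat or a profiler. A sketch (the file name is arbitrary):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

// Writes a heap dump of the current JVM to idle-app.hprof, equivalent to
// running `jmap -dump:live,format=b,file=idle-app.hprof <pid>` externally.
public class HeapDump {
    public static void main(String[] args) throws Exception {
        File out = new File("idle-app.hprof");
        out.delete(); // dumpHeap refuses to overwrite an existing file
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(out.getPath(), true); // live=true: GC first, reachable objects only
        System.out.println("dumped " + (out.length() >> 20) + " MB to " + out);
    }
}
```

Loading the resulting .hprof into a profiler shows exactly which caches, libraries, and indexes account for the resident footprint.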

That said, Atlassian's products are by default sized for an "average" anticipated dataset. If, as you indicate, they are mostly empty, feel free to experiment with reducing the heap sizes; just make sure you monitor for unusually large CPU usage coinciding with a sudden degradation in response times (most likely indicating that your JVM spends most of its time garbage collecting).
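One way to watch for that symptom from inside the JVM (a sketch; the collector names reported vary by JVM and GC configuration):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Reports cumulative GC counts and time for the current JVM. If total GC
// time climbs steeply after you shrink -Xmx, the heap is now too small.
public class GcReport {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Polling this periodically and alerting when GC time grows faster than wall-clock time catches the problem before users notice it.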

I agree the On Demand offering is beginning to look interesting; the problem is that it doesn't support many products (Crowd, Fisheye) yet, and it isn't clear to me how it interacts with open source/non-profit/etc. organizations.

But looking at the heaps, I don't think apps containing almost no data should be using 300-400MB of heap/PermGen at idle; if my app were doing this I'd be concerned :)

Well, with limited engineering resources, one has to decide on priorities. For any company selling a product, this could be paraphrased as "what would bring us more sales".

From the look of it, Atlassian seem to have focused on functionality, speed of development, scalability and resource utilization, in that order. The heap is used mostly for caching, to give us faster responses, and the PermGen is used by the libraries they leverage to bring us functionality in shorter timeframes. I think it's a fair tradeoff.

Yeah, it's a fair point. I just wonder how many other folks are running this software in the cloud and having to deal with the seemingly abnormally high memory usage.

Well, Confluence is a very large application with hundreds (thousands?) of features. So while I agree with you, "memory hungry" is to some degree a relative phrase.

Have you tried running SharePoint? If you give that a whirl you will think Confluence is positively lightweight :) And really, for what Confluence provides, it is lightweight. The same is true of JIRA.

The amount of RAM you need to achieve the "sweet spot" Andy mentions also depends quite a bit on your user load, I believe. I run a pretty consistently responsive Confluence server with 700MB of RAM in the cloud, but my user load is quite low.

"The total reserved memory usage here is 2.7G for 4 apps that are idle with very little content."

The 'very little content' isn't really relevant in my opinion; to my mind that's like saying a car's engine should weigh less because it's not moving... all the ability and power is still there.

Still, I DO know what you mean and feel your pain for running the apps in the cloud. For that much RAM I'd think you'd be better off with a dedicated hosted server, even something like this: http://macminicolo.net. It would probably be quite a bit more responsive than some VM environment as well.

Hi, I have to come back to this discussion: my VM has a total of 12 GB of RAM for running Jira and Confluence, yet it uses all of it and crashes a lot.

And it runs very slowly.

Are there any tips to make it more stable?

Olaf

I have a hypervisor running Atlassian applications, and if I leave them idle for a few days, their RAM usage doubles! I've had to allocate 6GB of RAM for each VM running Confluence and JIRA, with no plugins installed beyond the default configuration. This seems excessive!

Is there anybody who can comment on this issue?

