PermGen's Code Cache Memory Pool constantly at 98%

I am not sure what the impact of this is. From what I have read, it might cause performance issues because the JVM compiler gets disabled. I am not sure whether this applies to my instance, but since it is not happening in the staging environment, I suspect a memory leak in a plugin (probably JQL Tricks, ScriptRunner or the Mercurial plugin).

I have read that one could enable code cache flushing, but Atlassian's support team says it is not necessary.

My actual question is: How do I determine what is filling up the code cache?

I have tried the following:

  • jmap -permstat
    • Seems to give me valid output of the contents of PermGen
    • Problem: it is not available for the JRE build 1.6.0_26-b03 that my JIRA 5.1.5 instance is using
  • YourKit
    • Gives me insight into the size of the code cache, but not its contents
    • Problem: it needs to hook into the process (I have to change JVM parameters)

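Another option that needs neither jmap nor a profiler hook is to read the pool's fill level from inside the JVM via the standard JMX memory pool beans. A minimal sketch (the pool is called "Code Cache" on HotSpot 6/7/8; newer JVMs with a segmented cache use names like "CodeHeap 'non-profiled nmethods'", so this matches on "Code"; the class name CodeCacheWatch is made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCacheWatch {
    // Reports used/max for every JIT code pool. Runs inside the JVM
    // (e.g. from a scheduled job), so no external attach is needed.
    public static String report() {
        StringBuilder sb = new StringBuilder();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Matches "Code Cache" (older HotSpot) and "CodeHeap ..." (newer)
            if (!pool.getName().contains("Code")) continue;
            MemoryUsage u = pool.getUsage();
            sb.append(String.format("%s: %d KB used of %d KB max%n",
                    pool.getName(), u.getUsed() / 1024, u.getMax() / 1024));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(report());
    }
}
```

This only gives sizes, not contents, but it lets you chart the fill level over time without changing JVM parameters.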
5 answers

1 accepted

1 vote
Answer accepted

Jamie has confirmed that the Groovy libraries are eating up a huge chunk of the code cache. After increasing its size on our production system, we are seeing a constant level of 68MB out of 128MB. This shows that there is no memory leak (even though we use ScriptRunner listeners and scripted fields); we were simply right at the limit of the default code cache size.

Take your heap dump and load it into the Eclipse Memory Analyzer Tool -- it should help you find your leaking classloader. You'll need at least as much memory on your analysis machine as your heap size.

Thanks. I have done heap dump analysis before, but I have no idea how to find the contents of the code cache memory pool within a heap dump with a tool like MAT or YourKit...

Thanks Radu, I have posted a reference to my question in there. I doubt that it is the same cause, but the problem seems to be related.

Hi Fabian,

Is this the KB article regarding the code cache that you've read? Apparently, when the Java code cache becomes full, the solution is to add the following parameter to the JVM's system properties:

-XX:+UseCodeCacheFlushing

Also, there is detailed information about what could cause this in this blog post.

I hope this has helped.


I have already read that blog and I am aware of the UseCodeCacheFlushing JVM parameter. However, the blog post does not explain how to find out WHAT is actually filling up the code cache. If I had a way to analyze it, I could narrow down the troublemaker (as I said, I suspect a Groovy Listener).

Any thoughts on how to tackle this?

Fabian, it's a tedious task. In this case, I would take a heap dump and navigate to all active classloaders. Then I would look at the common classloaders and navigate up to the roots of the heap. Finally, I would compare the sizes and contents of those roots across heap dumps taken at different times. That would give you an idea of what's going on there ...

Try VisualVM to do the analysis... (or any visual tool of your choice)

But PermGen is not part of the heap...

Use jvisualvm, turn on memory sampling, and filter for classes in your package, or, if it's a script, just "Script".


PermGen *IS* heap (and it is included in the heap dump), although the controlling parameters are separate. If I remember well, Java initially had no PermGen; it was added as a performance improvement and, over time, gained the status you see today. As always, there's a philosophical argument around it: keeping the classes from interacting with the "usable" part of the heap.

Ok, so in theory I should be able to determine the contents of the code cache memory pool (which is part of the PermGen space, which in turn is part of the heap space) by taking a heap dump and analyzing it with whatever tool (MAT, YourKit, jvisualvm)? Sounds like a plan.

However, I am still not sure how to determine the objects that would be stored in the code cache. I will have a look at the output of Jamie's suggestion and report the results.

You don't need to take a heap dump... using jvisualvm is really very simple; if it takes more than 5 minutes then you're probably doing something wrong. jvisualvm ships with the JDK.

@Jamie, that's the jmap -permstat output filtered for Groovy/Plugin/Script:

That's a hell of a lot of ScriptRunner stuff, just after triggering the ScriptListener about 50-60 times.

Will check it out tonight or at the weekend.

Thanks @Jamie!

Do you need me to open a ticket for you and attach my listener's code?

I am also experiencing the code cache getting close to filling up. It was set to 64MB previously (something Jamie suggested way back when to deal with a different ScriptRunner problem) and it crept up to just about full in about 1.5 weeks of uptime. I then experienced ConcurrentMarkSweep GC thrashing and had to restart. Not sure if they were related?

I raised my code cache to 80MB (clearly just delaying the problem) and am next going to try turning on -XX:+UseCodeCacheFlushing, or I may just upgrade to Java 7, which has that on by default. I also use ScriptRunner to execute a Groovy script in a transition.
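Instead of noticing the creep after an outage, you can also have the JVM flag the condition before the cache is full. A sketch using the usage-threshold support on the memory pool bean (it guards on isUsageThresholdSupported in case a given pool doesn't support thresholds; the 90% figure and the CodeCacheAlarm name are arbitrary examples):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheAlarm {
    // Arms a usage threshold at the given fraction of each code pool's max.
    // Returns true if at least one pool was found and supports thresholds.
    public static boolean arm(double fraction) {
        boolean armed = false;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (!pool.getName().contains("Code")) continue;
            if (!pool.isUsageThresholdSupported()) continue;
            long max = pool.getUsage().getMax();
            if (max < 0) continue; // max undefined for this pool
            pool.setUsageThreshold((long) (max * fraction));
            armed = true;
        }
        return armed;
    }

    public static void main(String[] args) {
        arm(0.9);
        // Simple poll; a real service would instead subscribe to the
        // MEMORY_THRESHOLD_EXCEEDED JMX notification on the pool bean.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Code") && pool.isUsageThresholdSupported()
                    && pool.isUsageThresholdExceeded()) {
                System.err.println(pool.getName() + " is above 90% - investigate loaded classes");
            }
        }
    }
}
```

That way a warning lands in the logs while the instance is still responsive, rather than after the compiler has been disabled.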

Fabian, can you watch - I'm trying to keep the info there to avoid duplication.


The problem there seems to be unrelated, because he is obviously using a different Groovy version; also, my instance has no scripted fields but a single listener instead. Shall I create another ticket for this matter?

I've looked at the pastebin and I'm not sure I'm seeing any problems - nearly all of that seems to be Groovy internal stuff.

You say the cache is at 98% - but if caches are operating well they should be at or close to 100%. Being at 98% in and of itself is not a problem. Do you eventually run out of PermGen and get an OOM? Or do you get a message about the compiler being disabled?

> then experienced a concurrentmarksweep thrashing and had to restart

May well not be related... not sure if the CMS GC is recommended for JIRA anyway.

As I said, I am not running out of memory, but that much Groovy stuff in the code cache looks awkward to me. I don't see this as a real issue; I just wonder whether a new instance is created every time my listener gets executed...

A new instance or a new class? The former, probably (I can't remember); the latter, it should not, but you can check that by printing from your listener.

Groovy is quite a big library; if you use it, some space is taken in PermGen. I don't think there is a real problem here.

I just executed 50k views on a page with a scripted field and there was no problem. If you're concerned why don't you execute your listener 50k times and see if you get any problems?


I have used this.class.dump() and afterwards fired 3 events for the listener to catch:

To my understanding it should be the same class every time, but instead I get three different ones (java.lang.Class@11c9f6ad, java.lang.Class@29664516, java.lang.Class@4f814783). Is that correct?
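Three distinct Class identities would be consistent with how script engines commonly behave: each (re)compilation defines the class in a fresh classloader, and every defined class is a new java.lang.Class occupying its own slot in PermGen (and, once JIT-compiled, in the code cache). A plain-Java sketch of the effect, with no Groovy involved (the IsolatingLoader and Payload names are invented for illustration):

```java
import java.io.InputStream;

public class ReloadDemo {
    // A loader that defines the target class itself instead of delegating
    // to its parent, mimicking a script engine that gives every compiled
    // script its own classloader.
    public static class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals("ReloadDemo$Payload")) {
                return super.loadClass(name, resolve); // delegate everything else
            }
            try (InputStream in = ReloadDemo.class.getResourceAsStream("/ReloadDemo$Payload.class")) {
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static class Payload {}

    public static void main(String[] args) throws Exception {
        Class<?> a = new IsolatingLoader().loadClass("ReloadDemo$Payload");
        Class<?> b = new IsolatingLoader().loadClass("ReloadDemo$Payload");
        System.out.println(a == b); // prints "false": same bytes, two Class objects
    }
}
```

If the listener class were cached and reused, you would see the same identity every time; three different identities suggests three separate definitions, which matches the code cache growth you observed.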

I recently ran a bulk transition with 280 issues, which would have triggered the listener 280 times. The code cache filled up and the instance became unresponsive (no OOM, however)...

OK, that is a real problem... can you create a new bug with your logs, the listener you're using, version information, etc.?
