
FishEye slow and killing Crowd: Java heap space


I'm trying to use FishEye-2.6.1 with Crowd-2.2.2, but FishEye is painfully slow.

Crowd's log endlessly outputs this:

Jul 6, 2011 11:53:44 AM com.sun.jersey.server.impl.application.WebApplicationImpl onException
SEVERE: Internal server error
java.lang.OutOfMemoryError: Java heap space

apache-tomcat/bin/ contains this:

JAVA_OPTS="-Xms2048m -Xmx3500m -XX:MaxPermSize=512m -Dfile.encoding=UTF-8 $JAVA_OPTS"

Server has 4GB of RAM, shows 2200M free.
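A quick sanity check is to compare the configured -Xmx against the RAM actually free; a heap larger than free memory can spill into swap. This is a minimal sketch using the values quoted above (the 2200 MB free figure is taken from the question, not measured):

```shell
#!/bin/sh
# Sketch: does the configured max heap fit in free RAM?
# JAVA_OPTS copied from the question above.
JAVA_OPTS="-Xms2048m -Xmx3500m -XX:MaxPermSize=512m -Dfile.encoding=UTF-8"

# Extract the -Xmx value in megabytes.
xmx_mb=$(printf '%s\n' "$JAVA_OPTS" | grep -oE -- '-Xmx[0-9]+m' | tr -dc '0-9')
echo "Configured max heap: ${xmx_mb} MB"

free_mb=2200   # free RAM reported in the question; on Linux, get it live with: free -m
if [ "$xmx_mb" -gt "$free_mb" ]; then
  echo "WARNING: -Xmx (${xmx_mb} MB) exceeds free RAM (${free_mb} MB); the heap can end up in swap"
fi
```

With the numbers from this thread the check fires: 3500 MB of heap against 2200 MB free is exactly the kind of mismatch the answers below warn about.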

I don't know what to do to get it working.

Edit: The problem happens when I try to use FishEye-2.6.1

The FishEye log endlessly repeats this exception:

2011-07-06 14:15:47,126 WARN - Problem communicating with Crowd
com.atlassian.crowd.exception.OperationFailedException: Java heap space
at com.cenqua.fisheye.user.crowd.CrowdAuth$
at com.cenqua.fisheye.user.crowd.CrowdAuth$

2011-07-06 16:51:23,582 WARN [btpool0-4 ] CrowdAuth-getGroupsForUser - Problem communicating with Crowd
com.atlassian.crowd.exception.OperationFailedException: Java heap space
at com.cenqua.fisheye.user.crowd.CrowdAuth$

6 answers

1 accepted

1 vote
Answer accepted

I found out the problem with the help of the Atlassian Support. Thanks to these great people: Rene, Zed, Ajay and Renan.

I think my problem is related to the huge amount of groups and people we have.

The resolution is very simple:

  • The clocks must be synchronized; using the same NTP server is mandatory. If the time difference between the applications drifts more than 200 ms, the problem will appear
  • The Crowd cache must be enabled, otherwise Crowd won't be able to handle all the groups/users
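To keep the clocks aligned, both hosts can point at the same NTP server. A minimal /etc/ntp.conf fragment might look like the following (the server name is a placeholder, not from the original thread):

```
# /etc/ntp.conf on BOTH the FishEye and the Crowd host
server ntp.example.com iburst    # same NTP server on both machines
driftfile /var/lib/ntp/ntp.drift
```

After restarting ntpd, `ntpq -p` shows the current offset per peer; given the 200 ms threshold mentioned above, the "offset" column (in milliseconds) should stay well under that on both machines.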
0 votes
David Yu Rising Star Jul 05, 2011

Maybe it's a performance bug? You ought to contact Support and see if they can help you narrow down the issue based on the error you're posting.

0 votes

OK, I assume your system has free memory available. Use jmap to take heap snapshots:

jmap -heap <pid>

jmap -permstat <pid>

jmap -histo <pid>

jmap -dump:format=b,file=myheapdump <pid>

You may use a heap-analysis tool such as HAT, or IBM's HeapAnalyzer, to see where the problem is ...

If you are experiencing a native OOM rather than a heap OOM, it will be very difficult to detect. If it's a heap problem, the jmap output above will show it.

0 votes

First of all, you should not reserve 2048 MB of RAM up front for a Java process. If you run anything else on that machine, you may run out of physical memory (you only have 4 GB of RAM plus swap), and the Java process will then refuse to allocate one of the segments in the [2048-3500] MB range.

Next, unless instructed otherwise by Atlassian, the PermGen size might be a bit too high, and I would reduce it.

Next, if the problem persists, dump the heap from the Java process and analyze it to see which class is allocating so much memory. If you are not an expert, zip the dump and send it to Atlassian for analysis.
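Following that advice, a more conservative JAVA_OPTS for this 4 GB box might look like the fragment below. The exact numbers here are illustrative assumptions, not Atlassian recommendations:

```
# apache-tomcat/bin/setenv.sh -- values are illustrative, not Atlassian-recommended
# Lower -Xms so memory is not committed up front; cap -Xmx below free RAM;
# reduce MaxPermSize per the suggestion above.
JAVA_OPTS="-Xms512m -Xmx2048m -XX:MaxPermSize=256m -Dfile.encoding=UTF-8 $JAVA_OPTS"
```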

It's a dedicated server. Only Crowd runs on this one.

Thanks for the permsize tip, was not aware of it.

and a comment:

"Painfully slow" may be due to the fact that your Java heap is actually in swap right now ...

Server is not swapping.

0 votes
You can also verify the memory settings on the system information page, under the administration console: crowd/console/secure/admin/systeminfo.action

I see no problem:

JVM Statistics
Total Memory: 3497 MB
Used Memory: 1090 MB
Free Memory: 2407 MB

If you check the process, you can see how much memory was allocated at startup, to verify whether the settings above were actually picked up. You can also find this at the beginning of the startup logs.

ps -Af | grep java should list the processes with details on Linux.
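To pick just the memory flags out of that ps output, a small filter works. This is a sketch; the sample line below is a made-up stand-in for real `ps -Af | grep java` output, not captured from the poster's server:

```shell
#!/bin/sh
# Sketch: extract the JVM memory flags from a Java process command line.
# $sample stands in for one line of `ps -Af | grep java` output (fabricated example).
sample='tomcat 1234 1 2 10:00 ? 00:05:12 /usr/bin/java -Xms2048m -Xmx3500m -XX:MaxPermSize=512m org.apache.catalina.startup.Bootstrap'

# Keep only -Xms/-Xmx/-XX:MaxPermSize= flags with a size suffix.
flags=$(printf '%s\n' "$sample" | grep -oE -- '-(Xms|Xmx|XX:MaxPermSize=)[0-9]+[mMgG]' | tr '\n' ' ')
echo "JVM memory flags: $flags"
```

If the printed flags don't match what you set in JAVA_OPTS, the settings were not picked up at startup.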
