FishEye slow and killing Crowd: Java heap space

SamK July 5, 2011

Hi,

I'm trying to use FishEye-2.6.1 with Crowd-2.2.2, but FishEye is painfully slow.

Crowd's logs endlessly output this:

Jul 6, 2011 11:53:44 AM com.sun.jersey.server.impl.application.WebApplicationImpl onException
SEVERE: Internal server error
java.lang.OutOfMemoryError: Java heap space

apache-tomcat/bin/setenv.sh contains this:

JAVA_OPTS="-Xms2048m -Xmx3500m -XX:MaxPermSize=512m -Dfile.encoding=UTF-8 $JAVA_OPTS"

The server has 4 GB of RAM and shows 2200 MB free.

I don't know what to do to get it working.

Edit: The problem happens when I try to use FishEye-2.6.1

FishEye's logs endlessly drop this exception:

2011-07-06 14:15:47,126 WARN - Problem communicating with Crowd
com.atlassian.crowd.exception.OperationFailedException: Java heap space
at com.atlassian.crowd.integration.rest.service.RestCrowdClient.handleCommonExceptions(RestCrowdClient.java:1084)
at com.atlassian.crowd.integration.rest.service.RestCrowdClient.getNamesOfGroupsForNestedUser(RestCrowdClient.java:741)
at com.cenqua.fisheye.user.crowd.CrowdAuth$8.call(CrowdAuth.java:443)
at com.cenqua.fisheye.user.crowd.CrowdAuth$8.call(CrowdAuth.java:439)
[etc...]



2011-07-06 16:51:23,582 WARN [btpool0-4 ] fisheye.app CrowdAuth-getGroupsForUser - Problem communicating with Crowd
com.atlassian.crowd.exception.OperationFailedException: Java heap space
at com.atlassian.crowd.integration.rest.service.RestCrowdClient.handleCommonExceptions(RestCrowdClient.java:1084)
at com.atlassian.crowd.integration.rest.service.RestCrowdClient.getNamesOfGroupsForNestedUser(RestCrowdClient.java:741)
at com.cenqua.fisheye.user.crowd.CrowdAuth$8.call(CrowdAuth.java:443)
[etc...]

6 answers

1 accepted

1 vote
Answer accepted
SamK July 27, 2011

I found the problem with the help of Atlassian Support. Thanks to these great people: Rene, Zed, Ajay and Renan.

I think my problem is related to the huge number of groups and users we have.

The resolution is very simple:

  • The clocks must be synchronized; using the same NTP server is mandatory. If the time difference drifts by more than 200 ms, problems will appear between the applications (see the quick check below).
  • The Crowd cache must be enabled, otherwise it won't be able to handle all the groups/users.
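
If it helps anyone else, here is a minimal sketch of the clock check, assuming the standard NTP client tools are installed on each host (the server name below is only an example, not our actual setup):

# Show the offset (in ms) of this host against its configured NTP servers
ntpq -p

# Query a specific NTP server without changing the clock (example server name)
ntpdate -q pool.ntp.org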
0 votes
David Yu
July 5, 2011

Maybe it's a performance bug? You ought to contact Support and see if they can help you narrow down the issue based on the error you're posting.

0 votes
Radu Dumitriu
July 5, 2011

OK, I assume that your system has free memory available. Use jmap to get heap snapshots:

jmap -heap <pid>

jmap -permstat <pid>

jmap -histo <pid>

jmap -dump:format=b,file=myheapdump <pid>

You may use HAT (or IBM's HeapAnalyzer) to see where the problem is ...
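
For example (just a sketch; the dump file name is the one from the jmap -dump command above), the binary dump can also be browsed with jhat, which ships with the JDK:

# Parse the heap dump and serve the analysis over HTTP (default port 7000)
jhat myheapdump

# Then open http://localhost:7000/ and look at the instance counts and the biggest objects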


Radu Dumitriu
July 5, 2011

If you are experiencing a native OOM rather than a heap OOM, it will be very difficult to detect. If it's a heap problem, you will see it with the commands above.

0 votes
Radu Dumitriu
July 5, 2011

First of all, you should not allocate 2048 MB for the Java process right from the start. If you run something else on that machine, you may end up out of virtual memory (you only have 4 GB of RAM plus swap); in that case, the Java process will refuse to allocate one of the segments in the 2048-3500 MB range.

Next, unless instructed otherwise by Atlassian, the MaxPermSize setting looks a bit too high, and I would reduce it.

Next, if the problem persists, you should dump the heap from the Java process and analyze it to see which class is allocating so much memory. If you are not an expert, zip it and send it to Atlassian for analysis.
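
To illustrate the previous two points, setenv.sh could be adjusted along these lines; the values are only an assumption and need tuning for your instance, and the dump path is arbitrary:

# Let the heap grow on demand instead of reserving 2 GB up front,
# and bring MaxPermSize back to a more usual value
JAVA_OPTS="-Xms512m -Xmx2048m -XX:MaxPermSize=256m -Dfile.encoding=UTF-8 $JAVA_OPTS"

# Write a heap dump automatically the next time an OutOfMemoryError occurs
JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp $JAVA_OPTS"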

SamK July 5, 2011

It's a dedicated server; only Crowd runs on this one.

Thanks for the permsize tip, I was not aware of it.

Radu Dumitriu
July 5, 2011

and a comment:

"painfully slow" may be due to the fact that your java heap is actually on swap right now ...

SamK July 5, 2011

Server is not swapping.
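
In case it helps anyone else reading this, the quickest way to verify that is with standard Linux tools (nothing Crowd-specific):

# Overall memory and swap usage in MB
free -m

# The si/so columns show pages swapped in/out per second;
# sustained non-zero values mean the box is actively swapping
vmstat 1 5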

0 votes
Colin Goudie
July 5, 2011
You can also check that the memory settings are right by looking at the system information page. It's under the admin console at crowd/console/secure/admin/systeminfo.action
SamK July 5, 2011

I see no problem:

JVM Statistics
Total Memory: 3497 MB
Used Memory: 1090 MB
Free Memory: 2407 MB

0 votes
Jobin Kuruvilla [Adaptavist]
July 5, 2011

If you check the process, you can see how much memory was allocated at startup, just to verify whether the above settings were picked up or not. You can also find it at the beginning of the startup logs.

ps -Af | grep java should list the Java processes with details on Linux.
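
Alternatively, jps -v (bundled with the JDK) prints only the JVM processes together with the flags they were started with, which makes it easy to confirm the values actually in effect:

# List JVM processes with their startup flags (look for -Xms, -Xmx and -XX:MaxPermSize)
jps -v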
