Brand new JIRA/Greenhopper installation very slow/high CPU

We've just bought and installed JIRA 5.0.3 and GreenHopper 5.10.1. JIRA runs on an Amazon EC2 instance ("m1.large", Amazon Linux 64-bit, 7.5GB memory, 2 cores), connected to an Amazon RDS instance (db.m1.small, 5GB storage). The system is connected to our internal Active Directory for authentication. The application server has the Tempo time tracking plugin (v7.1.1.1) installed. No other applications run on this server.

Our problem is that the server pegs the CPU at 100% usage, all the time, with only one or two users logged in. As a result, screen updates take 20-30 seconds to display. We've modified the memory parameters to the settings below.

Can anyone give me some pointers on how to begin troubleshooting where the CPU load is coming from?

Thanks in advance,

Dave Riches.

-Djava.util.logging.config.file=/opt/apps/jira/conf/logging.properties
-XX:MaxPermSize=512m
-Xms4096M
-Xmx4096M
-Djava.awt.headless=true
-Datlassian.standalone=JIRA
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true
-Dmail.mime.decodeparameters=true
-XX:+PrintGCDateStamps
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.endorsed.dirs=/opt/apps/jira/endorsed
-Dcatalina.base=/opt/apps/jira
-Dcatalina.home=/opt/apps/jira
-Djava.io.tmpdir=/opt/apps/jira/temp

8 answers

I found the problem.

JIRA/GreenHopper was running on an Amazon m1.large instance (2 CPUs, 7.5GB memory). Monitoring showed the CPU at 100% usage, and closer inspection showed that CPU 'steal time' was > 97%. Steal time is reported inside virtual machines: Amazon uses it to throttle instances that claim excessive resources, in an attempt to ensure equal performance across all instances sharing the same hardware. Switching to a more compute-oriented 'c1.medium' instance eliminated the steal time.

http://gregsramblings.com/2011/02/07/amazon-ec2-micro-instance-cpu-steal/ explains 'CPU Steal time' better than I can.
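For anyone hitting the same symptom, steal time can be checked directly on the instance. A minimal sketch (assuming the standard Linux `/proc/stat` layout, where field 9 of the `cpu` line is cumulative steal):

```shell
# The "st" field of top's CPU summary line is steal time; values
# persistently near 100% mean the hypervisor is throttling the VM:
top -bn1 | grep 'Cpu(s)'

# Field 9 of the "cpu" line in /proc/stat is cumulative steal time in
# jiffies; if it grows rapidly between two samples, steal is the culprit.
# (mpstat, from the sysstat package, shows the same thing as %steal.)
awk '/^cpu /{print "steal jiffies:", $9}' /proc/stat
```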

Run `kill -3 <pid>` to generate a thread dump, then post that file here.

Here's the file (thread.txt)

Here you go

Was this taken while CPU usage was at 100%?

If yes, the problem is most probably with:

1. The LDAP connection thread

2. The acceptor thread (hard to believe)

To be sure, do the following while having 100% usage:

a. Start the top utility.

b. Once it's running, press Shift-H (uppercase H). It will show the threads consuming the most CPU.

c. Note down the pid (actually the tid) of the hot thread, usually in the first column.

d. Dump the threads again.

e. Post the TID together with the thread dump.
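To connect the two, the tid that top shows can be matched to a thread in the dump by converting it to hex, since a Java thread dump records the Linux thread id as `nid=0x...`. A quick sketch, using tid 7941 from the dump discussed in this thread:

```shell
# top -H's PID column is the Linux thread id (tid). Convert it to hex
# to get the "nid" value that appears in the Java thread dump:
printf 'nid=0x%x\n' 7941           # -> nid=0x1f05

# Then show that thread's stack in the saved dump file:
grep -A 15 'nid=0x1f05' thread.txt
```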

I ran top -H; multiple JIRA threads were each consuming 20% or more CPU. I ran kill -3 8141 7941 7946 7949 8143 7943 and have attached the resulting log file (thread.txt).

Thanks,

Dave

I'm afraid this is one for Atlassian support; I'm unable to help you further. In this thread dump, the threads you mentioned (0x1f05, 0x1f0a, 0x1f0d) may indicate a problem with the Felix framework.

Thanks very much for taking a look. I've created a support ticket.

Dave.
