PID   USER PR NI VIRT  RES  SHR  S %CPU  %MEM TIME+    COMMAND
15288 jira 20  0 6939m 2.6g 9.8m S 187.2 11.1 12790:15 java
cat /proc/meminfo | grep MemTotal
MemTotal: 24593164 kB
This is really a new experience. I've run into this just last week; only a restart helped.
JIRA will not send any mails until it gets restarted. See the mail queue steadily increasing.
This happens since the upgrade to 5.2.9.
Any hints on this?
Hmm, how long has your server been running? If my assumption is correct based on the TIME+, it's been running for 12790 hours?
If that's the case, I suspect that this could be related to the Leap Second Bug. Could you try restarting your server and see if the problem persists?
Hmm, how about the server as a whole, then? When was the last time it was restarted? (Not just the JIRA instance, but the entire server.)
However, if the problem still persists, then we might need further troubleshooting, and I would recommend that you raise a support ticket for that.
It is definitely not running 12790 hrs...
That's so strange.
I just had the same problem last Friday. After restarting JIRA it was OK... until today.
I've added an XML export & support.zip to https://support.atlassian.com/browse/JSP-156417
Uptime: 5 days, 1 hour, 57 minutes, 6 seconds
However, flushing the mail queue manually works... until the next mails end up queued.
i've increased the
After some more detailed research here on Answers and Google, I tried to nail down the problem by disabling plugins that are not shipped with JIRA itself...
The first try was to disable JEMH:
23497 jira 20 0 6922m 1.6g 20m S 0.7 6.6 38:45.78 java
No more JVM problems, and mails are being sent correctly with a delay of 1 minute (the default).
After re-enabling JEMH the situation didn't change and JIRA kept working as usual -> GREAT
Just in case any other customer is facing this behaviour with JEMH installed... just try disabling it and see how CPU usage decreases instantly... re-enable it and be happy.
I have no idea what went wrong behind this... just a lucky fluke, hitting the cause on the first try.
PS: I already escalated this incident to Andy. Maybe he can figure out something.
Hi all, just to chime in on the JEMH reference;
As regexps are used heavily, it is possible that a scenario has arisen whereby a regexp could appear to hang, but this is more symptomatic of a very complex regexp running through all permutations. The larger the content, the more permutations, and the longer the time required. For simple emails this is not even a possibility.
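To illustrate the "appears to hang" effect in isolation (JEMH's actual patterns are not shown in this thread, so the pattern below is a generic textbook example): a nested quantifier like `(a+)+$` forces Java's regex engine to try exponentially many ways of splitting the input before it can report a non-match, while a possessive quantifier fails fast.

```java
import java.util.regex.Pattern;

public class BacktrackDemo {

    // Classic catastrophic-backtracking pattern: nested quantifiers give the
    // engine roughly 2^n ways to split a run of n 'a's before failing on a
    // non-matching tail. Matching time explodes as the content grows.
    static final Pattern SLOW = Pattern.compile("(a+)+$");

    // The possessive quantifier (a++) forbids backtracking into the inner
    // group, so the same non-match is reported almost immediately.
    static final Pattern FAST = Pattern.compile("(a++)+$");

    static boolean slowMatch(String s) { return SLOW.matcher(s).matches(); }
    static boolean fastMatch(String s) { return FAST.matcher(s).matches(); }

    public static void main(String[] args) {
        // 20 'a's is already ~1M backtracking steps; add a few more
        // characters and this takes seconds, then minutes.
        String input = "a".repeat(20) + "!";
        long t0 = System.nanoTime();
        boolean slow = slowMatch(input);
        System.out.println("slow: " + slow + " in "
                + (System.nanoTime() - t0) / 1_000_000 + " ms");
        System.out.println("fast: " + fastMatch(input));
    }
}
```

The point being: a pattern that is perfectly fine on a short, simple email can look like a hang on a large one, which matches the description above.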
Large attachments need to be pulled from the JavaMail object (which is itself in memory); JEMH extracts these into new objects, effectively doubling the size. I've heard a metric that a 10MB email needs 30MB of heap to process. Add multiple mailboxes, and concurrent scenarios could see excessive heap usage. If your machine doesn't have the physical RAM, swap thrashing may ensue.
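As back-of-the-envelope arithmetic only (the 3x factor comes from the "10MB email needs 30MB of heap" metric quoted above, and the helper is hypothetical, not JEMH code), the worst-case heap demand scales with both message size and concurrency:

```java
public class HeapEstimate {

    // Rough multiplier: raw message + JavaMail objects + JEMH's extracted
    // copies, per the metric quoted in the thread. An assumption, not a
    // measured constant.
    static final long FACTOR = 3;

    static long estimateHeapBytes(long messageBytes, int concurrentMessages) {
        return messageBytes * FACTOR * concurrentMessages;
    }

    public static void main(String[] args) {
        long tenMb = 10L * 1024 * 1024;
        // One 10 MB email: ~30 MB of heap while being processed.
        System.out.println(estimateHeapBytes(tenMb, 1) / (1024 * 1024) + " MB");
        // Four mailboxes each handling a 10 MB email at once: ~120 MB.
        System.out.println(estimateHeapBytes(tenMb, 4) / (1024 * 1024) + " MB");
    }
}
```

So a handful of mailboxes receiving large attachments concurrently can eat a noticeable slice of a heap, which is where the swap-thrashing scenario comes from.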
Every email that is received is first written to disk as part of the audit history cycle. Purging audit history has gone through several iterations in response to user feedback. Originally, a 00:00 purge was triggered, but AO was not able to delete a database object without downloading and materialising it within the JVM. Normally (ha) this is fine; even a few hundred emails don't take that long, but when it gets to thousands this can take some time. Early versions of the purge feature tried to do this as fast as possible, with multiple threads. This resulted in too much load, even if the overall time was lower. An evolution was to enforce single-threaded behaviour, which only loaded a single core; the theory goes that if you have that much email, you have more than one core, so JIRA should not be adversely affected.
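The single-threaded, batched shape of that purge can be sketched generically (this is an illustrative chunking helper, not JEMH's actual implementation): walk the record ids in fixed-size batches so only one core is busy and only one batch is in memory at a time, instead of fanning deletes out across threads.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPurge {

    // Split a list of record ids into fixed-size batches. In the real purge
    // each batch would become one delete round-trip, processed sequentially
    // on a single thread so load stays on one core.
    static <T> List<List<T>> chunks(List<T> ids, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            out.add(new ArrayList<>(ids.subList(i, Math.min(i + batchSize, ids.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 10; i++) ids.add(i);
        for (List<Integer> batch : chunks(ids, 4)) {
            System.out.println("deleting batch of " + batch.size());
        }
    }
}
```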
Current releases of JEMH use an AO feature available with JIRA 5.2 to remotely delete audit history records without serialising them in the JVM, which is much faster.
Additionally, JEMH can be configured to auto-delete successfully processed traffic, further reducing the quantity that needs purging; this also makes finding problem emails easier.
So, whilst purging audit history was a problem in exceptionally high load conditions, it really should not be so evident in current releases. I'd welcome feedback, in support of that or not.
You can confirm your volume of audit history objects via the plugin storage view in JIRA: Advanced > Plugin Data Storage. The audit history table is AO_78C957_AUDITEVENTS.
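If you prefer to check at the database level, a simple count against the table named above gives the same number (run this against the JIRA database; adjust schema/quoting for your DBMS):

```sql
-- Volume of JEMH audit history records
SELECT COUNT(*) FROM AO_78C957_AUDITEVENTS;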
If there is concern over JEMH processing, please setup a dedicated logger for JEMH, so that only JEMH traffic is contained, as per this page.
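As a sketch of what such a dedicated logger looks like in JIRA's `log4j.properties` (the package name and appender wiring below are assumptions; the page referenced above has the exact names):

```properties
# Hypothetical fragment: route JEMH logging to its own appender, at DEBUG,
# without duplicating it into the main atlassian-jira.log.
log4j.logger.com.javahollic.jira = DEBUG, jemhlog
log4j.additivity.com.javahollic.jira = false
# 'jemhlog' must be defined as an appender alongside JIRA's existing ones.
```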
Yea, there was an OMG moment around 1.0.6 whereby the key in the plugin didn't match Marketplace, which wouldn't let me upload the same thing, so it had to switch to the correct key. Removing plugins doesn't purge the database; assuming you have a working configuration in 1.3.x, you can drop the earlier set of tables.