While looking for the reason disk space was being consumed so quickly, I found that the auditmail folders take a huge amount of space. Here is the summary usage of the folders under /dstore/atlassian/application-data/jira/data/jemh/auditmail/2013/0 (153GB total):
I assume each folder name corresponds to a day in January. For 16, 17, and 28, the folder sizes are extremely large. A regular "ls" failed with the following error:
# ls 18/* | wc -l
-bash: /bin/ls: Argument list too long
This means there are hundreds of thousands of files in a single directory. This WILL cause serious performance problems!
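The "Argument list too long" error comes from the shell expanding 18/* into a single argument list that exceeds the kernel's ARG_MAX limit; `find` avoids this by streaming directory entries instead of building one huge argv. A minimal sketch (using a scratch directory here, since the real audit path is specific to the environment above):

```shell
# Count files in a directory too large for glob expansion.
# Scratch directory stands in for e.g. .../auditmail/2013/0/18.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

# find streams entries rather than passing them all as arguments,
# so it works no matter how many files the directory holds.
count=$(find "$dir" -maxdepth 1 -type f | wc -l)
echo "$count"

rm -rf "$dir"
```

The same `find ... | wc -l` pattern works against the real audit folders where `ls 18/* | wc -l` fails.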
The number of files for 15 - 18:
Given the number of files per directory, JEMH is driving the system to the edge all the time; any further load could knock the JIRA service down. For 1-12, the directories are empty, but the directories themselves are still huge. This usage pattern is not scalable.
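The "empty but still huge" symptom is expected on common Linux filesystems (ext2/3/4): a directory's own size (its entry table) grows as files are added but does not shrink when they are removed. A hedged sketch of reclaiming that space by recreating the directory, with a scratch parent standing in for the real auditmail tree and an illustrative folder name:

```shell
# Recreate an emptied-but-bloated directory to reclaim its entry table.
# In practice this would run inside .../jemh/auditmail/2013/0.
parent=$(mktemp -d)
mkdir "$parent/05"                 # stand-in for a bloated day folder

mv "$parent/05" "$parent/05.old"   # move the old directory aside
mkdir "$parent/05"                 # fresh directory with minimal size
rmdir "$parent/05.old"             # drop the old one once it is empty

ls -ld "$parent/05"
rm -rf "$parent"
```

Whether the size actually shrinks in place is filesystem-dependent; recreating the directory is the portable fix.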
Currently I have JEMH set to clean up the audit trail history at 1AM PST, but it appears to have stopped working since Jan 13, 2013.
Please help resolve this problem.
That's quite a lot of email history. Do you monitor the daily audit history, and what are your normal volumes? If the numbers you see are vastly outside your norm, I wonder if there is an email notification loop. Do you have a catchmail address or jemhAddresseeRegexp address? That is how inbound mail loops are stopped.
The audit history should indeed be cleared (subject to the retention range you specified), could you tell me what that is?
Regarding the Jan 13th date, was there a JEMH update at that point?
A short-term fix to resolve the space issues would be to:
1. Suspend mail processing by uninstalling the plugin.
2. Remove all the JIRA_HOME/jemh/auditmail/ files. They are only required in some scenarios just after issue creation (for non-JIRA-user attachment detection/provisioning); the scheduled mop-up should remove them.
3. Deal with the database: you can also manually drop the related database tables, see https://studio.plugins.atlassian.com/wiki/display/JEMH/Common+Problems#CommonProblems-AuditHistoryistoolarge
4. Reinstall the plugin, re-enabling mail processing.
5. Check your auditing settings and set an initial retention period of 1 day.
6. Enable logging and trap content at expiry time for review.
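Step 2 above can be sketched as follows. The path is taken from the thread, and `find ... -delete` sidesteps the ARG_MAX problem that broke the plain `ls` glob earlier (a `rm auditmail/*` glob would fail the same way):

```shell
# Bulk-delete audit mail files without "Argument list too long".
# AUDIT_DIR is assumed from the thread; verify it before running,
# and make sure mail processing is suspended first (step 1).
AUDIT_DIR=/dstore/atlassian/application-data/jira/data/jemh/auditmail

# -type f limits deletion to regular files; -delete removes each
# entry as find streams it, so no giant argv is ever built.
find "$AUDIT_DIR" -type f -delete
```

Note this leaves the (possibly bloated) day directories in place; they can be recreated separately if their own sizes remain large.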
As of this morning, the total number of messages processed is 17119245. For Jan 21 (as of 10:16AM), I have 248103 files (22G total) in the audit directory. I expect this to reach 60G or more by the end of the day.
We have 6 JEMH profiles, all with catchmail addresses. The auto-delete is set to the last 6 hours. Does that mean we have a retention of 6 hours?
Hi Simon, 17 million emails, that's a lot of traffic!
Are you retaining failures? If so, mails that fail to be processed for some reason are retained through a purge.
Yes, the retention period currently has a lower limit of 6hrs, but the clean-up job only fires once a day (to reduce load on the server).
You haven't indicated what percentage of mails succeed versus fail, but assuming most succeed, a further performance measure for high-load environments would be to allow disabling the archiving of successfully processed emails. This would have some minor feature impact:
- Any forensic analysis of success emails would not be possible
- Test Cases can't be created
I'm going to work on this now so it will be in the 1.3 release coming very soon. Any further insight into how the above would work for you would be useful.