After upgrading from JIRA 4.4.4 to 5.2.4, we noticed that the number of JIRA's open file descriptors started reaching higher numbers than we'd seen before, eventually exceeding the ulimit, which caused various problems. We've since adjusted the ulimit and are OK, but we noticed last week that the number of open descriptors climbed to about 3500, and then plateaued there for several days. Many/most of these point to index cache files that had been deleted.
We don't know if this is normal or a symptom of a problem. Why 3500? Why deleted files? Can we safely assume that JIRA or java will manage these descriptors in a way that won't exceed our ulimit settings (currently 8192)? Are there administrative practices we need to perform to avoid a problem with this?
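For anyone hitting the same symptoms, this is a sketch of how we check the descriptor count and spot handles on deleted files. The `pgrep` pattern is an assumption (adjust for how your JIRA java process is launched); the fallback to the current shell's PID is only there so the commands can be tried harmlessly on any Linux box.

```shell
# Find the JIRA java process; the pattern is a guess -- adjust as needed.
JIRA_PID=$(pgrep -f 'java.*jira' | head -n 1)
JIRA_PID=${JIRA_PID:-$$}   # fall back to this shell so the demo always runs

# Count open file descriptors; compare against `ulimit -n` (nofile).
FD_COUNT=$(ls /proc/"$JIRA_PID"/fd | wc -l)
echo "open descriptors: $FD_COUNT"

# Count descriptors that still point at deleted files (e.g. old Lucene
# index segments that were removed but never closed).
lsof -p "$JIRA_PID" 2>/dev/null | grep -c '(deleted)' || true
```

Watching `FD_COUNT` over time (e.g. from cron) is what let us see the climb and the plateau in the first place.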
We don't have all the details resolved, and we never again saw that "flat top" in the charts of open file descriptors, but I'm not sure that matters. The long-standing problem we experienced with unbounded growth of open file descriptors was the more serious one.
The file descriptor leak stopped when we moved the cache directories to a local/native file system rather than an NFS mounted one. We're still digging to see exactly why we can't run the way we did before the upgrade, and it may turn out to be some interaction between specific versions of the OS, NFS, Java, and of course JIRA.
For performance reasons, we may never go back to the NFS hosted cache directories, but moving away from them certainly solved our problem.
Yes. The entire install (product and data directories) has been on NFS file systems since we first started using JIRA, because we run JIRA on multiple hosts but keep the resources for all the instances on the same file system, which included the cache directories. Now the <JIRADATA>/caches directory is a symlink to a directory in /var, and the leak has stopped.
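The relocate-and-symlink step looks roughly like the sketch below. The real paths would be your JIRA home (on NFS) and a local destination such as /var; the `mktemp` directories here are stand-ins so the sketch can be run harmlessly, and JIRA should be stopped before doing this for real.

```shell
# Stand-ins for the real directories (assumptions, not values from the thread):
JIRA_HOME=$(mktemp -d)     # would be the NFS-mounted JIRA data directory
LOCAL_CACHE=$(mktemp -d)   # would be local storage, e.g. /var/jira-caches

# Simulate an existing cache directory with an index file in it.
mkdir -p "$JIRA_HOME/caches/indexes"
touch "$JIRA_HOME/caches/indexes/segments_1"

# Copy the cache contents to local disk, then replace the directory with
# a symlink so JIRA keeps using the same <JIRADATA>/caches path.
cp -a "$JIRA_HOME/caches/." "$LOCAL_CACHE/"
rm -rf "$JIRA_HOME/caches"
ln -s "$LOCAL_CACHE" "$JIRA_HOME/caches"

ls -l "$JIRA_HOME/caches/indexes/segments_1"   # resolves through the link
```

In production you'd also keep a backup of the original caches directory until the new layout has proven itself.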
Thanks for the tip.
We're not used to being concerned about this sort of issue. We have been using NFS to host home directories and many, many critical resources shared across a large and diverse cluster, and have been doing so literally for decades in an intensively used development and build environment. There have been a few NFS problems over the years, yet we have been reliably building millions of lines of code nearly continuously in this environment, with an extremely low problem rate (maybe one every couple of years). So we've come to take all this for granted, and the problems we've experienced with JIRA's caches are the first like this. That said, JIRA is the first intensively used Java web app we've supported in this cluster, so we probably still have some things to learn.
Ah, I see.
I've tried putting the index on several file systems. Anything other than raw, local, direct access has been an unmitigated disaster. NFS, SAN, Samba - they all seem to put such a drag on the Lucene index that Jira crawls to a halt and dies. I never got to the bottom of it; a simple test of moving the index to "the fastest, nastiest local hard drive you can get your paws on" made all the problems vanish.
On the other hand, all the rest of the data - absolutely fine. Plugins, home directory contents, attachments - not a problem.
Nic - Thanks for sharing. If there were recommendations along these lines in the setup documentation, we missed them, but it does make sense to take steps to keep these sorts of accesses as fast as possible. We don't see obvious changes in the performance or other user-visible behavior since we made the change, but for sure we can now reduce the nofiles ulimit to something more reasonable, and we can stop doing maintenance restarts so often.
Thanks for sharing your experience - I've seen this hinted at several times in comments, but never had anyone follow up to confirm that their particular storage was definitely the problem.
I was starting to think it might be just that if I was around, NFS or SAN disks would start to shred themselves for no reason other than to make my life difficult. ;-)
Can you confirm your experiences with a tmpfs file system on Linux? We are using JIRA 5.2.11.
We have put our indexes into a RAM disk, but today we saw JIRA crashing with more than 7000 files open. It looks like the number of open files gradually increases over a period of about two weeks.
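For reference, a RAM-disk index directory on Linux is typically declared as a tmpfs mount like the fstab fragment below; the path, size, and ownership are placeholders, not values from this thread. Keep in mind tmpfs is volatile, so JIRA has to rebuild the index after every reboot.

```
# /etc/fstab entry (hypothetical path, size, and owner -- adjust for your install)
tmpfs  /var/jira-index  tmpfs  size=2g,mode=0750,uid=jira,gid=jira  0  0
```

Note that a tmpfs mount doesn't by itself bound the number of open descriptors, so the gradual climb you describe would still need its own investigation.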
The JIRA supported platforms docs do still say "don't use NFS for the JIRA home dir" because Lucene indexes don't work well with it. I think I'm seeing problems with a RAM disk too, but it may have nothing at all to do with the SSD and more to do with whatever is causing an AlreadyClosedException. Every time that happens, I see a new file descriptor accessing the same Lucene index file (issues).