Is your JIRA instance growing above 200,000 issues?

Atlassian recommends splitting instances once they grow to 200,000 issues or more, yet many instances grow beyond that level. If you are running above this limit, how did you make it work? This is a place for customers exceeding this limit to share tuning experiences.

11 answers

1 accepted

18 votes
Answer accepted

Performance tweaks for our 350,000 issue instance:

* Keep JDK very up-to-date
* If using MySQL, use the Percona Server variant instead
* Run MySQL on the same server as Jira
* Use fast disks in a RAID 10 configuration with a battery-backed cache
* If using MySQL, upgrade to 5.5.x
* Use the "-XX:+UseParallelOldGC" switch
* Allocate a 6-8GB heap to Jira minimum
* If using MySQL, get your DBA to tune the following params:
** innodb_buffer_pool_size
** query_cache_size
** thread_cache
** table_cache
** Set the mysql tmpdir to use a memory based filesystem (tmpfs) if possible
* If running on Linux, ensure the filesystem is mounted with the "noatime" attribute in fstab
* If running on Linux, try to upgrade to a distribution with a kernel >= 2.6.19 and glibc >= 2.6.1. When you're running on such a kernel, turn on the Java "-XX:+UseNUMA" switch if your system has AMD HyperTransport or Intel QuickPath Interconnect capabilities
* Try to use a machine that has at least 10 cores. 12/16/24 cores will offer far greater performance improvements than fewer cores with higher clock speeds.
* Where possible, the machine running Jira+MySQL should have 32-48GB RAM, to allow both a large heap size for Jira and the MySQL database to be fully loaded into memory
* Evaluate re-indexing time with each 3rd party plugin enabled on its own. Some plugins are indexing hogs, and you may need to work with the developer to optimize their performance when indexing.
* If installing a plugin that provides a JQL function or set of JQL functions, evaluate each function individually in a sandbox system with your production data. We've had certain 3rd party JQL functions take our system down because they were never designed with our number of Jira issues in mind
* Some books and blogs we keep an eye on:
** High Performance MySQL
** Java Performance
** MySQL Performance Blog
** MySQL @ Facebook
** Mike McCandless Blog (Lucene)
* If using the XML view of a filter in conjunction with Confluence, modify the filter to include only the information that will be displayed in Confluence, instead of requesting all of the information for every issue. The XML header shows how this can be done:


RSS generated by JIRA (4.3.4#620-r152668) at Sun Nov 27 00:21:43 PST 2011. It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request. For example, to request only the issue key and summary, add field=key&field=summary to the URL of your request.
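For example, a field-restricted request for a saved filter might look like this (the hostname and filter id below are made up for illustration; the path follows JIRA's standard searchrequest-xml pattern):

```shell
# Fetch only the issue key and summary for each issue in a saved filter.
# jira.example.com and filter id 10123 are hypothetical.
curl "https://jira.example.com/sr/jira.issueviews:searchrequest-xml/10123/SearchRequest-10123.xml?field=key&field=summary"
```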


We're currently investigating the use of a caching proxy server installed at the network edge in each of our office locations to serve static content (CSS, JS, images) locally, instead of hitting the production server with unnecessary requests.
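As an illustration, the MySQL parameters called out in the list above might be set in my.cnf roughly like this (every value is a placeholder to be sized by your DBA, and the variable names use the MySQL 5.5 spellings thread_cache_size / table_open_cache):

```ini
# my.cnf fragment -- placeholder values, size to your hardware
[mysqld]
innodb_buffer_pool_size = 8G        # big enough to hold the working set in memory
query_cache_size        = 64M
thread_cache_size       = 16
table_open_cache        = 2048
tmpdir                  = /dev/shm  # tmpfs-backed temp space, if available
```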

Here are our numbers:

* Issues: 343,617
* Projects: 74
* Custom fields: 461
* Workflows: 60
* Attachments: 199,853
* Comments: 1,371,820
* Users: 2,120
* Groups: 378

David, what type of support staff do you have for JIRA? A JIRA admin and maybe a BA type person or two that meets with internal groups and configures basic new projects. I have a client that is curious on a basic JIRA staffing model.

I've since moved on to another team, but I'm still in touch with our Jira admins. They've recently hit 1 million issues, and the team maintaining our Jira instance is 3 people (who also take care of Confluence, Crucible, Fisheye, a whole mess of Bamboo instances, and any integrations the Atlassian tools are running).

3 votes
  1. Throwing more resources at it
  2. Better housekeeping
  3. Expert tuning (both app server and Jira level knowledge - Jamie Echlin's tip about using "everyone" for "browse" instead of "jira-users" for example)
  4. Simplification (which reduces the usefulness of the system)
  5. Deleting issues, which creates a need for archiving, and that rapidly becomes a nightmare because Jira doesn't support cross-linking of separate instances (until v5?), doesn't have any useful way to transfer issues between instances except in the simplest cases, and it completely breaks searching.
  6. Dropping older issues from the index, so they still live in the database and can be referred to, but this again breaks searching (and needs custom code).

What most people really want is proper clustering for load-balancing.

Java's garbage collection (GC) was causing long pauses in our environment, even with -XX:+UseParallelOldGC, so I researched a better GC type for JIRA. I found the Concurrent Mark-Sweep (CMS) collector with incremental mode to be better tuned for web applications.

Add the following Switches to use CMS with Incremental Mode:

-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode

This collector is useful for web servers that have a high rate of young-generation GC and a mostly stable old gen and perm gen. It takes advantage of multiple CPUs to collect garbage while minimizing application pauses, providing better application performance. Incremental mode yields time back to the application during GC, giving better responsiveness. If you have a large heap (> 2 GB), this method makes sense for JIRA. As a benefit, my heap usage dropped considerably from 2.5 GB to about 800 MB, with bursts up to 1.2 GB while two bulk operations and one CSV import of 120,000 issues with subtasks were running.
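As a sketch, on a standard JIRA installation these switches would typically go in bin/setenv.sh via the JVM_SUPPORT_RECOMMENDED_ARGS variable (the heap values below are illustrative, not from this post):

```shell
# bin/setenv.sh -- illustrative fragment; adjust heap sizes to your instance
JVM_MINIMUM_MEMORY="2048m"
JVM_MAXIMUM_MEMORY="6144m"
JVM_SUPPORT_RECOMMENDED_ARGS="-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
export JVM_SUPPORT_RECOMMENDED_ARGS
```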

We have 390K issues and close to 400 projects. I have spoken to the product managers at Atlassian; here are a few points:

  • A proper archiving mechanism: the ability to archive issues to a backup instance without losing seamless searching from the production JIRA instance. Old issues don't need to hog the production system, but they still need to be searchable.
  • Yes, Lucene on SSDs works great.
  • We run on a VM that is properly managed by IT, and the database is Oracle, managed by the DBA team.
  • I am not sure we need 32 to 48 GB of RAM. All I have is 12 GB.
  • We have limited schemes to a great extent (notification schemes, permission schemes, etc.). There is no per-project customization; everyone sacrifices a little on choices, but that helps.
  • We have limited custom fields to fewer than 50.

The biggest issue I have faced is user filters that reload data every 5 or 10 seconds, especially from Confluence: a user adds a few filter gadgets to a page, sets the refresh interval to 5 seconds, leaves the page open, and goes for a stroll for an hour. That's enough to kill the system; JIRA CPU usage shoots through the roof. Not sure if anyone else has seen this issue.

Just saw my older comment here on much the same subject.

Tarun Sapra Community Leader Nov 27, 2014

Hi Devu, Can you please share your archiving strategy? thanks

At this point we really don't do any archiving, but we plan to set up an archiving server (separate from the production server) and then move the older projects over there. Again, this is in principle and not yet in practice.

0 votes
Mirek Community Leader Sep 03, 2014

Overall, there are a lot of potential performance problems that can affect JIRA (and other systems), and they depend on many factors; having more than 200k issues doesn't mean you will start to see them. I remember when a new JIRA release came out (around 2011) with the announcement that we could finally have more issues than that in a single instance. That was a turning point. Since then there are probably plenty of instances much bigger than 200k, and current versions of JIRA probably work fine even with 1 million issues. We currently have more than 300k issues and 50k users, so we started looking into this in more depth some time ago.

Page load time was pretty slow and people started to complain about the performance. By tweaking just the JVM, GC, database, and Apache settings, we reduced it from more than 10 seconds to around 3-4 seconds. That was a significant change, but we decided we couldn't stop there.

We also removed all stale projects (going from around 1,000 to 700) and other unused objects. The idea was to clean up the JIRA instance as much as possible, until we were sure nothing more could be done there. Removing projects released schemes, and removing schemes released other objects. It was easy to delete things that were no longer used.

Please note that having other Atlassian apps (FishEye, Confluence, ...) may also have an impact on overall user experience. Integrations need to download data from the other instance, so waiting on that can also take time. Also check whether someone is running software or a script that generates a lot of requests against JIRA from outside.

And of course, please remember that each instance can have its own individual cause of problems. We all use different environments, operating systems, infrastructure, JIRA versions, and so on, so a single guide is probably not possible. However, there is always room for improvement on every instance, which means any tuning tips can be very helpful.

I am personally wondering what settings are used at Atlassian, since jira.atlassian.com surely has more than 200k issues now and page load time is pretty fast. Any examples from you guys? That would be really helpful for everyone struggling with this kind of performance problem.

Thanks all, I appreciate it.

Sorry, we moved the documents to another space:

0 votes

Probably because it's been revised as the landscape is changing...

Why is the scaling guide for jira restricted?

The published 200,000-issue limit was obtained a long time ago and things have changed a lot since then. We already know that the total number of issues is just one of many JIRA dimensions that can affect performance; others include custom fields, active workflows, and simultaneous users. In the near future we plan to start testing different areas of JIRA so that we can give customers interested in large JIRA deployments more definite answers based on our performance testing, to help you make decisions about scaling JIRA.

We are particularly interested in talking with customers who already have a JIRA instance with more than 200,000 issues. If you fall into this group please let us know in the comments here.

UPDATE - 25 June 2012: From JIRA 5.1 onwards we are dropping our previous guideline of splitting your instance once you reach around 200,000 issues! We've always known that this is not a hard limit, and now we have a great resource that will help you scale JIRA.

There is not much to add to the previous answers, except maybe to put the Lucene indexes on an SSD or even a RAM disk. In the latter case you have to add logic to your startup and shutdown scripts to load the index from persistent storage and save it back. Of course, if your OS crashes you will have to rebuild the index, which can take quite some time for an instance with more than 200,000 issues. My wish is that Atlassian would offload the search indexes to a scalable cluster as a first step towards a real cluster. Replacing Lucene with Solr would help achieve this.
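A minimal sketch of that startup/shutdown logic, assuming a tmpfs mount at /mnt/jira-index and a persistent copy under /var/jira-index (both paths are hypothetical, and JIRA would need to be configured to use the tmpfs path for its index):

```shell
#!/bin/sh
# Hypothetical paths; adjust to your JIRA home and tmpfs mount.
PERSIST=/var/jira-index   # persistent copy of the Lucene index
RAMDISK=/mnt/jira-index   # tmpfs mount that JIRA is configured to use

case "$1" in
  start)
    # Restore the index into RAM before JIRA starts.
    rsync -a --delete "$PERSIST/" "$RAMDISK/"
    ;;
  stop)
    # After JIRA stops, persist the index back to disk.
    rsync -a --delete "$RAMDISK/" "$PERSIST/"
    ;;
esac
```

This would be called before starting and after stopping JIRA; an unclean shutdown still forces a full re-index, as noted above.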
