
Best techniques to clean up a big Jira instance

OK, so over time we created lots of custom fields; some were deleted, and some got a context added to make issues lighter.

However, one thing I noticed (and it is well known): after adding a context to a field, the values for issues that should no longer have one are still present in the database.

In one case I have a project with 60k issues, but 15 of its fields are only used on 30k issues.

How do I clean up all the extra unusable/unwanted rows?

My main concern is that, with the number of fields (around 900), queries on the customfieldvalue table have become slower, and during peak hours they use a lot of our database processing capacity.
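To see which fields contribute the most rows, you can run a read-only diagnostic against the database. This is a sketch against the standard Jira server schema (customfieldvalue and customfield tables, with customfieldvalue.customfield referencing customfield.id); exact pagination syntax varies by database vendor.

```sql
-- Count stored value rows per custom field, largest first.
-- Read-only: safe to run, unlike deleting rows directly.
SELECT cf.id, cf.cfname, COUNT(cfv.id) AS value_rows
FROM customfieldvalue cfv
JOIN customfield cf ON cf.id = cfv.customfield
GROUP BY cf.id, cf.cfname
ORDER BY value_rows DESC;
```

Fields with far more rows than the number of issues actually using them are candidates for context cleanup.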

I cannot find anything related to cleaning up these tables manually, or to having Jira do it.

I assume a reindex is not going to help, as it only rebuilds the Lucene index from the database.

Could I iterate over the issues with Groovy and just do an issue save without modifying anything, to force the database to clean up?
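The re-save idea would look roughly like the ScriptRunner sketch below, using the Jira server Java API. The project key "PROJ" is a placeholder, and note the caveat: a no-op update is not guaranteed to purge out-of-context values, since Jira may simply rewrite the fields that are still in scope.

```groovy
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.type.EventDispatchOption

def issueManager = ComponentAccessor.issueManager
def user = ComponentAccessor.jiraAuthenticationContext.loggedInUser

// Hypothetical project key; iterate every issue and re-save it unchanged.
def project = ComponentAccessor.projectManager.getProjectObjByKey("PROJ")
issueManager.getIssueIdsForProject(project.id).each { issueId ->
    def issue = issueManager.getIssueObject(issueId)
    // Re-persist the issue; suppress events to avoid notification spam.
    issueManager.updateIssue(user, issue, EventDispatchOption.DO_NOT_DISPATCH, false)
}
```

Test this on a staging copy first and measure whether the customfieldvalue row counts actually drop before running it on 60k issues in production.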


I am asking because customfieldvalue is the first table to show up in slow queries during crunch times, so if I can clean it up, it will help the instance overall.

Generally, it's not the amount of data in a Jira database that causes performance problems. More important is the size of the Lucene indexes, particularly the main one (issues). A full reindex should reduce the size of your Lucene issues index if you have used custom field contexts to remove fields from issues.

I don't recommend deleting rows from customfieldvalue without a full understanding of how other tables reference it.

