How do I delete a Jira project and all of its issues from the database?

Does anyone have SQL to safely delete a project along with all of its issues?

Deleting a project with 50k+ issues over HTTP (the web UI) takes forever.

3 answers

1 accepted

3 votes

This varies by version of Jira, so we can't give you the raw SQL.

I'd also consider alternatives to SQL:

  • Frankly, leaving the UI to churn through it is a good option - safe, supported and simple
  • I wrote a SOAP script that pulled a filter and issued mass issue deletes in batches, throttled so it deleted roughly one issue a second, so you could leave it running for a few hours. Nowadays I'd do that with REST or the Atlassian CLI instead
  • I have deleted via SQL, but it's not pretty

However you choose to do it, keep it simple. Don't try to delete the project - just hit the issues. When you've scrubbed most (or all) of them, then use the UI to delete the (near) empty project.

To delete via SQL, you'll need to examine the full database structure for your version of Jira. To get you started though, you'll be using jiraissue as the driver, and the issue id from there to delete from:

  • changegroup (which you'll need to join to changeitem too) - for the history
  • jiraaction - for comments
  • worklog - for worklogs
  • customfieldvalue - for custom fields
  • nodeassociation - for versions and components (and I think votes and watchers?)

There are probably more tables you'll need to scan and delete from; I only know the core ones off the top of my head, and I don't know about GreenHopper at all either.
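To make the shape of this concrete, here is a rough sketch of what the deletes could look like, assuming a Jira 5.x-era schema and a placeholder project key 'MYPROJ'. Treat every table and column name as something to verify against your own database, not as a copy-paste recipe:

-- everything is driven off the ids of the issues in the doomed project:
--   select id from jiraissue where project = (select id from project where pkey = 'MYPROJ')
delete from changeitem where groupid in
  (select id from changegroup where issueid in
    (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ')));
delete from changegroup where issueid in
  (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ'));
delete from jiraaction where issueid in
  (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ'));
delete from worklog where issueid in
  (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ'));
delete from customfieldvalue where issue in
  (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ'));
-- versions and components (and possibly votes/watchers) hang off nodeassociation
delete from nodeassociation where source_node_entity = 'Issue' and source_node_id in
  (select id from jiraissue where project = (select id from project where pkey = 'MYPROJ'));
-- finally the issues themselves; delete the (now nearly empty) project from the UI afterwards
delete from jiraissue where project = (select id from project where pkey = 'MYPROJ');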

Also, if you're going to do it with SQL, you *must* follow this process:

  • Stop Jira
  • Get a backup of the database
  • Prove the backup works
  • Run your SQL
  • Start Jira
  • Re-index it

For a large Jira (e.g. 50k+ issues in a project), you'll find that leaves you with very significant downtime. Make sure you plan that in, and give yourself plenty of reserves.

Using HTTP and the API takes a few hours to complete.

I am running on MySQL 5.5 and Jira v5.2.4#845-sha1:c9f4cc4.

I'm not that familiar with the structure of the database in 5.2, but I don't think it's changed much. The list of core tables you'll need to amend is right.

More importantly, you need to understand that although "using HTTP and the API takes a few hours", doing it via SQL is probably going to take longer overall. A lot longer. The execution of the SQL itself will be much faster, but there's no way you should do it without a proven backup, there's no option of keeping Jira running while you run the SQL - you MUST have it offline - and you MUST re-index afterwards, which takes a while. It's also going to take you a while to work out the exact SQL you need to run.

Taking Jira offline is not a problem.

What I am trying to do is create a testing Jira environment on demand in the cloud, by cloning the Jira database and rsyncing the Jira home directory from the production environment.

Spawning the cloud machine takes a few minutes; however, out of a 100k+ issue instance I want to keep only the DEMO project plus the configuration. This process takes around 3 hours to complete.

I am looking for something to speed up the process.

OK, so that means you can drop the backup step too (if it goes wrong, you can just re-clone).

You are not going to get around the fact that the SQL is complex - you need a whole swathe of deletes to get there, and although this is going to run faster than clicking "delete project", you still need to reindex afterwards. It's still going to take some time to do all of this, and I'd not like to guess whether it'll be faster to do it in SQL or to delete in the UI.

Yes indeed.

Thank you for the help.

I need to export two projects from our JIRA instance to another JIRA server. The plan is to first import the JIRA backup from the production server into a sandbox server, delete all other projects except the two, then export it again. Here is what I did:

1. Delete all unneeded projects from the Script Console (you need to install the script plugin; it's free) with this script:
import com.atlassian.jira.ComponentManager

def projectManager = ComponentManager.getInstance().getProjectManager()
def result = ""

// the file lists one project key per line
new File('/home/jira/scripts/delete_projects/unneeded_projects').eachLine { projectname ->
    def p = projectManager.getProjectObjByKey(projectname)
    result += "Deleting Project: $p.key id $p.id <br>\n"
    projectManager.removeProject(p)
}

return result

 

2. The above code only deleted the JIRA projects themselves, but all the issues and other records were still in the database. So I shut down the sandbox JIRA and ran:

-- issues whose parent project no longer exists
delete from jiraissue where project not in ( select id from project );
-- change history groups for issues that are gone
delete from changegroup where issueid not in ( select id from jiraissue );
-- comments
delete from jiraaction where issueid not in ( select id from jiraissue );
-- worklogs
delete from worklog where issueid not in ( select id from jiraissue );
-- custom field values
delete from customfieldvalue where issue not in ( select id from jiraissue );

This removed most of the leftover data.
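The change items and node associations mentioned in the accepted answer are left orphaned by the statements above. A sketch of cleaning those up in the same style (column names are assumed from a Jira 5.x-era schema, so check them against your own database first):

-- change items whose parent change group has just been deleted
delete from changeitem where groupid not in ( select id from changegroup );
-- version/component (and possibly vote/watcher) associations for issues that are gone
delete from nodeassociation where source_node_entity = 'Issue' and source_node_id not in ( select id from jiraissue );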

 

3. Restart Jira, reindex, then run the integrity checker; this will fix some of the leftover data problems.

You've missed several tables. Please, go back to your backup, and use the API to do the deletes. Messing around in the database when you don't know what you are doing is NOT a clever thing to do.

Well, I should add a disclaimer: this worked perfectly for my task, but it might not work for you.

I only deleted those records on the sandbox; the "data integrity check" step found some orphaned records and removed them. Then I exported it to XML again and used JIRA's "Project import" function to import the two projects into another instance. "Project import" also does a data integrity check and will not load orphaned data. Since the projects are working fine on the new server now, I consider the job done.

 

Deleting projects one by one from the web UI or the API is not acceptable, because there are more than 100 projects and 300,000 issues; it would take a week to delete them.

The data integrity check will NOT find orphaned records, and even if you think "it's all working now", that could easily be an incorrect assumption. You might be lucky. You might not. I'd keep your backups from before you botched this - you may need them.

The data integrity check will find orphaned data - not all of it, but some. Also, Jira's own "project import" function should be able to filter out orphaned data. Keeping a backup is definitely a good suggestion.

I have tried. It's not any faster.
