Import hierarchical data to JIRA w/JSON

Am I to believe that the one web page, entitled "Importing data from JSON", is the entirety of documentation for importing to JIRA via JSON?   Really?


5 answers

1 accepted

Answer accepted

@James Mason we imported >85k issues to Jira Cloud via JSON last year from another old system (BugZilla), complete with custom data, histories and issue linkages. Agree with your doc concerns, there was a lot of trial and error involved in the initial stages. Thankfully it was from a flat hierarchy but many of your other points are familiar. 

It was a fairly intensive exercise in data manipulation but did ultimately bring everything across we needed without data loss. 

I note the points above about duplicate custom field creation and users. We worked around this by making the import a multi-part operation, creating all the users and custom fields before any issues were moved, and then reading the custom field ids back from Jira via the API as a precursor to each data load. We were batching issues into various Jira projects and ran at least forty separate data imports (max 3,500 per batch) for reasons I'm sure you're familiar with regarding total JSON size and potential import failure.  
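The batching step can be sketched roughly like this (a minimal illustration; the 3,500 batch size comes from our experience above, and the file layout is ours, not anything Jira mandates):

```python
import json

def batch_issues(issues, batch_size=3500):
    """Split a flat list of issue dicts into chunks small enough for
    one Jira JSON import each (3,500 per batch worked well for us)."""
    return [issues[i:i + batch_size] for i in range(0, len(issues), batch_size)]

def write_batches(issues, project, prefix="import_batch"):
    """Write one importable JSON file per batch. The top-level shape
    (a 'projects' list with nested 'issues') follows the JSON import doc."""
    paths = []
    for n, chunk in enumerate(batch_issues(issues), start=1):
        payload = {"projects": [{**project, "issues": chunk}]}
        path = f"{prefix}_{n}.json"
        with open(path, "w") as f:
            json.dump(payload, f, indent=2)
        paths.append(path)
    return paths
```

Keeping each file small both stays under the importer's size ceiling and limits the blast radius of a failed import to one batch.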

We still have the workspaces we used in our ETL tool (FME) to migrate data between the two systems, and did a webcast on it here if it should prove useful: 

Broadly speaking, it went something like this:

1. Create all users, custom fields and certain fixed reference data (teams, etc) as a one-time operation

2. Importer reads back all custom field IDs used in the import for mapping

3. Form and import JSON for bug data using these mappings

4. Read back Jira issue IDs and drop them back into the source system (BugZilla) as breadcrumbs for migrated issues (we had parallel running for several months)

5. Write relationship data into Jira for the imported issues once we have references from both old and new systems (we also updated old links inside comments to refer to the new Jira issues). This was done after issue import for reasons of synchronicity (we needed both ends of the relationship present before link creation)

6. Write additional linkage data asynchronously post import (including attachments and Salesforce linkages through third party tools)
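Step 2 above can be sketched like this (the sample payload is a trimmed, illustrative version of what Jira's GET /rest/api/2/field endpoint returns; the field names are the ones from the import doc example):

```python
def custom_field_map(fields):
    """Build a {visible name: customfield_<id>} map from the JSON list
    returned by Jira's GET /rest/api/2/field endpoint, so later import
    batches can reference fields by their generated id."""
    return {f["name"]: f["id"] for f in fields if f.get("custom")}

# A trimmed sample of what the field endpoint returns:
sample = [
    {"id": "summary", "name": "Summary", "custom": False},
    {"id": "customfield_10150", "name": "How To Repeat", "custom": True},
    {"id": "customfield_10152", "name": "QRB Status", "custom": True},
]
```

Reading the ids back before each load, rather than hard-coding them, is what let us re-run batches against fresh sites without duplicating fields.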

Good luck, if you would like to see any of this in more detail please get in touch! 


I've simply been trying to add detailed information and discoveries as I come across them, hoping to save whoever follows a bit of the trial and error.

I have taken swings at loading individual projects and at loading everything as one rather giant entity.  I expect eventually to split the user load from the project/issue load as you did - but I haven't reached the end of the rainbow yet.

Andy Heinzer Atlassian Team Nov 15, 2018

Are you on Jira Server or Jira Cloud?  We do have slightly different versions of the same document for each platform:

Jira Cloud: Importing data from JSON

Jira Server: Importing data from JSON


Could you explain in some more detail what else you are looking for here?   I gather you want to import some data into Jira, I'm just curious to learn the source of that data to see if perhaps there might exist other alternatives to importing such data.

We're not supporting our own server, so I suppose that means we're going to be on the cloud.  I suppose I had seen that there were two forms of the document - and while a simple example is highly valuable - I would have expected something more like a grammar to help me generate acceptable JSON.

We're trying to migrate from GNATS.  The data extracted from there has a modest hierarchy - issues are allowed to have an arbitrary-length list of audit trail events.  Events include "State Changed", "Responsible Changed", "Comment Added", "FixForReleaseChanged", "Originator Changed", and a couple of types that relate to changes in custom issue fields.

I didn't think CSV would be as convenient a way to input hierarchical data - if it's even possible.

I have a Python script that successfully loads the GNATS database, including the attached list of audit trail items for each issue.  I was looking for something to help me emit all that in an acceptable JSON form.
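For what it's worth, the emit side can be sketched like this (the GNATS field names here - "synopsis", "audit_trail", and so on - are hypothetical stand-ins for whatever your extractor produces; only the right-hand Jira shape follows the import doc):

```python
def issue_from_gnats(pr):
    """Map one GNATS problem report (a plain dict from our extractor,
    hypothetical field names) onto the issue shape the Jira JSON
    importer expects.  Note that history 'items' must be a list,
    even for a single change."""
    return {
        "summary": pr["synopsis"],
        "externalId": pr["number"],
        "issueType": "Bug",
        "history": [
            {
                "author": ev["who"],
                "created": ev["when"],
                "items": [{
                    "fieldType": "jira",
                    "field": ev["field"],
                    "fromString": ev["old"],
                    "toString": ev["new"],
                }],
            }
            for ev in pr.get("audit_trail", [])
        ],
    }
```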

Andy Heinzer Atlassian Team Nov 19, 2018

I have not found much in the way of other requests to migrate from GNATS to Jira.  However from the way you have described this hierarchical nature, it sounds much like the way Jira would represent an issue's history.

If you then look at a particular issue in Jira, under the History tab you can see the changes that happened to the issue over time, who made the changes, and to which fields.

This is something that you can define in JSON that Jira can handle.  The guide cites at least one example of this:
                    "history" : [
                        {
                            "author" : "alice",
                            "created": "2012-08-31T15:59:02.161+0100",
                            "items": [
                                {
                                    "fieldType" : "jira",
                                    "field" : "status",
                                    "from" : "1",
                                    "fromString" : "Open",
                                    "to" : "5",
                                    "toString" : "Resolved"
                                }
                            ]
                        }
                    ]
In this example, it's a single historical change made by user Alice, at that specific time where the status is being changed from open to resolved.

But I would agree that the CSV importer in Jira does not appear to support the same depth of data that can be imported as JSON.  I don't believe that the CSV import in Jira is really intended to import such historical changes on an issue.


Does this help?  Forgive me if I have misunderstood and the hierarchy you are referring to in GNATS is not historical information.  I admit that I have not used this particular system before.


Your understanding of my thinking and why I'm interested in JSON is essentially correct.  But please do not focus on the fact that I'm coming from GNATS.   As I noted previously - I've solved the problem of extracting GNATS data.  I can turn what I have into a structured file of almost any sort rather easily.

I need specific information on the JSON format for import to JIRA.

Interestingly, my question appears to have been asked in May of 2013 - "JSON Format for JIRA Import".   The Atlassian team member responding at that time explained that only an example was offered because things were incomplete/preliminary/etc.  I'll quote the frustrated user from May of '13:

I can't emphasize enough that the documentation -- including the data schema -- needs to be better.


Are you really expecting customers to successfully build JSON import data for JIRA - on the basis of the cloud/server documentation pages noted above?

Andy Heinzer Atlassian Team Nov 27, 2018

The short answer is, Yes.   I expect it is possible to build this import format with the examples in those documents.


The much longer answer is, I don't expect this to be as quick and easy to achieve as other import methods that use predefined templates to migrate issue data from other trackers.  I would not be surprised if it took a lot of time to build the import format correctly.   However, if you find there are problems formatting this, that is what support is here for, either here on Community or via our portal page.

This doesn't need to be hard.   But it probably will be - precisely because Atlassian has not provided the detailed reference that someone creating this content needs.

Mind you - I'm not talking about a reference for specifics of "general" JSON (what a string looks like, a number, a list, a dictionary, etc.).  I'm writing python code and I'm emitting JSON using Python's "json" extension module.  So the "well formed JSON" part is easy.

The problem is that Atlassian hasn't provided specific semantics for THEIR side: what field names go in what structures, what values they're allowed/required to have exactly, what values they'll get if not specified - or if incorrectly specified.   Etc., etc.


This is exactly the struggle I am facing right now.

The example in the JSON import article is good overall, but it is a bit lacking on some of the finer points.  Yet these details are the things required for successful import.

For example, the links->name values: I found another doc that said they have to match the names used on the Issue Linking admin page.  Well, that page only defines four of them, which does not include the sub-task-link used in the example!

Okay, so for sub task links, maybe it's the Sub-tasks admin page names?  Nope.  Doesn't match that one either.  So where did sub-task-link even come from?

On to the ->issues->history fields, fromString and toString are pretty self explanatory, but then there is this from and to as well, which seem to have some magic numbers in them.  Huh?



Yeah, I know, it says contact Atlassian support for help.  Well, I am just on an evaluation license, no support included.  And I did contact the "product specialist" that contacted me during the evaluation (well, I think it was just an automated message, really) and got no response.

I really want to switch away from my old system with quite a lot of data (9 years of usage).  However, if I can't import everything, it's simply not worth it (money, user frustrations, and lack of historical data) to change.

It seems that Atlassian does not grasp just how much a good data import process relates to acquiring new customers, and thus doesn't give it the attention it deserves.




Update (Jan 14): FWIW I am now communicating with someone in support and they are being helpful towards my JSON import issues.


Still no success at this effort.  But I think I've learned a few new details that aren't part of any explicit documentation for creating JSON input:

  • Projects must specify a "type" - in my case, and probably many others, "software" is the right answer.
  • Issues are required to have a defined "summary".  A "description" field apparently won't cut it.
  • Project keys are required to be upper case only.
  • I'm assuming that your JSON has to name an existing project, but I'm not as sure of that.
  • I'm also assuming that your issue type has to be among the default values - such as "Bug", "Improvement" or "Documentation".  Not sure if this is a requirement, but it probably wouldn't be good if it accepted random issue types on a load.  Maybe you could use others if they were defined with the project.
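Putting those observations together, a minimal skeleton that respects them might look like this (these constraints are my findings from testing, not documented requirements):

```python
import json

def minimal_project(key, name):
    """Smallest project payload that survived import in my testing:
    an uppercase key, a "type", and (for each issue added later)
    a "summary".  These are observed constraints, not documented ones."""
    assert key == key.upper(), "project keys must be upper case"
    return {
        "projects": [{
            "name": name,
            "key": key,
            "type": "software",
            "issues": [],
        }]
    }
```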

Hoping for support on this from Atlassian.  Got an initial response from Gabriel Senna, who helpfully noted that I should consult the documentation that is the subject of this entire thread.  Perhaps that would have been more understandable, had I not observed in the support request that, "For the record, I am the author of....".

Further discovery - contrary to what I said above - no, your JSON doesn't have to name an existing project.  A project will be created as needed.

Further discovery - it is very easy to overlook a tiny bit of JSON structure.   And if you do - the Jira JSON parser pretty much falls on the floor without telling you much that's useful.   The "items" field in each history item is a LIST of one dictionary.   Enter it as a single dictionary and your history will not be understood.

Further discovery - when creating users - Jira insists that they have an e-mail address.  Apparently, even if they're marked as inactive (as would be the case for former employees who no longer have a valid e-mail address).  Further, there appears to be some scanning going on for the content of the email string - such that "" addresses cause the associated user entry to be rejected.   Finally, to my embarrassment, even though I'm working in a test environment that will never go live - when I used a "real" user with a "real" e-mail address - Jira automatically sent them an invitation on the basis of the JSON upload.

Further discovery - perhaps obvious to Jira users and specific to the GNATS origin.  GNATS has a single audit trail history - consisting of entries associated with various field changes (from, to, who) as well as a comment on why.   The sample Jira JSON uploads I've seen - and the sample interactions I've performed - treat comments separately from field value transitions.   GNATS insists on a comment for a change.   Jira can't even represent that.

So I'm thinking that I'll try to represent my commented field transitions from GNATS as both field value transitions in Jira history - and a sequence of Jira comments where I'll automatically insert a line describing the transition ahead of the GNATS change comment.   The comment list approach seems to work reasonably well and yields something very GNATS like when viewing the issue comments in Jira.  I've yet to create a Jira history list - even limited to comment free changes in issue status - that includes anything but the last transition.

Further discovery - which explains something of my misadventures with creation of users through JSON upload.   It appears that the "email" string is being used as a unique identifier.   This explains why user entries without one were rejected.

Further discovery - on at least the cloud version of the "Importing data from JSON" page, the "JSON FILE EXAMPLE" is not usable as presented.  You need to give the project a type.   Here's a Linux diff patch:

*** 25,30 ****
--- 25,31 ----
"name": "A Sample Project",
"key": "ASM",
+ "type": "software",
"description": "JSON file description",
"versions": [

Can Atlassian validate that email is a unique identifier?

Further discovery - if you create a comment entry with an empty body - such as:

{
    "author": "xxxxxx.yyyyyyy",
    "body": "",
    "created": "2005-04-19T10:59:57+0400"
}

You'll see a scary looking red exclamation error on the input page, reporting:

Unable to import comment
Unable to import comment com.atlassian.jira.plugins.importer.external.beans.ExternalComment@7213a8eb[]: Unable to create comment com.atlassian.jira.plugins.importer.external.beans.ExternalComment@62653e6d[]. Comment not created

Nothing on the import page, or in the detail for the import, will guide you to the location of the problem or indicate anything about what it was about the comment that caused the error.

Further discovery - while it's possible to create custom fields using the example from the "Importing data from JSON" page - the syntax shown has the side effect of REPEATEDLY CREATING the named custom fields again and again - for each load attempt (even if you delete test projects - and even if the custom field names don't change).  If you do this with a textarea custom field, your issue display will start to look weird - with various empty sections being created and reporting a content of "None".

If you want to test loading a single JSON file iteratively - which has custom fields of the form shown on the example page - you'll need to delete BOTH the project and the custom fields it created between tests.

If your import will eventually consist of more than one JSON file - you'll certainly prefer to handle custom fields differently than what is shown.  Instead, create custom fields as a separate interactive operation, by going to "Jira Settings"->"Issues"->"Custom Fields".  On that page you'll be able to invoke "Add custom field" as well as to see custom fields that arise as a consequence of the import syntax.

Once you have the custom fields you need on the "Issues/Custom fields" page - you'll want to look at the lines for each of your custom fields.  For each, click on "..." and hover over "Configure".  Then you'll see a URL ending with "customfieldid=<nnnn>".   You need to know the value of "<nnnn>" for each of your custom fields.

So, assuming that you had something like the example that creates custom fields on the fly:

 "customFieldValues": [
     {
         "fieldName": "How To Repeat",
         "fieldType": "com.atlassian.jira.plugin.system.customfieldtypes:textarea",
         "value": "\tCreate on Unix, use on Windows, or the other way\n\taround."
     },
     {
         "fieldName": "QRB Status",
         "fieldType": "com.atlassian.jira.plugin.system.customfieldtypes:textfield",
         "value": "unassigned"
     }
 ]

You would chase down the id values for the "How To Repeat" and "QRB Status" custom fields, then change the field names as follows (assuming the ids were 10150 and 10152 respectively):

 "customFieldValues": [
     {
         "fieldName": "customfield_10150",
         "fieldType": "com.atlassian.jira.plugin.system.customfieldtypes:textarea",
         "value": "\tCreate on Unix, use on Windows, or the other way\n\taround."
     },
     {
         "fieldName": "customfield_10152",
         "fieldType": "com.atlassian.jira.plugin.system.customfieldtypes:textfield",
         "value": "unassigned"
     }
 ]

Notice that the name is "customfield_<nnnn>", not "customfieldid_<nnnn>"!
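Chasing down the ids by hand gets old quickly if you iterate, so the substitution itself is easy to script (a sketch; the mapping dict is whatever you assembled from the Custom Fields admin page or the API):

```python
def rewrite_custom_field_names(issue, name_to_id):
    """Replace human-readable fieldName values with the customfield_<nnnn>
    ids read back from Jira, so the importer fills existing fields
    instead of creating duplicates on every load attempt."""
    for cfv in issue.get("customFieldValues", []):
        if cfv["fieldName"] in name_to_id:
            cfv["fieldName"] = name_to_id[cfv["fieldName"]]
    return issue
```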

A lot of this can be reasoned out of the question "Custom Field Name in Jira JSON" and elsewhere - but far too much is left for the user to figure out experimentally.

Further discovery - the JSON importer apparently has a bug where it drops the first item from an issue's history list (see JIRACLOUD-72402 - "Import Via JSON is not Importing the First Item from the History").

The disturbing "work around" is to double-enter the first item on the list.  This works for the handful of items I've tested by hand - but I obviously have no way to test this systematically.
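If you do resort to that workaround, it's trivial to apply mechanically (again: this double-entry trick is only verified by my hand testing, not by Atlassian):

```python
def pad_history(issue):
    """Work around JRACLOUD-72402 (the importer drops the first history
    item) by double-entering the first entry.  Verified only by hand on
    a handful of issues - use with caution."""
    h = issue.get("history", [])
    if h:
        issue["history"] = [h[0]] + h
    return issue
```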

Further discovery - with some information provided by the question labeled "Need help understanding externalId used in JSON import": it appears that the JSON importer treats an ordinary field set for "externalId", such as the following, as a signal that a new custom field called "External Issue ID" should be created.

"externalId": "someValue"

As noted in the previous comment above, opening with "while it's possible to create custom fields", a non-trivial load scenario will probably not be well served by the default behavior.

Instead, the user will probably want to roll their "external ID" field interactively - the same way that they do any other custom field. Then load into it using a custom field name on the list of custom field values for the issue.

Don't bother making use of the fact that the JSON importer has baked in behavior for an issue field called "externalId".

I suppose it's worth mentioning that if you use the baked-in handling of "externalId", you'll see those values showing up in the detail log "INFO" item for "Importing issue:", while handling such a value as an ordinary custom field will yield the slightly scary-looking (but apparently harmless) "externalId='null', summary=..." line.

Further discovery - the JSON format refers to custom fields inconsistently.

When loading data into pre-defined custom fields, as noted in my second post of Jul 3, you must use the "customfield_<nnnn>" name that is generated at the time that the custom field is defined.

However, if you also want transition history for such a field on the issue history list, naming the field as "customfield_<nnnn>" (on the transition history list) in the JSON will create user facing history that displays, "customfield_<nnnn>", not the user facing name that was established when the custom field was defined!

This is an obvious bug for reasons beyond the simple bad practice of inconsistency.   If I edit the user-facing name of the custom field, that change is reflected in the display of the current issue value - but it is not reflected in the history of existing issues (I have demonstrated as much in testing).  I am inclined to wonder whether the history of custom field information can even be queried in useful ways - or if that capability is limited to pre-defined fields only.

Further discovery - avoid using the "Stop Import" button when performing a JSON import.  I used it once to stop a test import that I realized wasn't configured quite as I wanted.  The result was a hung screen that never returned and a project name/keyword that is inaccessible and unable to be removed.  I tried to re-load the named project to see if that would recover the problem but no.  Atlassian has "reproduced" the problem on their end but no progress for a week.   Fortunately - I knew enough to create an evaluation/test only cloud "site" for purposes of my import testing.

(Ultimately cleared by support a week later - but it would have definitely been simpler to let the bad load finish, blow it away, then try again).

Further discovery - the JSON user load process contains critical bugs that result in users marked inactive at import being set active such that you will be billed.

Here's a test json file:

"users": [
    {
        "active": true,
        "email": "",
        "name": ""
    },
    {
        "active": false,
        "email": "",
        "name": ""
    },
    {
        "active": false,
        "email": "",
        "name": ""
    }
]

If you load that, you should get a message that indicates "Your import has created 1 users.  These users are now inactive and they cannot login to JIRA.  You can change below."   You are then given two radio buttons, one indicating to "leave them inactive" (the default) and the other "set them manually".  In my testing, I always leave this as "leave them inactive".

At the end of that process, you get an opportunity to obtain a detailed log of the import operation.  Mine contains:

2019-07-25 19:33:31,022 INFO - Import started by admin using com.atlassian.jira.plugins.importer.sample.SampleDataBean
2019-07-25 19:33:31,034 INFO - ------------------------------
2019-07-25 19:33:31,034 INFO - Importing: Users
2019-07-25 19:33:31,034 INFO - ------------------------------
2019-07-25 19:33:31,034 INFO - Only new items will be imported
2019-07-25 19:33:34,745 INFO - Imported user ( as an inactive
2019-07-25 19:33:38,624 INFO - Imported user ( as an inactive user because it was inactive in the external system
2019-07-25 19:33:41,852 INFO - Imported user ( as an inactive user because it was inactive in the external system
2019-07-25 19:33:41,856 INFO - 3 users associated with import. All of them imported as inactive, this can be changed after import in User Access step.
2019-07-25 19:33:41,856 INFO - ------------------------------
2019-07-25 19:33:41,856 INFO - Finished Importing : Users
2019-07-25 19:33:41,856 INFO - ------------------------------
2019-07-25 19:33:41,856 INFO - 3 users successfully created.
2019-07-25 19:33:41,858 INFO - Retrieving projects...
2019-07-25 19:33:41,870 INFO - ------------------------------
2019-07-25 19:33:41,870 INFO - Importing: Issues
2019-07-25 19:33:41,870 INFO - ------------------------------
2019-07-25 19:33:41,870 INFO - Only new items will be imported
2019-07-25 19:33:41,875 INFO - 0 issues successfully created
2019-07-25 19:33:41,877 INFO - ------------------------------
2019-07-25 19:33:41,877 INFO - Finished Importing : Issues
2019-07-25 19:33:41,877 INFO - ------------------------------
2019-07-25 19:33:41,880 INFO - ------------------------------
2019-07-25 19:33:41,880 INFO - Importing: Issue Links & Subtasks
2019-07-25 19:33:41,880 INFO - ------------------------------
2019-07-25 19:33:41,880 INFO - Only new items will be imported
2019-07-25 19:33:41,882 INFO - ------------------------------
2019-07-25 19:33:41,882 INFO - Finished Importing : Issue Links & Subtasks
2019-07-25 19:33:41,882 INFO - ------------------------------

So far, so good.  User "a" is shown in the file as active, while b & c were not.  At some level, the importer understood this correctly.   Moreover - it is clear that all three are inactive absent your expressed intent.

Then, go to the appropriate user management location, and perform a CSV download of ONLY active users.   Omitting irrelevant pre-existing users, you will see:

id,full_name,email,active,created,Last active in Confluence,Last active in Stride,Last active in Jira,Last active in Opsgenie,Last active in Statuspage
5d3a040b59ff750c8f3271f0,,,Yes,25 Jul 2019,Never logged in,Never logged in,Never logged in,Never logged in,Never logged in
5d3a040f4ee45b0c8fec4943,,,Yes,25 Jul 2019,Never logged in,Never logged in,Never logged in,Never logged in,Never logged in
5d3a04138a98b20c2beae09d,,,Yes,25 Jul 2019,Never logged in,Never logged in,Never logged in,Never logged in,Never logged in

Observe that each of a, b and c show active "Yes" - counter to your wishes in BOTH the JSON file (for b and c) and in the UI (for a).  Atlassian will then gladly bill you for those users.

29 Jul: This issue has now been entered as a bug - "jiracloud-72606".

Further discovery - A follow-up to my post of Jul 8, regarding the behavior of the issue object "externalId" field.  If you want to create issue to issue links - without patching up the issues after the fact using the CLI (which creates meaningless noise in your issue history) - you kind of have to figure out how to turn your load into something monolithic and live with the behavior of that field.  You'll need to combine any projects A&B, where issues in A point to B or B point to A.

This worried me - because I have a large history to work with.  Over 20 years of issues (22K), a few dozen projects interconnected, and a population of present and past employees over 300.   Would the JSON load choke on a really large file?   Surprisingly enough, I gave it a 3.8M-line file - and while it took a long time (2+ hours), it got there (as noted previously, resist the urge to press the "Stop Import" button for any reason).

Jira has a rich set of specific link types with different specifically reciprocal semantics - relates, duplicates, blocks and clones.  That's kind of cute.  But the loader isn't so smart.  For the symmetric link type "relates" - if you happen to create both "A" links to "B" and "B" links to "A" in the JSON - you'll wind up with two links from A to B and two more from B to A.  You have to recognize that A -> B and B -> A are the same and suppress one of them in your JSON.  Arguably, that's a bug in the JSON loader.
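The suppression is easy enough to do in your generation script (a sketch; the link dicts use the "name"/"sourceId"/"destinationId" keys from the import doc, and I only treat "relates" as symmetric here):

```python
def dedupe_symmetric_links(links):
    """Drop the mirror of each symmetric link (e.g. "relates") so the
    importer doesn't create the A->B / B->A pair twice.  Each link is
    a dict with "name", "sourceId" and "destinationId" as in the
    JSON import documentation."""
    seen = set()
    out = []
    for ln in links:
        key = (ln["name"], frozenset((ln["sourceId"], ln["destinationId"])))
        if ln["name"] == "relates" and key in seen:
            continue  # mirror of a link we already kept
        seen.add(key)
        out.append(ln)
    return out
```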

Further discovery - if you're trying to upload via JSON, you're presumably doing something not altogether trivial.  Creating a number of projects and users - and performing a number of test imports in order to get the details right for your situation.

To begin with, you'll probably want to simply chop your JSON down to a subset of projects, issues and users to make the test import quick and to ease cleanup.  Clean up isn't hard in the web interface so long as the total number of projects and users isn't large.   But as you get further along you'll want to perform more realistic sized tests of projects and users.  Cleaning that up in the web interface between test iterations - can start to get odious.

You may be well served by getting access to the CLI - so that you can script delete of projects and users in your testing.

Further discovery - the maximum attachment size configured for Jira is also a limit on the size of the JSON file you can upload.  The error message was at least sufficiently informative - but it certainly wasn't something I was expecting.

Discovery elsewhere - if you need to import to a non-default status collection: I spent an hour trying to figure out how to modify the default workflow scheme - which apparently isn't the approach.  Instead, you specify a different workflow for your project in the JSON.  See "How to import workflow with project" - in particular, scan down to Robert Boxall's comment of 12 Nov 2018 noting "Well, after mining jira's own jira project...".

This doesn't seem overly complicated for Atlassian to remedy - am I missing something?

If you would like to know how to import issues into JIRA using JSON, you can try exporting existing issues as JSON first and looking at how the data is set up on your version.



Enable this module to export Issue Navigator results in JSON (beta) format. Note only admin can export the data but the JSON (beta) view will be visible to all users.



Enable this module to export Issue in JSON (beta) format. Note only admin can export the data but the JSON (beta) view will be visible to all users.


