These have been intense years working in the Atlassian environment and getting to know a product that evolves every single year to make itself better. It is a product that actually works, so it is worth letting people in the business try it. Once the teams decide to convert to Jiraism, the questions start to rise:
“What are we going to do with our current data?”
“I don’t want to lose my 10 years of important data,”
“Can we integrate our system?”
“I don’t want to work half the day on an old system where creating one record takes forever, and the other half in Jira.”
It is necessary to let people feel that their precious data will still be there after it goes to a better place…
Then it is essential to look at the system to assess how the migration will take place and find out how people are actually using it.
And that’s where the nightmare starts… “Oh my god! How are we going to migrate all this data to Jira?!”
So, having said that… here are some recommendations and best practices to pull off a migration and not die trying:
Most of the time the out-of-the-box tools offered by JIM (the Jira Importers plugin) fall short of the customer’s requirements, because they want to customize some data. For example, instead of creating one project per product (in the case of Bugzilla), they may want to check whether the product has component X and, if so, assign it to project Y in Jira, and so on.
Sometimes the level of customization is such that you cannot use the out-of-the-box tools as they are: you need to build your own file, and I strongly recommend generating the JSON or CSV directly from the database. It is worth it; spend most of the project’s time building your CSV and JSON data structure.
If your source data is in PostgreSQL, you can use it to build a very solid file without pushing your information through the quirks of spreadsheet processors.
Of course, if you have the patience to build your JSON file through the database or by any other method, go for it; there is no single right way to do this. The only right way is that, no matter what, you get that info into Jira!
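As an illustration of that kind of custom mapping, here is a minimal sketch in PostgreSQL, assuming a Bugzilla-like schema (bugs, products, components) and made-up Jira project keys; a CASE expression re-routes issues to different Jira projects depending on their component, which is exactly the sort of rule JIM will not apply for you:

-- Sketch only: table and column names follow a typical Bugzilla schema,
-- and the Jira project keys (PROJY, PROJZ) are hypothetical
select b.bug_id,
       b.short_desc as "Summary",
       p.name       as "Product",
       c.name       as "Component",
       case
         when c.name = 'Component X' then 'PROJY'  -- special routing rule
         else 'PROJZ'                              -- default project
       end          as "Project Key"
from bugs b
join products   p on p.id = b.product_id
join components c on c.id = b.component_id;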
CSV is easier to manage and looks like a spreadsheet at the end of the day; but even heroes have limits, and the Achilles’ heel here is the History field: you cannot import it with CSV. One workaround is to use JSON once you have migrated the information and have a Jira ID for the issues.
An example of the final JSON file to update one record on History could be:
{ "projects":[{"name":"PROJECT NAME","key":"KEY","issues": [{"key":" KEY -4","history":[{"author":"messi","created": "2018-09-04T01:55:31.000+0200","items":[{"fieldType" : "custom","field" : "CC","fromString" : "NULL","toString" : "neymar@fcb.com"}]}]}]}]}
Of course, as always happens in school, the example looks like a really friendly pony, and when you see it on the real exam it is a huge monster with 100 heads. But you know… you get the idea…
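If you are building that History JSON straight from the database, as suggested above, PostgreSQL’s JSON functions can do most of the heavy lifting. A minimal sketch, assuming a hypothetical staging table bug_history(issue_key, author, created_at, field_name, old_value, new_value); the output still has to be wrapped in the projects/issues envelope shown above:

-- Sketch: one JSON object per issue; the bug_history table and its columns
-- are hypothetical, and related changes could be grouped further if needed
select json_build_object(
         'key', issue_key,
         'history', json_agg(
           json_build_object(
             'author',  author,
             'created', to_char(created_at, 'YYYY-MM-DD"T"HH24:MI:SS.MS') || '+0200',
             'items',   json_build_array(
               json_build_object(
                 'fieldType',  'custom',
                 'field',      field_name,
                 'fromString', old_value,
                 'toString',   new_value
               )
             )
           )
         )
       )
from bug_history
group by issue_key;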
Data processors such as Excel or OpenOffice are always limited:
In some cases, when you have a lot of information and cannot keep a microscope over your data all the time, these processors don’t even warn you that data was trimmed to fit, and you only realize it when you run the validation.
Sometimes there is naughty data that you won’t notice until it is jumping around in your production data.
Recommendation: Build your file (CSV or JSON) directly from the database.
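A minimal sketch of that recommendation in PostgreSQL (the file path and the selected columns are placeholders, reusing the Bugzilla-like names from above): let the database write the CSV itself, so no spreadsheet ever touches the data.

-- Server-side export; from a client you can use psql's \copy instead,
-- which writes the file locally and does not need superuser rights
copy (
    select b.bug_id     as "Issue ID",
           b.short_desc as "Summary",
           b.bug_status as "Status"
    from bugs b
) to '/tmp/jira_import.csv' with csv header;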
One of the most common questions I always hear is:
“How can I build my query if there is more than one column for each field?”
Such as: Comments, Attachments, Flags.
If you are using PostgreSQL, you can find the maximum number of values for that field and then expand them into your SELECT query as an array:
-- Find the maximum number of comma-separated values in the multi-value column
select max(array_length(regexp_split_to_array(csv, ','), 1))
from your_table;

-- Split the column into an array and expose each element as its own column
-- (here the maximum found above was 6)
select a[1], a[2], a[3], a[4], a[5], a[6]
from (
    select regexp_split_to_array(csv, ',')
    from your_table
) as dt(a);
It will require some manual stuff but believe me, it will save you a lot of time later.
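One extra hint that pairs well with this, as a sketch reusing the split above: give every expanded element the same alias, because Jira’s CSV importer accepts several columns with an identical header (for example Comment or Attachment) and maps them to multi-value fields.

-- Each array element becomes its own CSV column, all named "Comment",
-- which the CSV importer turns into multiple comments on the same issue
select a[1] as "Comment",
       a[2] as "Comment",
       a[3] as "Comment"
from (
    select regexp_split_to_array(csv, ',')
    from your_table
) as dt(a);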
End users are like the Katy Perry song: “they change their minds like a girl changes clothes,” and that can be a real problem if you don’t follow the requirement-change cycle. Go back to the analysis and design phase, weigh the pros and cons of the changing requirements, assess them with the customer, and reschedule the plan.
Emojis are not always happy.
If your customer stores code-like data, ensure that the team understands the impact of wiki markup tokens in the Comments and Description fields. Assess whether the project will be required to use the Non-Format renderer, or whether the original tokens should be transformed during the migration so that the Wiki renderer accepts them and shows them as they are. For example, the escaped text \-\-\-Press next to continue\-\-\- will be shown as ---Press next to continue---, which is exactly what the team wants to see. So there are a couple of workarounds for this:
Change the renderer to Non-Format: this is the last resort, because at some point every customer wants to write a smiley face after all ☺, so assess it with the team and list the pros and cons.
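For the other workaround, transforming the tokens while exporting, here is a minimal sketch in PostgreSQL; the column name short_desc is only an example, and other wiki tokens (such as *, _ or {) may need the same treatment:

-- Escape '-' on the way out so ---Press next to continue--- is stored as
-- \-\-\-Press next to continue\-\-\- and the Wiki renderer shows it literally
select replace(b.short_desc, '-', '\-') as "Description"
from bugs b;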
Encourage end users (stakeholders, the team leader, 5 team members) to be thorough with their tests and point out ANY single issue they notice; it is always better to spend more time in the testing phase than to light candles to all the saints hoping that nothing happens in Production.
Before a migration, be sure your database and Jira have been backed up.
Don’t forget to back up everything before running any of these methods.
If required, check with the add-on vendor to assess whether it is possible to migrate information related to add-ons, such as Components and Sub-components.
Run the statistic queries after the migration is completed and compare the numbers; if they don’t match what you did, restore immediately.
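As an example of such a statistic query, here is a sketch that assumes a Bugzilla-like source and direct access to the Jira Server/Data Center database (the jiraissue and project tables); if you cannot query Jira’s database, a JQL search per project gives you the same counts.

-- Source side: issues per product in the old system
select p.name, count(*) as total
from bugs b
join products p on p.id = b.product_id
group by p.name;

-- Jira side: issues per project after the migration
select pr.pkey, count(*) as total
from jiraissue i
join project pr on pr.id = i.project
group by pr.pkey;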
After a migration there will always be errors or better ways to do it; as people say:
“How many computer programmers does it take to screw in a lightbulb? One, and 99 to say they could’ve done it better…”
Of course, you can always trust an experienced team such as DNFCS Inc. to do the dirty work for you…
Daniel Alonso
Senior Atlassian Engineer
Trundl Inc.
North Carolina