Migration Nightmare (Nightgration): Migrating from Bugzilla to Jira

These have been intense years working in the Atlassian environment and getting to know a product that evolves every single year to improve itself. It is a product that actually works, so it is worth letting people in the business try it. Once the teams decide to convert to Jiraism, the questions start to rise:

“What are we going to do with our current data?”

“I don’t want to lose my 10 year old important data”,

“Can we integrate our system?"

“I don’t want to work half the day on an old system where it takes me forever to create one record, and half the day in Jira.”


It is necessary to let people feel that their precious data will still be there after it goes to a better place…

Then it is essential to look at the system to assess how the migration will take place and find out how people are actually using it:

  • Pasting extremely long code logs
  • Attachments inside the database o_O!
  • Data is in XML; the only way to get it is through an API
  • Etc., etc.

And that's where the nightmare starts… “Oh my god! How are we going to migrate all this data to Jira?”

 

So, having said that… here are some recommendations and best practices to make a migration and not die trying:

Create a Confluence page with your checklist of steps and phases to cover the entire process

[screenshot: migration checklist table]

Create a Jira project to follow up on this epic; log every single task so you don't lose track of your activities.

[screenshot: Jira board with migration tasks]

Build a JSON or CSV

Most of the time, the out-of-the-box tools offered by JIM (the Jira Importers plugin) fall short of the customer's requirements, because they want to customize some data. For example: instead of creating one project per product (in the case of Bugzilla), identify whether the product has component X, assign it to project Y in Jira, and so on.

Sometimes the level of customization is such that you cannot use the out-of-the-box tools as they are; you need to build your own file, and I strongly recommend generating JSON or CSV directly from the database. It is worth it: spend most of the project's time building your CSV and JSON data structures.

If your source data is in PostgreSQL, you can use it to build a very solid file without passing your information through spreadsheet processors and inheriting their issues.

Of course, if you have the patience to build your JSON file through the database or any other method, you can do it; there are no right ways to do it. The only right way is that, no matter what, you need to migrate that info to Jira!
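As an illustration of building your own import file, here is a minimal Python sketch. The column names (bug_id, short_desc, long_desc) and the helper name are assumptions about a Bugzilla extract, not the real schema; adjust them to whatever your query actually returns:

```python
import json

def build_import_json(project_name, project_key, bug_rows):
    """Build a minimal Jira JSON-import structure from Bugzilla-like rows.
    Field names here are illustrative; map them to your real extract."""
    issues = []
    for row in bug_rows:
        issues.append({
            "externalId": str(row["bug_id"]),
            "summary": row["short_desc"],
            "description": row.get("long_desc", ""),
            "issueType": "Bug",
        })
    return {"projects": [{"name": project_name,
                          "key": project_key,
                          "issues": issues}]}

# Example rows as they might come out of a Bugzilla query
rows = [{"bug_id": 101, "short_desc": "Crash on save", "long_desc": "Steps..."}]
payload = build_import_json("PROJECT NAME", "KEY", rows)
print(json.dumps(payload, indent=2))
```

The point is that every mapping decision (component X goes to project Y, etc.) lives in code you control, instead of in an importer you can't customize.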

Know The File Limits

CSV is easier to manage and looks like a spreadsheet at the end of the day, but even heroes have limits. The Achilles' heel here is the History field: you cannot import it with CSV, so one workaround is to use JSON once you have migrated the information and have a Jira ID for the issues.

An example of the final JSON file to update the history of one record could be:

 

{
  "projects": [{
    "name": "PROJECT NAME",
    "key": "KEY",
    "issues": [{
      "key": "KEY-4",
      "history": [{
        "author": "messi",
        "created": "2018-09-04T01:55:31.000+0200",
        "items": [{
          "fieldType": "custom",
          "field": "CC",
          "fromString": "NULL",
          "toString": "neymar@fcb.com"
        }]
      }]
    }]
  }]
}

Of course, as always happens in school classes, the example looks like a really kind pony, and when you see it on the real exam it is a huge monster with 100 heads. But you know… you get the idea…
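If you generate that file from a script, a small helper keeps the nesting under control. A Python sketch using the same example values as above (the field "CC" is just a sample custom field, and the helper name is mine):

```python
import json

def history_entry(issue_key, author, created, field, from_str, to_str):
    """One issue with one history change, shaped like the example above."""
    return {
        "key": issue_key,
        "history": [{
            "author": author,
            "created": created,
            "items": [{
                "fieldType": "custom",
                "field": field,
                "fromString": from_str,
                "toString": to_str,
            }],
        }],
    }

doc = {"projects": [{
    "name": "PROJECT NAME",
    "key": "KEY",
    "issues": [history_entry("KEY-4", "messi",
                             "2018-09-04T01:55:31.000+0200",
                             "CC", "NULL", "neymar@fcb.com")],
}]}
print(json.dumps(doc, indent=2))
```

With a loop over your database rows, the 100-headed monster becomes a list comprehension instead of hand-edited JSON.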

Avoid spreadsheets

Data processors such as Excel or OpenOffice are always limited:

  • On the number of characters within a cell (32,767 in Excel / approx. 55,000 in OpenOffice)
  • Encoding
  • Compatibility

In some cases, when you have a lot of information and cannot keep a microscope on your data all the time, these processors don't even warn you that data was trimmed to fit, and you only realize it when you do the validation.

Sometimes there is naughty data that you won't notice until it is jumping around in your production data.

 

Recommendation: Build your file (CSV or JSON) directly from the database.
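To see why building the file yourself sidesteps the truncation problem, here is a quick sketch using only Python's standard csv module: a 40,000-character field would silently exceed Excel's 32,767-character cell limit, but it survives a plain CSV round trip intact.

```python
import csv
import io

# A description longer than Excel's 32,767-character cell limit
long_description = "x" * 40000

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Summary", "Description"])
writer.writerow(["Crash on save", long_description])

# Reading it back shows nothing was trimmed
buf.seek(0)
rows = list(csv.reader(buf))
print(len(rows[1][1]))  # 40000
```

The same applies to encoding: a script writes the bytes you tell it to, while a spreadsheet may re-encode or trim silently.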

One of the most common questions I always hear is:

“How can I build my query if there is more than one column for each field?”

Such as: Comments, Attachments, Flags.

If you are using PostgreSQL, you can select the maximum number of values for that field and add them to your SELECT query as an array.

-- Find the maximum number of values in the multi-valued column
select max(array_length(regexp_split_to_array(csv, ','), 1))
from your_table;

-- Split the column into an array and expose one element per output column
select a[1], a[2], a[3], a[4], a[5], a[6]
from (
   select regexp_split_to_array(csv, ',')
   from your_table
) as dt(a);

It will require some manual stuff but believe me, it will save you a lot of time later.
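The same padding can be done in a post-processing script instead of SQL. A Python sketch (the comma separator mirrors the query above; adapt it to your real delimiter and the function name is mine):

```python
def pad_multivalue(rows, sep=","):
    """Split a multi-valued column and pad each row to the max width,
    so every CSV row ends up with the same number of columns."""
    split = [r.split(sep) for r in rows]
    width = max(len(s) for s in split)
    return [s + [""] * (width - len(s)) for s in split]

print(pad_multivalue(["a,b,c", "a", "a,b"]))
# [['a', 'b', 'c'], ['a', '', ''], ['a', 'b', '']]
```

Either way, the goal is a rectangular file: CSV importers expect the same number of columns on every row.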

Be sure of your requirements; if they change, start the whole process from scratch, and test

End users are like the Katy Perry song: “They change their minds like a girl changes clothes,” and that can be a real problem if you don't follow the cycle of requirement changes: go back to the analysis and design phases, weigh the pros and cons of the changed requirements, assess them with the customer, and reschedule the plan.

Emojis are not always happy

If your customer stores code data, ensure the team understands the impact of wiki tokens in the Comments and Description fields. Assess whether the project needs to use the non-format renderer, or whether to transform the original code tokens so they are accepted and displayed as-is by the wiki renderer. For example, the code \-\-\-Press next to continue\-\-\- will be shown as ---Press next to continue---. The team wants to see the code exactly as it is, so there are some workarounds:

  • Add backslashes to the known wiki tokens (*, -, #, _, ??, ^, ~, {{, ----) so your stored code will look like ‘\-\-\-Press next to continue\-\-\-’ but will display correctly in the rendered view.
  • Surround all the text with the {noformat} token and it will be shown verbatim:
    [screenshot: text rendered inside a {noformat} block]

Change the renderer to non-format: this is the last resort, because at some point every customer wants to write a smiley face after all ☺, so assess it with the team and list the pros and cons.
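The backslash workaround is easy to script. A Python sketch that escapes the wiki tokens listed above (the token list and function name are mine, and a real migration would need to be more careful about tokens appearing in ordinary prose):

```python
import re

# Longest tokens first, so '----' is escaped as one token rather
# than being caught four times by the single '-' rule
WIKI_TOKENS = ["----", "{{", "??", "*", "-", "#", "_", "^", "~"]

def escape_wiki(text):
    """Backslash-escape known Jira wiki tokens so code is shown as-is."""
    pattern = "|".join(re.escape(t) for t in WIKI_TOKENS)
    return re.sub(pattern,
                  lambda m: "".join("\\" + c for c in m.group()),
                  text)

print(escape_wiki("---Press next to continue---"))
# prints: \-\-\-Press next to continue\-\-\-
```

Run this over the Comments and Description columns while building your import file, before the data ever reaches the wiki renderer.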

Spend more time testing; when you finish, test again

Encourage end users (stakeholders, team leader, five team members) to be forceful with their tests and point out ANY single issue they notice. It is always better to spend more time in the testing phase than to light candles to all the saints hoping nothing happens in production.

Backup Everything

Before a migration, be sure your database and Jira have been backed up.

Run Statistic Queries Before Migrating

  • Run an issue counter query to verify how many issues exist before updating
  • Run a comment counter query to verify how many comments exist before creating them
  • Run a query to get all issues that were updated since the migration process started
  • Compare the number of comments between Jira and Bugzilla
  • Look for issues with the same summary
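The before/after comparison itself is worth automating. A Python sketch that diffs two sets of counters; in practice the dictionaries would be filled from the queries above (the metric names here are examples):

```python
def compare_counts(before, after):
    """Return the metrics whose counts don't match between source and
    target, as {metric: (source_count, target_count)}."""
    return {k: (before[k], after[k])
            for k in before
            if after.get(k) != before[k]}

bugzilla = {"issues": 1500, "comments": 9800}
jira     = {"issues": 1500, "comments": 9795}
print(compare_counts(bugzilla, jira))  # {'comments': (9800, 9795)}
```

An empty result means the counts match; anything else tells you exactly which query to re-run and investigate.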

After Migration: Update History with the JSON Method

Don’t forget to back up everything before running this method.

Use the API to upload add-on information and project information.

If required, work with the add-on vendor to assess whether it is possible to migrate information related to add-ons, such as components and sub-components.

Verification Queries

Run the statistic queries again after the migration is completed and compare the numbers; if they don't match, restore what you did immediately.

Learn from the process

After a migration there will always be errors, or better ways to do it. As people say:

“How many computer programmers does it take to screw in a lightbulb? One, and 99 to say they could've done it better…”

Of course, you can always trust an experienced team such as DNFCS Inc. to do the dirty work for you…
