Upgrading Greenhopper - takes ages and seems to duplicate field configurations

I am doing a major upgrade this weekend and am running out of testing time.

I am upgrading from JIRA 4.4.3 with GH 5.7.4 to JIRA 5.2.2 and GH 6.1.1. The GH upgrade tasks, particularly task 13, take at least 24 hours.

Two questions really:

1) Would you upgrade JIRA separately, then install the new version of GH and let it run its tasks? Or let the GH upgrade run immediately after the JIRA upgrade by putting the new jar in the installed-plugins dir?

2) The second question is more complex.

The GH upgrade appears to be adding tens of thousands of rows to FIELDLAYOUTITEM. The relevant bit of the thread dump:

        at $Proxy487.storeEditableFieldLayout(Unknown Source)
        at com.atlassian.greenhopper.customfield.CustomFieldServiceImpl.makeFieldRequired(CustomFieldServiceImpl.java:383)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl$EpicLabelFieldFactory.applyEpicLabelFieldConfiguration(EpicCustomFieldServiceImpl.java:310)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl$EpicLabelFieldFactory.postCreate(EpicCustomFieldServiceImpl.java:298)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl$BaseFieldFactory.create(EpicCustomFieldServiceImpl.java:254)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl.createDefaultField(EpicCustomFieldServiceImpl.java:201)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl.getDefaultField(EpicCustomFieldServiceImpl.java:168)
        - locked <0x00000007ea02d5a8> (a com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl)
        at com.atlassian.greenhopper.service.issuelink.EpicCustomFieldServiceImpl.getDefaultEpicLabelField(EpicCustomFieldServiceImpl.java:75)
        at com.atlassian.greenhopper.upgrade.GhUpgradeTask026.performUpgrade(GhUpgradeTask026.java:34)
        at com.atlassian.greenhopper.upgrade.AbstractGhUpgradeTask.doUpgrade(AbstractGhUpgradeTask.java:50)

So why is it adding so many rows? com.atlassian.jira.issue.fields.layout.field.AbstractFieldLayoutManager#storeAndReturnEditableFieldLayout is synchronised and has this comment:

* THIS METHOD MUST BE SYNCHRONIZED!!!! So that only one thread updates the database at any one time. "Fields are
* duplicated" if this method is not synchronized.

The method is synchronised, but duplication still seems to be happening, which possibly explains why the upgrade is taking so long.

Anyone got any ideas?

4 answers

Accepted answer

I think the first thing to check is whether you have a large number of Rank-type fields. If so, expensive operations such as re-ranking all the issues for each field, setting the new rank for each field, and so on may simply be running their natural course. That would fit with the lack of errors in the logs.

Otherwise, it doesn't seem unreasonable to scan through the code, do step 13 by hand, and fake it by setting the "GreenHopper.Upgrade.Latest.Upgraded.Version" property to 13, since that number is apparently unlucky.
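For illustration, the "fake the marker" idea would amount to a one-row update in JIRA's property tables. JIRA keeps application properties in propertyentry with the value in a typed side table; whether this GH marker lives in propertynumber or propertystring, and the exact ids, are assumptions on my part, so verify against your own schema (and a DB backup) first. The sketch below runs against an in-memory SQLite stand-in, not a real JIRA database:

```python
# Hypothetical sketch: bump the GH upgrade marker so task 13 is skipped.
# Table/column names (propertyentry, propertynumber) are assumptions about
# JIRA's schema -- check your own instance before touching production.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the JIRA database
conn.executescript("""
CREATE TABLE propertyentry (id INTEGER PRIMARY KEY, property_key TEXT);
CREATE TABLE propertynumber (id INTEGER PRIMARY KEY, propertyvalue INTEGER);
INSERT INTO propertyentry VALUES
    (10100, 'GreenHopper.Upgrade.Latest.Upgraded.Version');
INSERT INTO propertynumber VALUES (10100, 12);  -- pretend task 12 was last
""")

# Mark upgrade task 13 as already run (only after doing its work by hand).
conn.execute("""
UPDATE propertynumber
   SET propertyvalue = 13
 WHERE id = (SELECT id FROM propertyentry
              WHERE property_key = 'GreenHopper.Upgrade.Latest.Upgraded.Version')
""")

(value,) = conn.execute(
    "SELECT propertyvalue FROM propertynumber WHERE id = 10100").fetchone()
print(value)  # 13
```

Either way, do this with JIRA shut down, and only after confirming where your instance actually stores that property.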

Also, I think the FIELDLAYOUTITEM growth comes from setting up the new Rank fields on a huge number of screens.

Finally, I should mention that if you do the JIRA upgrade without GH and then back up the DB, you can redo the GH upgrade without having to go back and do the whole show.

Hi John,

Step 13 is actually a large number of different tasks, so faking it seems risky. We have 3 rank fields. I wish I could normalise them down to one before doing the upgrade, but there's no time. Actually, I might try, as I think that could cut the time by two thirds.

All of the time is spent in storeAndReturnEditableFieldLayout. Most of task 13's "sub-tasks" seem to iterate through every field config, and we have 160. With 3 rank fields, it probably goes through them 480 times in total. Each pass involves removing and re-adding a number of rows equal to the number of orderable fields in the system - roughly 1,400.
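A back-of-envelope model makes the scale of that churn clear. The figures (160 field configs, 3 rank fields, ~1,400 orderable fields) are from this thread; the assumption that each store call deletes and re-inserts every orderable-field row is my reading of the behaviour described above:

```python
# Rough cost model of the FIELDLAYOUTITEM churn during GH upgrade task 13.
field_configs = 160       # editable field layouts in this instance
rank_fields = 3           # Rank-type custom fields
orderable_fields = 1400   # rows rewritten per storeAndReturnEditableFieldLayout call

store_calls = field_configs * rank_fields           # one pass per config per rank field
rows_touched = store_calls * orderable_fields * 2   # each row deleted then re-inserted

print(store_calls, rows_touched)  # 480 1344000
```

Roughly 1.3 million single-row writes, all funnelled through one synchronised method, is consistent with a 24-hour runtime; it also shows why dropping from 3 rank fields to 1 would cut the work by two thirds.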

So I think what is really killing us is that storeAndReturnEditableFieldLayout is not very efficient, multiplied by the massive number of custom fields that we have.

Plus Oracle is just awful at this, so I'm seeing if I can do the whole thing on Postgres and stuff it back into Oracle before the weekend is out.

Thanks for your help.

This is a large instance (over 420k issues), but yeah, 24 hours is still unacceptable!

24 hrs?? :( Will get this checked tomorrow, Jamie.

Or can you raise a support request?

Thanks, I already did: GHS-6825

Clock is ticking for me here!
