Restrict subtask issueType options based on its parent issue's issueType

Rodrigo Silva
Contributor
April 12, 2017

Hi everyone,

I'm trying to restrict the available issue types on subtask creation, based on the issue type of its parent issue. For example, when the parent issue type is "Sample", I want subtask creation to offer only one option, "Issue subtask".
So far I've implemented the following script as an initialiser function on a Behaviour, mapped to the issueType field:

import static com.atlassian.jira.issue.IssueFieldConstants.ISSUE_TYPE
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.component.ComponentAccessor

Issue parentIssue = underlyingIssue?.parentObject // underlyingIssue can be null on the create screen

def constantsManager = ComponentAccessor.getConstantsManager()
def sampleIssueType = constantsManager.getAllIssueTypeObjects().find { it.name == "Sample" }

if (parentIssue?.issueType == sampleIssueType) {
    def issueSampleIssueType = constantsManager.getAllIssueTypeObjects().find { it.name == "Issue subtask" }

    // In other cases, I'll need multiple options

    getFieldById(ISSUE_TYPE).with {
        setFormValue(issueSampleIssueType.id)
        setReadOnly(true)
    }
}

However, it is not working as expected: all issue types are still available when I create a subtask of a "Sample" issue.

Can someone please help me with that?

Thanks in advance,
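[Editorial note] A commonly suggested variant of the script above restricts the selectable options themselves rather than pre-setting a value and making the field read-only. The sketch below is not a verified fix; it assumes ScriptRunner Behaviours exposes setFieldOptions on the issue type form field and that underlyingIssue may be null on the create screen:

```groovy
// Hedged sketch: restrict the issue type options shown on subtask creation.
// Assumes this runs as a Behaviours initialiser, where getFieldById and
// underlyingIssue are provided by ScriptRunner.
import static com.atlassian.jira.issue.IssueFieldConstants.ISSUE_TYPE
import com.atlassian.jira.component.ComponentAccessor

def parentIssue = underlyingIssue?.parentObject   // null-safe: no parent on some screens
def constantsManager = ComponentAccessor.constantsManager

if (parentIssue?.issueType?.name == "Sample") {
    // Keep only the issue types allowed for this parent; extend the list
    // with more names for the cases that need multiple options.
    def allowed = constantsManager.allIssueTypeObjects.findAll {
        it.name in ["Issue subtask"]
    }
    getFieldById(ISSUE_TYPE).setFieldOptions(allowed)
}
```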

3 answers

1 accepted

0 votes
Answer accepted
Joerg
Contributor
February 14, 2022

I think I might have figured out why the job searches were not working. Still testing this in detail:

I think all jobs were essentially sharing a single cached SearcherContext (which is used to search the index), and the recent version of the search method was missing a finally block such as:

}finally{
ThreadLocalSearcherCache.stopAndCloseSearcherContext()
}

This means that whenever a job used the search method, it created a SearcherContext that was never closed; this stale cached SearcherContext then got reused by other jobs, and it apparently holds or references a cached version of the index.
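Spelled out, the pattern the missing finally block should implement looks roughly like this. This is only a sketch of the try/finally shape: the class name com.atlassian.jira.util.searchers.ThreadLocalSearcherCache comes from older Jira Server versions and may differ on your instance, and the search body is a placeholder:

```groovy
// Sketch: always close the SearcherContext, even when the search throws.
// Package and method names are assumptions based on older Jira Server APIs.
import com.atlassian.jira.util.searchers.ThreadLocalSearcherCache

def runSearch(Closure body) {
    ThreadLocalSearcherCache.startSearcherContext()
    try {
        body()   // run the actual Lucene/JQL search here
    } finally {
        // Without this, the context leaks and later jobs reuse a stale
        // cached searcher that references an old version of the index.
        ThreadLocalSearcherCache.stopAndCloseSearcherContext()
    }
}
```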

The whole SearcherContext opening was originally done to prevent this issue, where the logs get flooded:

https://community.atlassian.com/t5/Marketplace-Apps-Integrations/Incorrect-usage-of-JIRA-lucene-search-API/qaq-p/1503542#U1902018

Hubbitus
Contributor
March 23, 2023

Hello.

If I understand correctly, @Joerg, your assumption is that the ScriptRunner job executor manages the search index incorrectly?

Are there any fixes for that in some Groovy ScriptRunner version?

I also experience a problem: the estimate next to the JQL field says the job will run on 21 issues, but when I actually run it, the log shows 40 issues processed.

Hubbitus
Contributor
March 23, 2023

Are there perhaps any workarounds for this issue?

Joerg
Contributor
March 28, 2023

Hey.

Sorry, but it has been so long that I don't remember the exact details.

There was some strangeness going on there, but since we are preparing to move to Jira Cloud, we didn't put any more time into researching these problems.

I remember it being the opposite of what you described: it would show e.g. 5000 issues next to the JQL field, but when the job script actually ran, it processed exactly 50% of the issues each time, until the number of issues to be processed dropped below a certain threshold. I don't know why.

Joerg
Contributor
March 30, 2023

Oh, and be very careful when closing or creating a SearcherContext via script. It can lead to some scripts not running at all, or returning false results.

Don't mess with this if things are working fine.

Hubbitus
Contributor
April 8, 2023

I believe I do not use a SearcherContext explicitly in my script.

I just want to process all issues found by the JQL. Right now that works incorrectly: the script processes more issues than the JQL in the settings actually finds. That looks very strange to me.

1 vote
Joerg
Contributor
February 3, 2022

Alright so I have no idea what is going on, but something isn't working still.

This Escalation Job should always find exactly 1 issue, and it does when I execute it manually, but not when it runs on its own:

[screenshot: ARGG.png]

1 vote
Joerg
Contributor
February 1, 2022

It now randomly started to work. I have no idea why.
