In our use case we make use of issues with sub-tasks. The issues act almost like a folder for the sub-tasks and are intended to steer all of them (e.g. the priority of the issue defines the priority of the sub-tasks). The number of sub-tasks can be very large (e.g. 10k).
Occasionally we may need to change the priority of, or add descriptions to, all the sub-tasks of an issue. We created a manually triggered automation that cascades changes down to the sub-tasks using related items. Because of the large number of sub-tasks, this has resulted in automation runs being throttled.
The speed at which updates occur is not particularly critical, so I'm wondering whether throttling means the remaining changes are aborted or just processed more slowly.
We realize that we're running into this problem because sub-tasks probably weren't intended to be used at this scale. We also realize that we could use the bulk change functionality multiple times, but from our perspective repeating the update so many times is inconvenient and increases the likelihood that we introduce inconsistencies.
I'm looking for an answer to that question, or for alternative ideas.
Thanks in advance!
Hi @Kevin Phang
Without seeing your rule, I wonder if this can even work: rules have a processing limit of 100 items for triggers, branches, lookup issues, etc. Unless your rule is constructed to run repeatedly in chunks (e.g. with a scheduled trigger and a JQL condition), it shouldn't be able to get through everything. Are you observing a rule process more than 100 subtasks in a single execution?
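To illustrate the chunking idea, here is a minimal Python sketch, not an actual Automation rule: a scheduled run repeatedly selects up to 100 sub-tasks that still need the change (the role a JQL condition like `parent = PROJ-1 AND priority != High` would play) and updates only that chunk, so each execution stays within the per-run item limit. The data, field names, and `PROJ-1` key are all simulated placeholders.

```python
# Simulate chunked processing: each "rule run" handles at most
# CHUNK_SIZE sub-tasks, mimicking a scheduled trigger plus a JQL
# condition that re-selects only sub-tasks not yet updated.

CHUNK_SIZE = 100  # Automation's per-execution item limit

def fetch_pending(subtasks, limit=CHUNK_SIZE):
    """Stand-in for a JQL query such as:
       parent = PROJ-1 AND priority != High ORDER BY key
    Returns at most `limit` sub-tasks still needing the change."""
    pending = [s for s in subtasks if s["priority"] != "High"]
    return pending[:limit]

def run_once(subtasks):
    """One scheduled 'rule run': update one chunk, report its size."""
    chunk = fetch_pending(subtasks)
    for issue in chunk:
        issue["priority"] = "High"  # the cascaded change
    return len(chunk)

def cascade(subtasks):
    """Keep running scheduled chunks until nothing is left to do."""
    runs = 0
    while run_once(subtasks) > 0:
        runs += 1
    return runs

# 250 simulated sub-tasks -> chunks of 100, 100, 50 across 3 runs
issues = [{"key": f"PROJ-{i}", "priority": "Low"} for i in range(250)]
print(cascade(issues))  # -> 3
```

Because each run re-queries for still-pending sub-tasks rather than relying on saved state, a run that is throttled or halted simply leaves work for the next scheduled run to pick up.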
And even if it does work intermittently, the processing limits for your license level are probably causing the throttling: https://support.atlassian.com/cloud-automation/docs/automation-service-limits/
Next, throttling: just as with an automation engine outage or production incident, I hypothesize there is no guarantee such rules will pick up where they left off, or even run based on previously raised events. Your rule would need some mechanism to record its state at the time of halting to support restarting, which is not a built-in feature, I believe.
Finally, I am curious what your use case is for having 10k subtasks under one issue. That seems like quite a lot for one issue association.
Kind regards,
Bill
Thanks for your reply, @Bill Sheboy . We use related-items branching, so it does access more than 100 issues (and more than the 1,000-issue JQL limit). But after more investigation it does look like the rule fails at a certain point and does not resume, so your hypothesis was correct.
As for our use case:
We use Jira to co-ordinate the work of a web research team. We use the issue as a "project", which can have 10k tasks. These "projects" are not always completed, so we keep outstanding research tasks in the queue; that way, when higher-priority research tasks are completed, we always have work available in the hopper.
I've also considered whether we should cap the size of each issue and make use of epics, so that at any given time we're only handling the prioritization of, say, 1k subtasks.
I've also been looking into marketplace alternatives: JXL, to conveniently perform large bulk updates, and ScriptRunner, since it mentions unlimited automations and the option of more targeted queries to make updates more efficient.
Maybe I'm just not using the right product for my use case :)