Are automation rules resource-hungry?

Andrei [errno]
Rising Star
November 15, 2016

a generic question...

With the power of Atlassian's own Automation add-on and the new https://marketplace.atlassian.com/plugins/com.codebarrel.addons.automation/server/overview comes a price paid in resource utilization... what does that mean for a busy JIRA instance?

For example: with an "issue created" trigger, every event (across all projects) gets checked against a condition (some JQL) - see the sketch below. That sounds like a lot of activity if we have hundreds of new tickets created every hour and only one of them might actually match a rule per day... is the overhead of automation triggers worth it? Or is adding multiple (dozens to hundreds) custom events the way to go in this case?
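
To illustrate what I mean: as far as I understand it, each "issue created" event ends up being checked against the rule's JQL roughly like this (a rough sketch only - the helper and the rule JQL are made up, and the exact SearchService signatures may differ between JIRA versions):

```java
import com.atlassian.jira.bc.issue.search.SearchService;
import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.user.ApplicationUser;

public class JqlConditionCheck {

    private final SearchService searchService =
            ComponentAccessor.getComponent(SearchService.class);

    /** Does the freshly created issue satisfy the rule's JQL condition? */
    boolean matches(Issue created, String ruleJql, ApplicationUser ruleActor) throws Exception {
        // Narrow the rule's JQL down to this one issue, e.g.
        // "(project = SUPPORT AND priority = Blocker) AND issuekey = ABC-123"
        String jql = "(" + ruleJql + ") AND issuekey = " + created.getKey();

        SearchService.ParseResult parsed = searchService.parseQuery(ruleActor, jql);
        if (!parsed.isValid()) {
            return false; // invalid JQL in the rule - treat as "no match"
        }
        // One count query per event, for every event in the instance.
        return searchService.searchCount(ruleActor, parsed.getQuery()) > 0;
    }
}
```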

I guess I'm after a recommendation on the sweet spot between having multiple custom events in the system vs. one generic event (issue created) checked against multiple JQL conditions. Or something similar...

What would be the more effective approach resource-wise? And are these automation plugins resource-hungry?

thanks!

 

2 answers

1 accepted

4 votes
Answer accepted
andreas
Rising Star
November 15, 2016

Hi Errno,

I'm the co-founder of Code Barrel - we are the creators of https://marketplace.atlassian.com/plugins/com.codebarrel.addons.automation/server/overview.

My co-founder and I worked at Atlassian on JIRA for over 10 years, so we understand JIRA and the performance concerns of large customers pretty well. Having said that, there's no simple answer when it comes to performance - your mileage may vary depending on your unique setup (server hardware, database, memory, number of issues, Data Center, number of active users...).

However, I can shed some light on a few of the architectural decisions behind how we implemented Automation for JIRA Server to reduce its performance impact:

  • Firstly, we do as little work as possible synchronously when processing issue events. JIRA's events are synchronous by default, so we hand each event off to a background thread as soon as possible, minimising the impact on the thread serving the user (see the first sketch after this list).
  • The automation rules themselves are processed by a single-threaded executor. This keeps our resource footprint as small as possible for the overall stability of the system, with the trade-off that rule executions can occasionally be a little delayed (they get queued if there are a lot of events). In practice this performs very well.
  • If you're using JIRA Data Center, Automation for JIRA will scale with the number of nodes in your cluster. When an issue created event arrives, it is processed by only a single node, leaving the other nodes free to do other things (second sketch below).
  • When we implemented the Server version, we re-implemented our JIRA client to make internal Java API calls instead of REST calls, reducing the overhead on JIRA (an in-process API call is obviously faster than a loopback network request). In Cloud the only option is network calls, but in Server we only make internal API calls (third sketch below).
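
To make the first two points a bit more concrete, the hand-off looks roughly like this (a simplified sketch, not our actual code; RuleTriggerListener and processRulesFor are made-up names, while @EventListener, IssueEvent and the single-threaded executor are standard JIRA/Java APIs):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.atlassian.event.api.EventListener;
import com.atlassian.jira.event.issue.IssueEvent;

// Registration of the listener with JIRA's EventPublisher is omitted here.
public class RuleTriggerListener {

    // A single worker thread: rule executions queue up here instead of
    // competing with interactive requests for resources.
    private final ExecutorService ruleExecutor = Executors.newSingleThreadExecutor();

    @EventListener
    public void onIssueEvent(final IssueEvent event) {
        // Do the bare minimum on the caller's (synchronous) event thread:
        // enqueue the work and return immediately.
        ruleExecutor.submit(() -> processRulesFor(event));
    }

    private void processRulesFor(IssueEvent event) {
        // Hypothetical placeholder: find the rules listening for this event,
        // evaluate their conditions and run their actions - all off the
        // thread that served the user's request.
    }
}
```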

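For the Data Center point, one way to guarantee that a given event is handled by exactly one node is to guard the work with a cluster-wide lock. Again, this is just an illustration of the idea, not necessarily how Automation for JIRA implements it (ClusterLockService is the standard Data Center lock API; the class and lock key are made up):

```java
import java.util.concurrent.locks.Lock;

import com.atlassian.beehive.ClusterLockService;
import com.atlassian.jira.event.issue.IssueEvent;

public class SingleNodeEventProcessor {

    private final ClusterLockService clusterLockService;

    public SingleNodeEventProcessor(ClusterLockService clusterLockService) {
        this.clusterLockService = clusterLockService; // injected by the plugin system
    }

    public void process(IssueEvent event) {
        // Key the lock by something identifying the work (here simply the issue),
        // so one node claims it and the others skip it and stay free.
        Lock lock = clusterLockService.getLockForName("automation-issue-created-" + event.getIssue().getKey());
        if (lock.tryLock()) {
            try {
                runRulesFor(event);
            } finally {
                lock.unlock();
            }
        }
    }

    private void runRulesFor(IssueEvent event) {
        // Hypothetical placeholder for rule execution.
    }
}
```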

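And for the last point, fetching an issue through the internal API in a Server plugin is an in-JVM call rather than an HTTP round trip (again a sketch, not our actual client code; ComponentAccessor and IssueManager are standard JIRA Server APIs):

```java
import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.Issue;

public class InternalIssueLookup {

    public Issue loadIssue(String key) {
        // Direct in-process call: no loopback HTTP request, no JSON parsing,
        // no extra authentication round trip. In Cloud the equivalent would be
        // a REST call like GET /rest/api/2/issue/<KEY> over the network.
        return ComponentAccessor.getIssueManager().getIssueObject(key);
    }
}
```
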
We may revisit the single-threaded rule processor in future as more large Server customers try our add-on, but we know from Cloud that this approach scales quite well. In Cloud we have a single Kinesis queue processing requests from *all* JIRA instances using our add-on (several hundred), which is pretty much equivalent to the single-threaded model in Server, just across multiple JIRA instances. Granted, Cloud instances are much smaller and less active than large JIRA Server instances, but it's still good validation. In Cloud we can simply scale horizontally by adding more shards to the Kinesis queue; in Server we could take a similar approach by adding more rule executor threads, but for now we believe it's better to keep the footprint small.

So in summary, I wouldn't be too worried about prematurely optimising events or rules in your system. The impact should be quite low.

Hope this helps - if you have any more questions we'd be happy to answer them here!

Cheers,
Andreas

1 vote
Tommy Wilkinson November 15, 2016

Hey Errno.

We use quite a lot of triggers for a lot of different events in our workflow and things seem to be running fine. We don't create hundreds of tickets an hour, but we do have a fair few triggers (push notifications to Slack and internal systems, etc.) and the system copes well.

 

 
