I have modified my plugin to support Jira Data Center.
I use a cluster lock to restrict one of my processes so that it runs on only a single node at a time.
Question: In a cluster, if one node goes down, its work is normally taken over by another node. But with a cluster lock we restrict the process to one node. In that case, if that particular node goes down, does the load balancer switch the process over to another node?
I'd think that if the lock was acquired by node "A" and "A" goes down, the lock simply gets released (either by timing out, or because the cluster orchestration mechanism recognizes that "A" is no longer available). The process will not be moved to another node, at least I don't think so.
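For context, the usual pattern in a Jira Data Center plugin is to guard the job with a lock obtained from `ClusterLockService.getLockForName(...)` and use `tryLock()`, so a node that fails to acquire the lock simply skips that run. Here is a minimal sketch of that pattern; since `ClusterLock` implements the standard `java.util.concurrent.locks.Lock` interface, a plain `ReentrantLock` stands in below so the example runs outside Jira (the lock name `"my.plugin.job"` is just an illustration):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class GuardedJob {
    // In a real plugin this would come from the injected ClusterLockService:
    //   Lock lock = clusterLockService.getLockForName("my.plugin.job");
    // A ReentrantLock stands in here so the sketch is self-contained.
    private final Lock lock = new ReentrantLock();

    /** Runs the job body only if this "node" wins the lock; returns true if it ran. */
    public boolean runIfLockFree(Runnable body) {
        if (!lock.tryLock()) {
            return false; // another node holds the lock: skip this scheduled run
        }
        try {
            body.run();
            return true;
        } finally {
            lock.unlock(); // always release, or other nodes stay locked out
        }
    }
}
```

Because every node's scheduler can fire the job and only the lock winner proceeds, a surviving node picks up the next scheduled run after the lock-holding node disappears.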
Instead, it is your responsibility to start the process again, either manually or via a cron scheduler.
If this is critical in your use case, I'd do the following: every job execution should persist (to the database, or even to a cluster-wide lock if it doesn't need to be long-lived) its start time and its end time, or at least some completion flag.
That way you can periodically check whether a job was started but never finished, and retry it.
Simple and cluster-safe, because you are using cluster-aware components (database or shared cache).
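The persist-and-check idea above can be sketched as a small watchdog. This is only an illustration of the bookkeeping, not a Jira API: the `JobWatchdog` class, its method names, and the in-memory map standing in for the shared database table are all hypothetical.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JobWatchdog {
    /** One row per job execution; in a real plugin this lives in the shared database. */
    static final class JobRecord {
        final Instant started;
        volatile Instant finished; // null while running (or if the executing node died)
        JobRecord(Instant started) { this.started = started; }
    }

    // Stand-in for a cluster-shared table, keyed by job id.
    private final Map<String, JobRecord> records = new ConcurrentHashMap<>();

    /** Persist the start time when a node picks up the job. */
    public void markStarted(String jobId, Instant now) {
        records.put(jobId, new JobRecord(now));
    }

    /** Persist the end time on successful completion. */
    public void markFinished(String jobId, Instant now) {
        JobRecord r = records.get(jobId);
        if (r != null) r.finished = now;
    }

    /** Jobs that started but never finished within the timeout: candidates for retry. */
    public List<String> findStale(Instant now, Duration timeout) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, JobRecord> e : records.entrySet()) {
            JobRecord r = e.getValue();
            if (r.finished == null && r.started.plus(timeout).isBefore(now)) {
                stale.add(e.getKey());
            }
        }
        return stale;
    }
}
```

A scheduled task on each node would call `findStale(...)` periodically and re-run (or alert on) anything it returns; because the records live in the shared database, any surviving node can do the check.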