I have modified my plugin to support Jira Data Center.
I use a cluster lock to restrict one of my processes so that it runs on only a single node at a time.
Question: In a cluster, we know that if one node goes down, its work is transferred to another node. But when we use a cluster lock, we restrict a process to one node. In that case, if that particular node goes down, will the load balancer switch the process to another node?
I'd think that if the lock was acquired by node "A" and "A" goes down, the lock simply gets released (either by timing out or because the cluster orchestration mechanism recognizes that "A" is no longer available). The process will not be moved to another node, at least I don't think so.
Instead, it is your responsibility to start the process again, either manually or via the cron scheduler.
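The "run on only one node" part can be sketched roughly like this. In Jira Data Center the cluster-wide lock would come from the `ClusterLockService` (its locks implement the standard `java.util.concurrent.locks.Lock` interface); here a plain `ReentrantLock` stands in so the sketch is self-contained, and the class name `SingleNodeJob` is just an illustration, not a real API:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SingleNodeJob {
    // In Jira DC you would inject ClusterLockService and call
    // getLockForName("my-plugin:my-job"); a ReentrantLock stands in here.
    private final Lock lock;

    public SingleNodeJob(Lock lock) {
        this.lock = lock;
    }

    /**
     * Runs the task only if no other node currently holds the lock.
     * Returns true if the task ran, false if another node was already running it.
     */
    public boolean runIfNotRunningElsewhere(Runnable task) {
        if (!lock.tryLock()) {
            return false; // some other node holds the lock: do nothing here
        }
        try {
            task.run();
            return true;
        } finally {
            lock.unlock(); // always release, even if the task throws
        }
    }

    public static void main(String[] args) {
        SingleNodeJob job = new SingleNodeJob(new ReentrantLock());
        job.runIfNotRunningElsewhere(() -> System.out.println("job ran on this node"));
    }
}
```

The important detail is `tryLock()` rather than `lock()`: nodes that don't get the lock skip the run instead of queuing up behind the owner.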
If this is a critical problem in your use case, I'd do this: have every job execution persist its start time and its end time (or at least a completion flag) to the database, or to a cluster-wide cache if the record doesn't need to be long-lived.
That way you can periodically check whether a job was started but never finished, and retry it.
Simple and cluster-safe, because you are using cluster-aware components (the database or a shared cache).
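That watchdog idea could look something like this. A `ConcurrentHashMap` stands in for the database table (in a real plugin the records would go through Active Objects or plain SQL so all nodes see them), and `JobWatchdog`, `markStarted`, `markFinished`, and `isStale` are hypothetical names for illustration:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JobWatchdog {
    /** One row per execution; in Jira DC this would live in a shared database table. */
    static final class JobRecord {
        final Instant startedAt;
        volatile Instant finishedAt; // stays null if the node died mid-run
        JobRecord(Instant startedAt) { this.startedAt = startedAt; }
    }

    private final Map<String, JobRecord> store = new ConcurrentHashMap<>();

    public void markStarted(String jobId, Instant now) {
        store.put(jobId, new JobRecord(now));
    }

    public void markFinished(String jobId, Instant now) {
        JobRecord r = store.get(jobId);
        if (r != null) {
            r.finishedAt = now;
        }
    }

    /**
     * A job is "stale" if it started longer than maxRuntime ago and never
     * finished, e.g. because the node holding the cluster lock went down.
     * A periodic check on any surviving node can then retry the job.
     */
    public boolean isStale(String jobId, Instant now, Duration maxRuntime) {
        JobRecord r = store.get(jobId);
        return r != null
                && r.finishedAt == null
                && r.startedAt.plus(maxRuntime).isBefore(now);
    }
}
```

The choice of `maxRuntime` matters: it must be comfortably longer than the slowest legitimate run, or a healthy node's job will be flagged as stale and retried concurrently.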