I have modified my plugin to support Jira Data Center.
I use cluster lock to restrict one of my processes to run only in a single node at a time.
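As a minimal sketch of that pattern: in a Jira Data Center plugin the lock would typically come from Atlassian's injected `ClusterLockService` (via `getLockForName`), which returns a cluster-wide `Lock`. The sketch below substitutes a plain `ReentrantLock` so it runs anywhere; the class and method names around the lock acquisition are otherwise the assumption being illustrated, not verified Atlassian API usage.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SingleNodeJob {
    // In a real Data Center plugin this would be obtained from the injected
    // ClusterLockService, e.g. lockService.getLockForName("my.plugin.job").
    // A plain ReentrantLock stands in here so the sketch is self-contained.
    private final Lock clusterLock = new ReentrantLock();

    /** Runs the job only if this node wins the lock; returns whether it ran. */
    public boolean runIfLockAcquired(Runnable job) {
        if (!clusterLock.tryLock()) {
            return false; // another node holds the lock; skip this run
        }
        try {
            job.run();
            return true;
        } finally {
            clusterLock.unlock(); // always release, even if the job throws
        }
    }
}
```

The key point is `tryLock()` rather than `lock()`: a node that loses the race simply skips the run instead of blocking, which is what confines the process to one node at a time.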
Question: In a cluster, we know that if one node goes down, its work is normally taken over by another node. But a cluster lock restricts a process to a single node. In that case, if that particular node goes down, will the load balancer switch the process to another node?
I'd think that if the lock was acquired by node "A" and "A" goes down, the lock simply gets released (either by timing out or because the cluster orchestration mechanism recognizes that "A" is no longer available). The process will not be moved to another node, at least I don't think so.
Instead, it is your responsibility to restart the process, either manually or via the CRON scheduler.
Thanks for the reply, Aron. I also thought that it would stop and never continue. In such cases, we lose all the capabilities of a clustered architecture, right? Anyway, I will do my testing and post the answer here.
If this is a critical problem in your use case, I'd do this: every job execution should persist its start time and its end time (or at least some completion flag) to the database, or even to a cluster-wide lock if it doesn't need to be long-lived.
That way you could periodically check if there was a job that got started, but not finished, and then re-try.
Simple and cluster-safe, because you are using cluster-aware components (the database or a shared cache).
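The watchdog idea above can be sketched in plain Java. A `ConcurrentHashMap` stands in for the cluster-aware store (a database table or shared cache in a real deployment), and the class and method names are hypothetical, chosen just for this illustration.

```java
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class JobWatchdog {
    /** Execution record persisted at job start and again at completion. */
    static final class Execution {
        final Instant started;
        volatile Instant finished; // stays null if the node dies mid-job
        Execution(Instant started) { this.started = started; }
    }

    // Stand-in for the cluster-aware store (database or shared cache);
    // any surviving node can read it, which is what makes the check safe.
    private final Map<String, Execution> store = new ConcurrentHashMap<>();

    /** Persist the start marker, run the job, then persist completion. */
    public void run(String jobId, Runnable job) {
        store.put(jobId, new Execution(Instant.now()));
        job.run(); // if the node crashes here, no completion marker is written
        store.get(jobId).finished = Instant.now();
    }

    /** Periodic check: a job that started but never finished needs a retry. */
    public boolean needsRetry(String jobId) {
        Execution e = store.get(jobId);
        return e != null && e.finished == null;
    }
}
```

A scheduled task on every node can call `needsRetry` periodically; whichever node finds a stale start marker re-runs the job (ideally under the same cluster lock, so only one node actually retries).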