Cluster Lock

Sa Kan
Rising Star
November 2, 2017

I have modified my plugin to support Jira Data Center. 

I use cluster lock to restrict one of my processes to run only in a single node at a time. 
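In a Jira Data Center plugin, a cluster-wide lock typically implements the standard `java.util.concurrent.locks.Lock` interface. The sketch below shows the `tryLock` pattern being described; it uses a local `ReentrantLock` as a stand-in for the cluster lock so the example is self-contained, and the class and method names are illustrative, not a real Jira API.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SingleNodeProcess {
    // Stand-in for a cluster-wide lock obtained by name from a lock service.
    private static final Lock clusterLock = new ReentrantLock();

    // Returns true if this node acquired the lock and ran the process,
    // false if another node already holds it.
    public static boolean runExclusively(Runnable process) {
        if (!clusterLock.tryLock()) {
            return false;              // some other node is running it
        }
        try {
            process.run();             // only one node gets here at a time
            return true;
        } finally {
            clusterLock.unlock();      // always release, even on failure
        }
    }

    public static void main(String[] args) {
        boolean ran = runExclusively(() -> System.out.println("running on this node"));
        System.out.println("acquired: " + ran);
    }
}
```

A node that fails `tryLock` simply skips the work rather than blocking, which is what restricts the process to a single node at a time.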

Question: In a cluster, we know that if one node goes down, its work is transferred to another node. But when we use a cluster lock, we restrict a process to one node. In that case, if that particular node goes down, does the load balancer switch the process to another node?

2 answers

2 accepted

0 votes
Answer accepted
Sa Kan
Rising Star
November 8, 2017

Yeah, got it. So the cluster lock is held by a single node, and if that node breaks, the process will stop. Thanks Aron.

0 votes
Answer accepted
Aron Gombas _Midori_
Community Leader
November 3, 2017

I'd think that if the lock was acquired by node "A" and "A" goes down, the lock simply gets released (either by timing out or because the cluster orchestration mechanism recognizes that "A" is no longer available). The process will not be moved to another node, at least I don't think so.

Instead, it is your responsibility to restart the process, either manually or via the CRON scheduler.

Sa Kan
Rising Star
November 7, 2017

Thanks for the reply, Aron. I also thought that it would stop and never continue. In such cases, we lose the failover capabilities of a clustered architecture, right? Anyway, I will do my testing and post the answer here.

Aron Gombas _Midori_
Community Leader
November 7, 2017

If this is a critical problem in your use case, I'd do this: every job execution should persist its start time and its end time (or at least some completion flag) to the database, or even to a cluster-wide lock if the record doesn't need to be long-lived.

That way you could periodically check whether there was a job that got started but never finished, and then retry it.

Simple and cluster-safe, because you are using cluster-aware components (database or shared cache).
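The pattern above can be sketched as follows. The in-memory map stands in for the cluster-aware store (the Jira database or a shared cache), and the class and method names are illustrative, not a real Jira API: a job is recorded when it starts, flagged at completion, and a periodic check collects any job that started but never finished, e.g. because its node went down while holding the cluster lock.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JobLedger {
    // jobId -> finished flag; stands in for a database table or shared cache.
    private static final Map<String, Boolean> ledger = new HashMap<>();

    public static void markStarted(String jobId) {
        ledger.put(jobId, false);      // persisted before the work begins
    }

    public static void markFinished(String jobId) {
        ledger.put(jobId, true);       // persisted at completion
    }

    // Periodic check: any job that started but never finished is a retry candidate.
    public static List<String> findStalled() {
        List<String> stalled = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : ledger.entrySet()) {
            if (!e.getValue()) {
                stalled.add(e.getKey());
            }
        }
        return stalled;
    }

    public static void main(String[] args) {
        markStarted("reindex");
        markStarted("cleanup");
        markFinished("cleanup");              // "cleanup" completed normally
        System.out.println(findStalled());    // "reindex" never finished -> retry it
    }
}
```

In a real plugin the timestamps would let the watchdog distinguish a genuinely stalled job from one that is merely still running, e.g. by only retrying jobs whose start time is older than the longest expected duration.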
