Jira Data Center Remote Cache not replicate to other nodes on updates

I'm testing my caching logic in Jira Data Center. 

CacheSettings settings = new CacheSettingsBuilder()
        .remote()
        .replicateViaCopy()
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .build();
this.myCache = cacheManager.getCache(MY_CACHE_KEY, new LicenseUtilCacheLoader(), settings);

My cache needs to be highly consistent across all the nodes in the cluster: an update made to the cache on one node must be replicated to the other nodes.

# This ID must be unique across the cluster 
jira.node.id = node1
# The location of the shared home directory for all Jira nodes
jira.shared.home = /users/dev/jira/7133_shared
ehcache.listener.hostName = 127.0.0.1
ehcache.listener.port = 40001
ehcache.object.port = 40011

With the configuration above, an update to the cache isn't replicated to the other nodes in the cluster, but removing a cache entry does invalidate it on the other nodes.

 

Is anything wrong with my implementation?

 

 

3 answers

1 accepted

0 votes
Answer accepted

********************************************************************************

*** Dear developers, please do not use caches replicated via copy in JIRA ***

********************************************************************************

Hi @William Tan -ServiceRocket- 

Please consider changing the design of your plugin to use caches replicated via invalidation instead of caches replicated via copy.

With the configuration above, the update of a cache isn't being replicated to other nodes in the cluster.

Please show me some code for how you are doing updates. Remember, you need to put the new value into the cache in order to replicate it.

But the removal of the cache will invalidate the cache in other nodes. 

What do you mean by the removal of the cache? Are you talking about removing a value from a cache?

Paste some code showing how you do updates and removals, and when you initialise the cache, and I will be able to help.

What do you mean by the removal of the cache? Are you talking about removing a value from a cache?

Yes, removing the cache value by calling myCache.remove(key);

The code below is a simplified version of our implementation.

 

Can I expect the following if I run the code below on 3 nodes (node1, node2, and node3), with the same key (example_key)?

  • The first get request hits node1; the value isn't in myCache, so MyCacheLoader#load runs to compute it.
  • The second get request hits node2 or node3; myCache.get("example_key") should return the value "0".
  • node3 updates myCache with myCache.put("example_key", "1").
  • A get request (myCache.get("example_key")) on node1 and node2 should then return the value "1".
  • myCache.remove("example_key") on node1 will invalidate the entry on node2 and node3.

import java.util.concurrent.TimeUnit;

import javax.annotation.Nonnull;

import com.atlassian.cache.Cache;
import com.atlassian.cache.CacheLoader;
import com.atlassian.cache.CacheManager;
import com.atlassian.cache.CacheSettings;
import com.atlassian.cache.CacheSettingsBuilder;

public class MyCacheExample {
    private static final String CACHE_KEY = "com.example.Example.cache";
    protected Cache<String, String> myCache;

    public MyCacheExample(CacheManager cacheManager) {
        CacheSettings settings = new CacheSettingsBuilder()
                .remote()
                .replicateViaCopy()
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build();
        this.myCache = cacheManager.getCache(CACHE_KEY, new MyCacheLoader(), settings);
    }

    public Boolean getCachedStatus(String key) {
        String cachedValue = this.myCache.get(key);
        return cachedValue != null && cachedValue.equals("1");
    }

    public void updateCache(String key, String value) {
        this.myCache.put(key, value);
    }

    public void removeCache(String key) {
        this.myCache.remove(key);
    }

    private class MyCacheLoader implements CacheLoader<String, String> {
        @Nonnull
        @Override
        public String load(@Nonnull String key) {
            // Re-calculate the cache value
            return "0";
        }
    }
}

Is my expectation correct? 

Hi @William Tan -ServiceRocket- ,

 

I think there is a cache misconfiguration problem here: you have a cache replicated by copy with a loader.

 

1) Please consider changing this cache to a cache replicated via invalidation:

replicateViaInvalidation

Where is the cache getting its value from? If it's from some persistent storage shared between the nodes (a database, a file), do the following operations when updating a value:

somePersistentStorageSharedBetweenNodes.updateEntity(entity);
cacheReplicatedViaInvalidation.remove(entity.key);

This will make sure that entity.key is removed from the cache on all nodes, and the next call to cache.get(entity.key) will trigger the loader (on each node where it is called) to load the value. This guarantees consistent values for the key across the cluster.
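The remove-then-reload pattern above can be illustrated with a plain-Java simulation (this is not the Atlassian API; it just models per-node caches, a broadcast invalidation, and a loader backed by shared storage):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Plain-Java simulation of a cache replicated via invalidation: each node
// keeps a local cache, remove() is broadcast to all nodes, and misses fall
// through to a loader backed by shared storage.
public class InvalidationDemo {
    // Shared persistent storage visible to every node (e.g. the database).
    static final Map<String, String> sharedStore = new HashMap<>();

    static class Node {
        final Map<String, String> localCache = new HashMap<>();
        // The loader reads the authoritative value from the shared store.
        final Function<String, String> loader = sharedStore::get;

        String get(String key) {
            return localCache.computeIfAbsent(key, loader);
        }
    }

    static final Set<Node> cluster = new HashSet<>();

    // Broadcast invalidation: drop the key from every node's local cache.
    static void invalidate(String key) {
        for (Node n : cluster) n.localCache.remove(key);
    }

    // The recommended update pattern: write to shared storage, then invalidate.
    static void update(String key, String value) {
        sharedStore.put(key, value);
        invalidate(key);
    }

    public static void main(String[] args) {
        Node node1 = new Node(), node2 = new Node();
        cluster.add(node1);
        cluster.add(node2);

        sharedStore.put("example_key", "0");
        System.out.println(node1.get("example_key")); // 0 (loader fills node1)
        System.out.println(node2.get("example_key")); // 0

        update("example_key", "1"); // write + invalidate everywhere
        System.out.println(node1.get("example_key")); // 1 (reloaded from shared store)
        System.out.println(node2.get("example_key")); // 1
    }
}
```

The key point is that update() never copies the value between nodes; it only writes to the shared store and drops the stale entries, so each node's loader re-reads the authoritative value on the next get.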

 

2) If for some reason this value can be calculated only once and can't be stored in a shared resource (please explain why), then the cache replicated via copy should have no loader (a null loader).

Then to have some value in the cache available across the cluster do the following:

Entity newEntity = calculateNewEntityValue();
cacheReplicatedViaCopy.put(newEntity.key, newEntity);

But then you also have: 

expireAfterWrite

which seems like a weird option for a cache replicated via copy. Usually you use such a cache when a value can be calculated once and only once. You then have to make sure that all nodes (existing ones, and any node that joins the cluster in the future) get the expected value via a cacheReplicatedViaCopy.put sent from one of the nodes that knows the value. With nodes going online and offline this is really tricky to get right: you will need to handle the whole node lifecycle, use cluster locks for synchronisation, etc.
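For completeness, option (2) might look roughly like this against the Atlassian Cache 2 API (a sketch, assuming getCache accepts a null loader as described above; the cache name is hypothetical and no expiry is set):

```java
// Sketch only: a copy-replicated cache with no loader and no expireAfterWrite.
// A miss simply returns null; values exist only after an explicit put(),
// which is then copied to the other nodes.
CacheSettings settings = new CacheSettingsBuilder()
        .remote()
        .replicateViaCopy()
        .build();
Cache<String, Entity> cacheReplicatedViaCopy =
        cacheManager.getCache("com.example.copyCache", null, settings);
```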

Please go with option (1): a replicateViaInvalidation cache with a loader that gets the value from a shared resource (database, shared home, ...).

mac

 

Our apps actually need both options. Most of our caches get their values from an external system, so I think we can implement option 1 for most of the cases.

 

Our app does synchronization between Jira and an external system.

E.g., we allow the user to configure a WPF to create an entity in the external system. In the external system, the user can configure a PF to create a Jira issue as well. In this case, if the user configures the post function in both Jira and the external system, it will cause an infinite creation loop.

For some reason, there is no way for us to indicate that the entity in the external system was created from a Jira WPF. We implemented some logic around the Atlassian cluster lock and caching to prevent the creation loop. In this use case, we need the cache to be available across the cluster, and the cache needs to be accurate. (I can't be sure that caching is the right data source for this requirement; please advise.)
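For what it's worth, the cluster-lock part of such a guard might be sketched like this (hypothetical names throughout; ClusterLockService comes from the Atlassian Beehive API, and the marker cache would be an invalidation-replicated cache with no loader):

```java
// Sketch: prevent the Jira -> external -> Jira creation loop with a
// cluster-wide lock plus a "sync in progress" marker visible on all nodes.
// clusterLockService, syncMarkerCache and externalSystem are hypothetical.
ClusterLock lock = clusterLockService.getLockForName("com.example.sync." + issueKey);
lock.lock();
try {
    if (syncMarkerCache.get(issueKey) == null) {   // not already being synced
        syncMarkerCache.put(issueKey, "1");        // mark before creating
        externalSystem.createEntity(issueKey);     // incoming events for this key are skipped
        syncMarkerCache.remove(issueKey);          // clear the marker afterwards
    }
} finally {
    lock.unlock();
}
```

Whether the marker cache is a safe source of truth here depends on whether the marker can expire or be lost between the put and the external system's callback, which is exactly the accuracy concern raised above.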

 

You provided a very clear explanation of the caching implementation. I think I should be able to proceed. I wish I could find these implementation details in the Atlassian Cache 2 documentation.

 

 

Thanks @Maciej Swinarski 

Thx, I will try to update the documentation: https://jira.atlassian.com/browse/JRASERVER-69476

Not sure I fully understand, but could the external system just check whether the issue already exists before creating it? That would break the loop. The same goes for Jira creating the entity.

Could you give some details on how the current logic using clusterlocks and caching is preventing it?

mac

