I am assuming you want to configure both instances to connect to a single shared database, right? Otherwise they would not share review data, etc.
I don't think it is a good idea to keep a hot failover server connected to the same database. That would result in duplicate emails being sent, with review reminders for example. Also, some of the sequences, like the review perm id generator, are kept in memory, so keeping two instances connected to the same database would result in review perm id collisions. What about keeping it as a cold failover server instead? You can keep it fully configured and ready to start, but not actually started until the primary server crashes.
Finally, you also need to ensure the file system is synchronised between the two servers. Bear in mind that NFS is not supported, so you may want to set up some rsync synchronisation to run periodically from the primary node to the failover one. And obviously such synchronisation would need to be run again if the primary server crashes, assuming you can still access its file system.
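A periodic rsync like that could be scheduled from the primary node with something along these lines. This is only a sketch: the schedule, paths, and standby hostname are placeholders, and you should confirm which directories your instance actually needs mirrored.

```shell
# Hypothetical crontab entry on the primary node: mirror the instance
# directory to the cold standby every 15 minutes.
# -a preserves permissions/timestamps, -z compresses over the wire,
# --delete mirrors removals on the standby. Paths and host are placeholders.
*/15 * * * * rsync -az --delete /opt/fisheye/instance/ standby-host:/opt/fisheye/instance/
```

After a primary crash you would run the same rsync once more by hand before starting the standby, assuming the primary's file system is still reachable.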
Hope that helps,
Thanks for your thoughts. We currently have a setup similar to what you described.
The secondary instance is cold, and we run an rsync between the two file systems to keep the DATADIR in sync. But for the last few months we have seen the data on the primary getting corrupted (see: https://support.atlassian.com/servicedesk/customer#fsh/problem-report-13977).
Once we stopped the rsync, the error did not return.
Theoretically, we know that rsync should not change, lock, or corrupt the source at all.
On the other hand, it is also true that rsync-ing the cache of a running FishEye instance may leave the secondary with a cache in an unstable state that we may not be able to fully recover from, since the cache carried over (while FishEye was still running on the primary) may be in some "in-process" state.
So considering the above scenarios/conditions, we are looking for an alternative way to handle resiliency.
Your thoughts?
INST_FOLDER/cache/globalfe is generated automatically from the existing repository cache files during startup. You can delete INST_FOLDER/cache/globalfe before starting up the 2nd FishEye/Crucible to let it regenerate.
INST_FOLDER/cache/cruidx is the cache file for Crucible reviews. It is faster to index than the FishEye repository. You can just reindex it at Administration > Global Settings > Crucible after the 2nd FishEye/Crucible is started.
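The first step could look like this. It is a minimal sketch: the instance path is a placeholder for your actual installation, and the start command will depend on how you run FishEye/Crucible.

```shell
# Placeholder path to the second instance's directory; adjust for your setup.
INST_FOLDER=${INST_FOLDER:-/opt/fisheye2/instance}

# Remove the repository cache; it is regenerated on the next startup.
# rm -rf succeeds even if the directory is already gone.
rm -rf "$INST_FOLDER/cache/globalfe"

# Then start the second instance, e.g.:
#   $FISHEYE_HOME/bin/start.sh
```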
By the way, you can use this REST API to check the FishEye repository indexing state too: https://docs.atlassian.com/fisheye-crucible/latest/wadl/fecru.html#rest-service-fecru:indexing-status-v1:status:repoName
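A sketch of calling that indexing-status resource; the base URL, credentials, and repository name here are placeholders for your own values.

```shell
# Placeholder base URL and repository name; substitute your own.
BASE_URL=${BASE_URL:-https://fisheye.example.com}
REPO=${REPO:-my-repo}

# The indexing-status-v1 resource from the docs linked above.
STATUS_URL="$BASE_URL/rest-service-fecru/indexing-status-v1/status/$REPO"
echo "$STATUS_URL"

# With real credentials, fetch the status (uncomment to run):
#   curl -u admin:password "$STATUS_URL"
```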