I want to run a cold/standby instance of Bitbucket Server for high availability. If there's an issue with the primary instance, we can shut it down and start the secondary instance, which will connect to the NFS mount containing the shared data. We're currently using NFS from a Windows server (yes, it's a little slower than local storage, but it works well), but I was looking into using Amazon EFS instead.
When looking through the Atlassian docs, the only mention of EFS I can find is "You can't use Amazon Elastic File System (EFS) for Bitbucket Server's shared home directory.", but I don't see a stated reason why EFS can't be used in place of an NFS share.
Is there a particular/documented reason why EFS cannot be used if we're already using NFS?
EFS and Git don't play well together because of the way Git interacts with the file system. It's known not to be performant for Git.
Does Bitbucket Data Center clustering support Amazon EFS?
Not at this time. As noted in Amazon EFS Performance:
"The distributed nature of Amazon EFS […] results in a small latency overhead for each file operation. Due to this per-operation latency, overall throughput generally increases as the average I/O size increases, because the overhead is amortized over a larger amount of data."
Git typically involves many thousands of small file operations, and the additional latency on each one means Git is not suited to a distributed file system such as EFS. Since EFS's Max I/O mode has even higher latency, it is also not suitable for Bitbucket Data Center.
Source: Bitbucket Data Center FAQ
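The amortization argument quoted from the EFS docs can be made concrete with a back-of-the-envelope model. All numbers below are hypothetical illustrations, not measured EFS figures:

```python
# Model: total time = ops * (per-op latency + bytes_per_op / throughput).
# Fixed per-operation latency dominates when operations are small and numerous.

def total_time(ops, bytes_per_op, latency_s, throughput_bps):
    """Time to move ops * bytes_per_op bytes, paying a fixed latency per operation."""
    return ops * (latency_s + bytes_per_op / throughput_bps)

LATENCY = 0.003       # assumed 3 ms per file operation on a networked file system
THROUGHPUT = 100e6    # assumed 100 MB/s sustained throughput

# Git-like workload: 100,000 small loose objects of ~4 KB each.
git_like = total_time(100_000, 4_096, LATENCY, THROUGHPUT)

# Roughly the same total data moved as 40 large 10 MB files.
large_io = total_time(40, 10_000_000, LATENCY, THROUGHPUT)

print(f"100k x 4KB ops: {git_like:.1f} s")  # latency-dominated
print(f"40 x 10MB ops:  {large_io:.1f} s")  # throughput-dominated
```

With these assumed numbers the small-file workload spends almost all its time on per-operation latency, while the large-file workload amortizes it away, which is exactly the behavior the FAQ describes for Git on EFS.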
AWS announced an increase in read operations per second for EFS: https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-elastic-file-system-announces-increase-in-read-operations-for-general-purpose-file-systems/
Would it be suitable for Git operations now?
From my read of the announcement, I don't believe these changes in AWS will change our recommendation to not use EFS. While EFS increased the limit in read and write operations per second, it does not change (from what I can tell) the latency overhead for each operation. That latency overhead per operation is what will cause slow Git performance.
Isn't the main issue with Git and EFS that there's a bit of latency on each request, and since native Git touches a huge number of tiny files, each piece of latency adds up? Under LFS, while there might still be some latency, with fewer requests going to the file system, wouldn't it avoid the death-by-a-thousand-papercuts issue?
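The LFS intuition above is about reducing the number of file operations rather than their latency, and the same kind of arithmetic illustrates it. The file counts and latency here are hypothetical assumptions, not benchmarks:

```python
# Hypothetical comparison: pure latency cost of touching many small files
# vs the same content tracked as a handful of large LFS objects.

PER_OP_LATENCY = 0.003  # assumed 3 ms per file operation on a networked FS

def latency_cost(num_file_ops, latency_s=PER_OP_LATENCY):
    """Total fixed latency paid across all file operations, ignoring transfer time."""
    return num_file_ops * latency_s

native = latency_cost(50_000)  # e.g. 50k small files touched during a checkout
lfs = latency_cost(200)        # same payload stored as ~200 large LFS objects

print(f"native: {native:.0f} s of pure latency")
print(f"lfs:    {lfs:.1f} s of pure latency")
```

Under these assumptions LFS cuts the fixed latency bill by orders of magnitude, though ordinary Git metadata operations (refs, indexes, loose objects) on the shared home would still pay the per-operation cost.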
AWS announced that EFS now lets you drive up to 3x higher read throughput on your file system, but the increase applies only to read operations.
This is an interesting discussion, as we are currently setting up Bitbucket Data Center with two instances on AWS.
If not Amazon EFS, what is the recommended approach for sharing files on AWS?
I haven't seen good documentation on this specifically for the AWS deployment Bitbucket proposes.
Can someone help?
I asked a related question about Amazon FSx for NetApp ONTAP. Will the FSx for ONTAP service's performance meet the requirements for an HA Bitbucket environment?