Since last week I've been facing an issue when merging any pull request: it takes much longer than it used to.
It behaves like this:
I need to refresh the page and then press "Merge" again. Sometimes it prompts me to merge, but when I hit "Merge" it says "This pull request is already closed."
Hello and welcome to the community.
Our engineering teams are in the process of migrating many of our core services onto new infrastructure. As part of this migration, we are aware that certain operations, including those that require significant file system I/O, may perform more slowly than usual.
It is helpful to realize that pull request merges are an asynchronous operation. This means that clicking “Merge” triggers work in the background that merges the changes from your pull request into the destination branch. In fact, once you see the message “Merge in progress” on the page, you can safely navigate away. When you revisit the pull request a few minutes later, it will be merged.
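Because the merge completes in the background, automation that needs the merged result (for example, a script that pulls the destination branch afterwards) can poll the pull request via Bitbucket Cloud's REST API, where the `state` field changes from `OPEN` to `MERGED` once the background task finishes. Here is a minimal sketch; the workspace, repository, PR id, and access token are placeholders you would supply, and the timeout/interval values are arbitrary assumptions:

```python
import json
import time
import urllib.request

API = "https://api.bitbucket.org/2.0/repositories/{workspace}/{repo}/pullrequests/{pr_id}"


def pr_state(pr_json):
    # Bitbucket Cloud reports pull request state as OPEN, MERGED,
    # DECLINED, or SUPERSEDED.
    return pr_json.get("state")


def wait_for_merge(workspace, repo, pr_id, token, timeout=600, interval=15):
    """Poll until the PR leaves the OPEN state or the timeout elapses."""
    url = API.format(workspace=workspace, repo=repo, pr_id=pr_id)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"}
        )
        with urllib.request.urlopen(req) as resp:
            state = pr_state(json.load(resp))
        if state != "OPEN":
            return state  # MERGED (or DECLINED/SUPERSEDED)
        time.sleep(interval)
    raise TimeoutError(f"PR #{pr_id} still merging after {timeout}s")
```

Polling like this avoids the refresh-and-click-again loop described above: once `wait_for_merge` returns `"MERGED"`, it is safe to pull the destination branch.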
While merges may take longer than normal over the next few days, rest assured they are still working! We will update our status page if our monitoring systems ever detect that merges are actually failing to complete successfully.
I wanted to share the following blog post from our head of engineering regarding the infrastructure changes in Bitbucket Cloud, the issues that came with it, and what our plan is:
This blog post, among other things, explains why merge tasks take longer after our infrastructure migration. We made merge tasks run asynchronously so your team isn't blocked from doing other activities on our platform while PRs are being merged and can navigate away from the page. In the meantime, our primary goal is to continue working on improvements and to keep identifying and removing the bottlenecks that are causing delays.
Our team is aware that slower merge times are having an impact on our customers and is working tirelessly on multiple initiatives to identify and eliminate the bottlenecks that are contributing to these delays. We are deploying small improvements daily.
At this point, it's hard to share a timeline for when we anticipate merge times to improve, but rest assured, our team is treating this as a top priority.
Thanks for the update, Theodora. Unfortunately, making it an async process doesn't help at all. Our engineers often sit idle waiting for pipelines triggered by the merge, or waiting for a merge operation to complete so they can pull master and create a new branch.
When it takes 5+ minutes for a merge to happen, it reduces the productivity of our engineering staff and makes it less likely they will submit smaller, more consumable PRs instead of large, unwieldy ones where mistakes may be overlooked.
I think a lack of empathy for, and understanding of, how users consume the service is minimizing the perceived impact and severity of this issue internally at Atlassian. Let me be clear: your customers are losing time and money due to this problem. Will this issue alone be enough for some organizations to switch to another provider? Perhaps... it definitely makes it harder to defend not using the current leading name in the industry.
Above all I'd like to very sincerely apologize for the frustrations our product is clearly causing you and your team! We don't take this lightly. We are engineers ourselves and we know exactly how important good tooling is for what we do. Please believe me when I say we strive to resolve issues with our product so our customers can be their most productive!
As the blog post linked above summarized, we've had to make some large architectural and infrastructural changes recently, and the journey has turned out bumpier than we expected. The team is working around the clock, split across multiple time zones around the globe, identifying bottlenecks and reducing the negative impact of these changes. Your observations of long merges and your notes on how they affect your teams are what drive us.
Over the past few weeks we have simplified our locking implementation to eliminate unnecessary contention while preserving data consistency; reviewed and pruned the list of pre- and post-receive hooks that run for internal Git operations; optimized the queuing layer of our pull request merge tasks so we can scale the system up when we detect bottlenecks; and reconfigured our infrastructure to scale up more efficiently. Our internal monitoring shows that merge times should now be significantly shorter, though we still see occasional spikes, particularly during peak times, that we continue to debug and work through.
I hope you'll remain our valued customers and stick with us through these times, so we can all enjoy Bitbucket Cloud at its full potential on the other side of this!
Engineering Manager, Atlassian Bitbucket Cloud
Thank you, Katarina, for the thoroughness and transparency of your update. It's a relief to know that engineering resources are focused on this issue and that we can expect things to improve relatively soon. Given the complexity of the undertaking, we couldn't ask for much more from your team.
I very much hope we can make it through these rough times and enjoy Bitbucket Cloud as it is designed to be going forward. If you continue to make large strides like this to close the performance and stability gaps, it shouldn't be an issue.
Thanks again for the update and the tremendous effort.
Beginning on April 4th, we will be implementing push limits. This means that a push over 3.5 GB cannot be completed; if you attempt such a push, it will fail...
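One way to avoid hitting a limit like this is to estimate the size of what you are about to push before pushing. A minimal sketch, assuming you are inside a Git checkout and pushing `HEAD` against an `origin/master` upstream (both placeholders); it packs the outgoing commits locally, which roughly mirrors what the server would receive, and compares the pack size against the limit (assumed here to be binary gigabytes):

```python
import subprocess

PUSH_LIMIT_BYTES = int(3.5 * 1024**3)  # assumed 3.5 GiB; the exact unit is not specified


def estimated_push_size(upstream="origin/master", head="HEAD"):
    """Pack the commits that would be pushed and return the pack size in bytes."""
    # `git pack-objects --revs` reads the revision range from stdin and
    # writes the resulting pack to stdout.
    pack = subprocess.run(
        ["git", "pack-objects", "--revs", "--stdout", "-q"],
        input=f"{upstream}..{head}\n".encode(),
        capture_output=True,
        check=True,
    ).stdout
    return len(pack)


def within_push_limit(size_bytes, limit=PUSH_LIMIT_BYTES):
    return size_bytes <= limit
```

If `within_push_limit(estimated_push_size())` is false, splitting the work into several smaller pushes (or moving large binaries to Git LFS) keeps each push under the cap.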