I'm looking for a way to implement the following - increment (commit to the repository) the app's version on a successful merge (build from Jenkins). This increment step is part of a Jenkins pipeline, but I ran into a problem when developers merge multiple pull requests in a short period of time, so the build cannot finish. The only solution I see is to have some sort of merge queue, so that Bitbucket would actually merge branches only when there is no ongoing build on the source branch. I'm looking for any kind of advice / guide on how I can achieve that. I couldn't find that functionality or any available plug-ins. Thank you in advance!
Instead of doing your build on the target branch, you could do it on the source branch and then set up your repository to not allow a merge to happen unless there is a successful build. That would avoid the scenario of having multiple pull requests being merged at the same time.
Hi, @Mikael Sandberg!
At the moment that is how my development is set up - I trigger the build when a pull request is opened and check that the build is successful, and then the merge button is enabled. On the merge event the build is triggered on the source branch (the resulting merge commit). My Bitbucket is a corporate setup, so I might not have all the options that you are referring to, but at the moment I don't see any that would say something like "successful build on source branch". Please confirm if that is what you meant.
Also, even if I resolve it this way I would still look for some queue implementation, as our team really loves merging a lot during the day, so checking back and forth whether the merge button is enabled or not would be really time-consuming and distracting.
Once you do a merge there is no option to roll it back in Bitbucket, and I do not think there is an app that allows you to check if the target branch is being used in a build or not to prevent the merge.
How do you handle builds that fail on the target branch? Are you rolling back the merge from the source branch? If so, please be aware that Git does not remove that merge from history, as explained by Linus Torvalds:
Reverting a regular commit just effectively undoes what that commit did, and is fairly straightforward. But reverting a merge commit also undoes the _data_ that the commit changed, but it does absolutely nothing to the effects on _history_ that the merge had.
So the merge will still exist, and it will still be seen as joining the two branches together, and future merges will see that merge as the last shared state - and the revert that reverted the merge brought in will not affect that at all.
So a "revert" undoes the data changes, but it's very much _not_ an "undo" in the sense that it doesn't undo the effects of a commit on the repository history.
So if you think of "revert" as "undo", then you're going to always miss this part of reverts. Yes, it undoes the data, but no, it doesn't undo history.
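The quoted behaviour is easy to see in a throwaway repository. A minimal sketch (all names are made up; `git revert -m 1` picks the first parent as the mainline):

```shell
#!/bin/sh
# Demonstration: reverting a merge undoes the data, not the history.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev
echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb feature
echo feature >> file.txt && git commit -qam "feature work"
git checkout -q -                       # back to the original branch
git merge -q --no-ff -m "merge feature" feature
# Revert the merge: -m 1 keeps the first parent (the target branch) as mainline.
git revert -m 1 --no-edit HEAD
cat file.txt                            # data change is undone: only "base" remains
git log --oneline --graph               # but the merge commit is still in history
```

Note how `file.txt` is back to its pre-merge content, yet the merge commit still joins the two branches - exactly the trap described above if you later try to re-merge `feature`.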
So, here is how I would do it. In Jenkins, instead of running the build on the source branch and then later on the target branch after the merge, why not merge the source branch into the target and then run the build? The merge is done locally in the repository that Jenkins is using, so you can simply discard the merge after the build is done.
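A rough sketch of such a job, using a throwaway repository in place of the real Bitbucket clone (branch names and the build command are placeholders, not the real pipeline):

```shell
#!/bin/sh
# Sketch of the "trial merge" idea: merge the PR locally, build, then discard.
set -e
workdir=$(mktemp -d) && cd "$workdir"

# --- stand-in for the real repository (normally cloned from Bitbucket) ---
git init -q repo && cd repo
git config user.email ci@example.com && git config user.name ci
echo 'echo building' > build.sh && chmod +x build.sh
git add build.sh && git commit -qm "base"
git checkout -qb feature
echo 'echo extra step' >> build.sh && git commit -qam "feature work"
git checkout -q -                               # back to the target branch

# --- what the Jenkins job would do ---
git merge -q --no-ff -m "trial merge (workspace only)" feature
./build.sh                     # the job fails here if the merged result breaks
git reset -q --hard HEAD~1     # discard the trial merge; nothing was pushed
```

Because the trial merge never leaves the Jenkins workspace, there is nothing to revert on the server if the build fails.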
Thank you very much for bringing up this case, I will definitely consider it in the future to improve the pipeline. It's just that I haven't encountered build failures on the target branch yet, so your comment made me realize it can happen.
The reason I have two builds is the following - the build on the source branch lets the team be sure everything is okay with this particular pull request (it does a couple of analyses as well, among other things) and adds some confidence that it can be merged into the target branch without bringing in any problems, while the build on the target updates some metrics of the artifacts, sends them to testers... and updates the version, which in my vision should reflect the pull requests being made. The versioning should make it easy to know how many features have been added (merges have been made) and to coordinate where to look when there is a need.
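Concretely, the version-bump step I have in mind looks roughly like this (the `VERSION` file name and the major.minor.patch scheme are just illustrative choices, not what anyone here prescribed):

```shell
#!/bin/sh
# One possible version-bump step for the target-branch build.
set -e
workdir=$(mktemp -d) && cd "$workdir"
git init -q repo && cd repo
git config user.email ci@example.com && git config user.name ci
echo "1.4.2" > VERSION && git add VERSION && git commit -qm "base"

# Bump the patch component (one merge = one increment) and commit it,
# so the version effectively counts merges landed on the branch.
old=$(cat VERSION)
patch=${old##*.}
new=${old%.*}.$((patch + 1))
echo "$new" > VERSION
git commit -qam "ci: bump version to $new [skip ci]"  # [skip ci] avoids retriggering the job
echo "$new"
```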
I am curious to know whether what I am trying to do is a very rare case, if there is no implementation of it in Bitbucket. Can you please share how you would achieve it (I am talking about incrementing the version on each pull request being merged)? Happy to explain more or add details of why we need this, if it is still not clear.
The local merge in Jenkins before the actual merge on the server is a very good idea to avoid some nasty problems in the future. Thank you again for the tip.
I do like the idea! I would really appreciate it if you could also suggest how to achieve it in the best way given the Bitbucket environment. As I see it, the merge button becomes useless in my case. I could add a button using the "Pull request notification" add-on that triggers a "merge job", so to say, which would merge it locally in the Jenkins workspace (as you said earlier) and, if everything goes well, perform the actual merge. Please confirm if that is what you meant in your suggestions.
Off topic, but actually following the title of my question - is there really no need for such a thing as a merge queue? It felt natural to me. I suppose big projects use some other kind of versioning strategy, if this issue is not encountered.
Yes, if you automate the merge after a successful build the merge button will only be for show. I would use the REST API to trigger the merge in Bitbucket so your pull request gets updated. The only scenario when this would not work would be if the pull request is not ready to be merged even if the build is successful, but that also depends on how you use pull requests.
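For the REST call, a sketch against the Bitbucket Server / Data Center API (Cloud uses a different 2.0 endpoint; all host, project, repo, and credential values below are placeholders, and `PR_VERSION` is the pull request's `version` field, which the merge endpoint requires so you don't merge a PR that changed since the build started):

```shell
#!/bin/sh
# Sketch: merge a pull request via the Bitbucket Server REST API.
set -e
BASE_URL=${BASE_URL:-https://bitbucket.example.com}
PROJECT=${PROJECT:-PROJ}
REPO=${REPO:-my-repo}
PR_ID=${PR_ID:-42}
PR_VERSION=${PR_VERSION:-0}

url="$BASE_URL/rest/api/1.0/projects/$PROJECT/repos/$REPO/pull-requests/$PR_ID/merge?version=$PR_VERSION"

if [ "${DRY_RUN:-1}" = "1" ]; then
    # Print the request instead of sending it (no real server in this sketch).
    echo "POST $url"
else
    curl -sf -X POST -u "$BITBUCKET_USER:$BITBUCKET_TOKEN" \
         -H "X-Atlassian-Token: no-check" "$url"
fi
```

If the PR was updated after the build started, the stale `version` makes the merge request fail, which is exactly what you want in that race.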
The way we did versioning at my previous job was triggered from Jenkins, and not based on pull requests in Bitbucket. We did run builds on pull requests too, but that was just as a sanity check to make sure that the build was successful before it was merged into our integration branch.