Configuring Minimum Required Successful Builds

Hi, 

I have a project with two builds run against our pull requests. One Bamboo build is very slow, one is very fast.

When I turn on the merge check "minimum number of builds: 1", I can only merge if both builds have passed and have completed. This doesn't match my expectation, which is that it would (at the very least) allow me to merge if I had 1 passing and 1 failing build.

Is there any way to circumvent this behavior without turning off the merge check entirely?

2 answers

1 vote
Lucy Atlassian Team May 17, 2018

Hi Jacob,

This is the intended behaviour. If you look at the dialog where you configure the number of required builds it says:

If there are more builds than specified above, they are all required to be successful in order to merge the pull request.

However, BBS also assumes you'd want to wait for any in-progress builds, and will block a merge on those builds as well. This isn't clear in the messaging, though. I've raised a suggestion to clarify this discrepancy.

Cheers,
Lucy
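
As a hedged sketch (my reading of the behaviour described in this thread, not Bitbucket's actual code), the merge check behaves roughly like this, where the configured minimum is only a floor and every reported build must still be finished and green:

```python
def can_merge(build_states, minimum_successful):
    """build_states: list of "SUCCESSFUL", "FAILED", or "INPROGRESS"
    statuses reported against the pull request's latest commit."""
    successful = sum(1 for s in build_states if s == "SUCCESSFUL")
    # Observed behaviour: the minimum is only a lower bound; any failed
    # or still-in-progress build blocks the merge regardless.
    return (successful >= minimum_successful
            and all(s == "SUCCESSFUL" for s in build_states))
```

So with a minimum of 1, one passing and one failing build still blocks the merge, which is the surprise being discussed here.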

Hi Lucy.

According to your comment, all builds must be successful no matter which number you specify. Is that really expected behaviour?

Is there any way to check only the last build?

Thanks, 

Nataliia


Lucy, I'm dealing with the same thing. If you've got 10 builds, and the first 2 fail, you have to somehow get the first 2 to run again even though the last 8 have been successful. This is not good behavior. 


I really don't get this "feature" either. Only in a picture-perfect world where no build is flaky would you ever want this.

For example, I run a build for a commit (using Jenkins). The build fails because of some misconfiguration, an unavailable 3rd-party server, or whatever. You rerun the build and this time it succeeds.

But since it's the same commit you ran the previous build against, you can't merge...

The name of this "feature" implies the opposite of what it actually does. The description of this feature quite literally reads:

Require at least the specified number of successful builds.

If the number of minimum successful builds is set to 1, that should mean that if there is at least 1 successful build (ideally the most recent, though the feature doesn't seem to take recency into account either), then the merge check is satisfied.

The note in the dialog when configuring the feature reads:

If there are more builds than specified above, they are all required to be successful in order to merge the pull request.

But, the "all required to be successful" part essentially nullifies the core function of the feature (based on its description).

The example by @Justin van der Zee above is the exact same scenario I am dealing with: Jenkins marked the initial build of a commit in a branch as failed due to a condition that had nothing to do with the actual code being built (no code change was required to resolve it), and all subsequent manual builds were marked as successful (five builds of the same commit SHA-1 hash; the first failed, four succeeded), but the PR cannot be merged because of this feature.

Makes no sense.


@digitaljhelms

Was this because you are notifying the Bitbucket instance outside of the Groovy block (or Jenkins config section) that does the actual compiling? A quick fix would be to wrap only the compile step in a try/catch block, set the build status there, and make the API call based on that. That way, if the build fails for any other reason (say it can't connect to a remote machine to download a non-compile dependency, or can't find a file in the workspace, or any other operation not related to compiling and packaging), it will not send a build status to the Bitbucket instance, yet will still fail in Jenkins.
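
The wrap-only-the-compile idea can be sketched in Python (a hedged illustration of the suggestion above, not working Jenkins code; `CompileError` and `notify_bitbucket` are hypothetical stand-ins for the Groovy equivalents):

```python
class CompileError(Exception):
    """Hypothetical marker for a genuine compile/package failure."""
    pass

def run_build(compile_step, notify_bitbucket):
    """Report a status to Bitbucket only for compile outcomes; any other
    failure still fails the build but sends no status at all."""
    try:
        compile_step()
    except CompileError:
        notify_bitbucket("FAILED")  # genuine compile failure: report it
        raise
    # Infrastructure errors (unreachable host, missing workspace file, ...)
    # propagate past this point without any status call, so Bitbucket
    # never records a spurious FAILED for the commit.
    notify_bitbucket("SUCCESSFUL")
```

With this shape, a flaky environment fails the Jenkins job without poisoning the commit's build status on the Bitbucket side.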

Hope this helps.

The way I see this issue, there are a few possible solutions, whether or not you're using Jenkins as the Bitbucket API caller:

1. Have separate build jobs: one used by developers on their branches to simply test-compile code they've just committed, and another that serves solely as an integration build job, performing the API call to Bitbucket and updating the build status.

2. Create a boolean parameter in the build job which flags a particular build as one that will send the API call to the Bitbucket server (easily done in Groovy with two separately defined build methods); with the parameter set to false, no API call is made.
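
Option 2 might look like the following sketch, which gates the call to Bitbucket Server's build-status REST endpoint (`/rest/build-status/1.0/commits/{commitId}`) behind a flag; the server URL, job key, and parameter name here are all illustrative assumptions, and real usage would also need authentication:

```python
import json
import urllib.request

def build_status_request(base_url, commit_sha, state, key, build_url):
    """Construct a POST to Bitbucket Server's build-status endpoint.
    state is "SUCCESSFUL", "FAILED", or "INPROGRESS"."""
    payload = {"state": state, "key": key, "url": build_url}
    return urllib.request.Request(
        f"{base_url}/rest/build-status/1.0/commits/{commit_sha}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def post_build_status(base_url, commit_sha, state, key, build_url,
                      notify=True):
    """Send the status only when notify (the boolean job parameter) is
    set; developer test builds pass notify=False and skip the call."""
    if not notify:
        return None  # no API call: Bitbucket never sees this build
    return urllib.request.urlopen(
        build_status_request(base_url, commit_sha, state, key, build_url))
```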

Either way, this feature is broken from the get-go, in my opinion. One should not have to stack build job upon build job in their infrastructure just to get build metrics reported to Bitbucket and pass integration checkpoints, while losing the ability to triage compile errors on the same remote build machine (Jenkins) and build job (this also makes infrastructure-as-code an exercise in redundant frustration).

Additionally, whoever implemented "Minimum Successful Builds" didn't really think the logic through: the metadata Bitbucket receives from the build includes the build key (the path to the build, which includes the build number) and the commit hash, giving each build status a unique value. That makes it easy to establish the latest build status and use it to allow integration. The fact that every build that reports a status is considered, even when the minimum successful build value is set to 1, means the effective value is n, and completely indeterminate. Frankly, it's just frustrating. It could also be fixed incredibly easily, because, you know, timestamps are a thing too.

I actually find it humorous that whoever developed this feature seems to have been so autocratic as to insist that every build, no matter its state, for every branch, at any time, must always compile! Ah, if only the world were that perfect.
