
Can I deliberately fail a pipeline through API or any other means

I have a setup in which the service built by my Bitbucket Pipelines is deployed on a machine that is not publicly accessible. It is deployed through a pull-based mechanism in which a daemon periodically checks for new versions of the service and runs them. I want to link the pipeline with the success or failure of running this service, i.e. if the daemon fails to run the service, that should somehow also fail the pipeline. Is this possible by any means?

I checked the Pipelines API, and the only capability it provides is stopping a running pipeline, which doesn't fulfill my use case.

1 answer

mkleint Atlassian Team Oct 09, 2019

Generally, a pipeline fails when any of the commands run in a step exits with a non-zero exit code.

Is there a way for you to check, in the pipeline step, that the service daemon failed and exit 1? Or what exactly is happening in the pipeline when the daemon deploys? Is it somehow waiting for the daemon to finish, or is it more of a fire-and-forget process where you want to change the result of the pipeline ex post, after it has actually finished?
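As a minimal sketch of the "exit 1" idea: a step script that checks the deployment result and exits non-zero on failure. The `deployment_status` function here is a stand-in; in a real setup it might curl an internal status endpoint or read a file the daemon publishes.

```shell
#!/bin/sh
# Hypothetical check of the deployment result. Replace the body with whatever
# the pipeline step can actually reach (an HTTP endpoint, a shared file, etc.).
deployment_status() {
  # Placeholder: pretend the daemon reported success.
  echo "success"
}

status="$(deployment_status)"
if [ "$status" != "success" ]; then
  echo "Deployment failed (status: $status)" >&2
  exit 1   # any non-zero exit code fails the pipeline step
fi
echo "Deployment succeeded"
```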

It's the latter. The pipeline simply publishes the artifacts somewhere and ends. The daemon runs independently; it pulls the artifacts and deploys them to a certain cluster. Ideally I'd like to link the two: the pipeline shouldn't end until the daemon process completes and reports back whether the deployment failed. But I can't link them directly, as the daemon process runs inside a private network.
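One possible workaround, not raised in the thread and offered only as a sketch: the daemon could report its result to Bitbucket's commit build-status REST API (`POST /2.0/repositories/{workspace}/{repo}/commit/{commit}/statuses/build`). That status appears alongside the pipeline result in the UI, but note it does not change the pipeline's own pass/fail state. The workspace, repo, token, and logs URL below are placeholders.

```shell
#!/bin/sh
# Build the JSON body for a Bitbucket commit build status. "state" is one of
# SUCCESSFUL, FAILED, or INPROGRESS; "key" is an arbitrary stable identifier.
status_body() {
  printf '{"key": "daemon-deploy", "state": "%s", "url": "%s", "description": "Pull-based deployment result"}' "$1" "$2"
}

# POST the status for a given commit. Assumes WORKSPACE, REPO and
# BITBUCKET_TOKEN are set; the logs URL is hypothetical.
report_status() {
  commit="$1"; state="$2"
  curl -sf -X POST \
    -H "Authorization: Bearer $BITBUCKET_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$(status_body "$state" "https://daemon.internal/logs")" \
    "https://api.bitbucket.org/2.0/repositories/$WORKSPACE/$REPO/commit/$commit/statuses/build"
}

# The daemon would call, for example:
#   report_status "$DEPLOYED_COMMIT" FAILED
```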

mkleint Atlassian Team Oct 10, 2019

Theoretically, one option would be a manual step triggered from outside after the daemon process completes. That manual step would query the deployment and pass or fail based on the result. However, AFAIK there is no public API to trigger a manual step.

Also, the pipeline would appear green even if the manual step was never triggered, so the level of trust there is suboptimal. Ideally we would keep such a pipeline in the 'running' state until the callback from outside arrives or times out.

In any case, it's not a feature that Pipelines currently supports, as far as I can tell.
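The "keep the pipeline running until the callback arrives or times out" idea could be approximated today by a final step that polls for the daemon's result, assuming the step can see some location the daemon writes to. A sketch, with a hypothetical result-file path (over HTTP it would be a curl poll instead):

```shell
#!/bin/sh
# Poll for a result file the daemon writes ("success" or "failure"), with a
# deadline. Returns 0 only if the file appears in time and says "success".
wait_for_result() {
  result_file="$1"
  timeout="${2:-600}"    # seconds to wait overall
  interval="${3:-10}"    # seconds between checks
  elapsed=0
  while [ ! -f "$result_file" ]; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "Timed out waiting for deployment result" >&2
      return 1
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  grep -q '^success$' "$result_file"
}

# In the pipeline step (a non-zero exit fails the step):
#   wait_for_result /shared/deploy-result 600 10 || exit 1
```

A timeout counts as failure here, which also addresses the trust problem above: a deployment that never reports back leaves the pipeline red rather than green.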
