Strategy for long-running pipelines

Sander Mol, November 20, 2020

We currently use Bitbucket Pipelines as the infrastructure for our company's GitOps-like platform. There seem to be many 'missing' features that block us from using it completely as our go-to GitOps setup. Here is one of them.


As we use Bitbucket Pipelines to automate our operational services, we sometimes need to run a long-running process. The actual work is mostly delegated to our own services, but the Bitbucket Pipeline still acts as the logging and monitoring shell around the task. Our current use case is "backing up a big database at a specific time every day", a task that takes multiple hours.
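In this setup the pipeline step reduces to a polling loop around the remote job. A minimal sketch of that loop, assuming a hypothetical status command (anything that prints `running`, `done` or `failed`, e.g. an `ssh` or `curl` call to our own server):

```shell
#!/usr/bin/env bash
# Sketch of a pipeline step that only monitors a job running on our own
# server: the pipeline is the logger and monitor shell, not the worker.
poll_job() {
  local check_cmd=$1 interval=${2:-60} status
  while true; do
    status=$("${check_cmd}")
    # Forward each observed status into the build log with a timestamp.
    echo "$(date -u '+%H:%M:%S') backup status: ${status}"
    case "${status}" in
      done)   return 0 ;;   # job finished: the step succeeds
      failed) return 1 ;;   # job failed: the step fails too
    esac
    sleep "${interval}"
  done
}

# In the real step this would be something like:
#   poll_job fetch_status 60
# where fetch_status (hypothetical) queries our server for the job state.
```

Note that the build minutes tick for the whole duration of this loop, even though the pipeline itself does no work, which is exactly the second problem described below.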

There are currently two problems that prevent Bitbucket Pipelines from supporting this common use case:

  • The maximum time a pipeline can run is limited to 2 hours
  • We pay for build minutes, while in actuality all of the processing power is delegated to our own servers
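The first limitation shows up directly in the pipeline configuration: the `max-time` option in `bitbucket-pipelines.yml` is capped at 120 minutes, so a multi-hour task cannot fit in a single step no matter what we configure. A sketch (the pipeline name and script are hypothetical):

```yaml
# bitbucket-pipelines.yml (fragment)
options:
  max-time: 120          # minutes; 120 is the ceiling Bitbucket accepts
pipelines:
  custom:
    nightly-backup:      # run on a schedule
      - step:
          script:
            - ./run-backup.sh   # hypothetical wrapper script
```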

This, to me, indicates that Bitbucket Pipelines is not yet a fully fledged GitOps-oriented platform. It is sometimes advocated as such, but might it be better to draw a clear line around what Bitbucket's responsibility actually is?

We currently have to write our own bash scripts to work around this, essentially building our own CI/CD-like structure (logging and monitoring).
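One shape such a script can take: instead of the build waiting for hours, the backup job writes its result to a log on our server, and a short scheduled pipeline later only inspects that result. A sketch, assuming a hypothetical log file whose last line ends in `OK` or `FAIL`:

```shell
#!/usr/bin/env bash
# Hypothetical 'check' step for a scheduled pipeline: it reads the last
# line of a result log written by the backup job on our own server and
# turns it into a build result. Log path and format are assumptions.
check_last_backup() {
  local log_file=$1 last_line
  last_line=$(tail -n 1 "${log_file}")
  echo "last backup record: ${last_line}"
  case "${last_line}" in
    *OK)   return 0 ;;   # backup succeeded: the step passes
    *FAIL) return 1 ;;   # backup failed: the step (and pipeline) fails
    *)     return 2 ;;   # unrecognised line: treat as an error
  esac
}

# In the pipeline step this would be, e.g.:
#   check_last_backup /var/backups/backup.log   # path is an assumption
```

With this split, the multi-hour backup consumes no build minutes at all; only the short check step does.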

There are many such problems where it just does not work fully. I wonder whether this is actually a use case Bitbucket Pipelines is designed for, and whether we should look for a clear separation of responsibility: what Bitbucket Pipelines is designed for, and what should probably be delegated to something else.


