
Multiple Pipelines for multi-project monorepo

Jon Boerner July 9, 2018

Hi all,

We have a repository that has a few related but distinct projects in multiple subdirectories (e.g. a directory for some ETL-related work based on Scala, another for a webapp backend based on Kotlin, another for the frontend using webpack).

Is it possible to configure multiple pipelines for each of the subdirectories? I took a wild guess and created a `bitbucket-pipelines.yml` file in one of the subdirectories, but it didn't seem to create a pipeline. 

Is there a recommended approach for doing this? For similar use cases on Jenkins in the past, when you create a pipeline you can specify where the Jenkinsfile you would like to use is located (i.e. point it at a subdirectory), which lets you create multiple jobs. Is there something equivalent here?

Thanks!

Jon

3 answers

6 votes
StannousBaratheon
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
July 13, 2018

Hi @Jon Boerner

It's not possible to configure multiple pipelines for different subdirectories, but there are a few workarounds that might work for your use case:

  1. Configure multiple parallel steps, with each one running the build for a specific subdirectory (see the sketch after this list). Note that every step will of course run on every commit, but this might be acceptable. https://confluence.atlassian.com/bitbucket/parallel-steps-946606807.html
  2. Adopt a branch workflow whereby each subdirectory project is developed on a different branch. This will allow you to configure a branch pipeline for each subdirectory: https://confluence.atlassian.com/bitbucket/branch-workflows-856697482.html
  3. As part of the build script itself, use the Bitbucket API to determine which subdirectory build script(s) to run based on the files that were changed in a given commit. https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D/diff/%7Bspec%7D
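
For illustration, option 1 might look something like the following sketch; the directory names and build commands are guesses based on the layout Jon described, not taken from his repository:

```yaml
# Hypothetical bitbucket-pipelines.yml with one parallel step per subdirectory.
pipelines:
  default:
    - parallel:
        - step:
            name: ETL (Scala)
            script:
              - cd etl && sbt test
        - step:
            name: Backend (Kotlin)
            script:
              - cd backend && ./gradlew build
        - step:
            name: Frontend (webpack)
            script:
              - cd frontend && npm ci && npm run build
```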

Regards

Sam

Michael Brizic June 8, 2019

@StannousBaratheon 

I like option #3 but am curious: since the "pipeline", i.e. Bitbucket's execution of my build script as a step, needs to run in order to check whether to run the actual build of the subproject, isn't the Bitbucket pipeline in essence already running? In other words, I still have to run at least a portion of the pipeline to determine whether running the whole pipeline on a particular subproject is necessary. Is this understanding correct? Lastly, in order to detect whether to run the whole build pipeline on a subproject, it will use build minutes simply performing that check, whether or not a full build happens. Is that correct?

StannousBaratheon
Atlassian Team
June 10, 2019

I imagine you'd always want a pipeline to run; the first part of the script just determines which build script to execute. For example, you might have a project laid out like this:

/
|
|-- bitbucket-pipelines.yml <- determine which build script to run in here
|
|-- Project A
|   |
|   |-- build.sh
|
|-- Project B
    |
    |-- build.sh

In other words, there isn't a pipeline per project, just a single pipeline that determines which build script to run. It's an imperfect workaround but possibly suitable for some users. I personally prefer option #2 as it allows you to track a subproject's build status by branch.
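
A minimal sketch of such a root bitbucket-pipelines.yml, using a plain git diff against the previous commit instead of the API call from option #3 (the lower-case directory names are hypothetical, and a real script would need to handle pushes that contain several commits):

```yaml
pipelines:
  default:
    - step:
        name: Build changed projects
        script:
          # List the files changed by the commit that triggered this pipeline.
          - CHANGED=$(git diff --name-only HEAD~1 HEAD)
          # Run a project's build script only if one of its files changed.
          - if echo "$CHANGED" | grep -q '^project-a/'; then (cd project-a && ./build.sh); fi
          - if echo "$CHANGED" | grep -q '^project-b/'; then (cd project-b && ./build.sh); fi
```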

Michael Brizic June 10, 2019

Makes sense. I am also just confirming that, at the point where "determine which build script to run in here" executes, the pipeline is already running, to your point; it's now only a matter of what else to execute, i.e. Project A's build.sh or Project B's. In other words, build minutes are already accruing the moment the pipeline starts, regardless of whether anything is actually built or not.

StannousBaratheon
Atlassian Team
June 10, 2019

That's correct

Chandara Chea July 10, 2020

@StannousBaratheon if you prefer option #2, could you give more clarification? How do you manage deployment to different environments? Does it mean that if we have 3 services and 3 environments, then we need 9 branches, assuming we want automatic deployment from a branch to a cloud environment?

StannousBaratheon
Atlassian Team
July 11, 2020

@Chandara Chea A single pipeline can deploy to multiple environments. For example, if you have 3 environments, your pipeline might consist of 4 steps:

  1. Build your service
  2. Deploy to the test environment
  3. Deploy to the staging environment
  4. Deploy to production

With this in mind, if you have 3 services in a mono-repo that you want to build and deploy separately, you could have 3 branches (one for each service) and a branch pipeline for each branch that performs the build and deploys to all 3 environments as described above.

The documentation on Bitbucket deployments explains how to configure deployment environments for your pipelines: https://support.atlassian.com/bitbucket-cloud/docs/set-up-bitbucket-deployments/


Automatic vs manual deployments are determined by the step's "trigger" property. Steps run automatically by default. If you wish to perform a manual deployment, simply add `trigger: manual` to the relevant deployment step.
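
Putting the four steps and the trigger property together, a branch pipeline for one service might look roughly like this; the service name, scripts, and environment names are illustrative, and the `deployment` keyword ties each step to an environment configured as described in the documentation above:

```yaml
pipelines:
  branches:
    service-a:
      - step:
          name: Build service A
          script:
            - ./service-a/build.sh
      - step:
          name: Deploy to test
          deployment: test
          script:
            - ./service-a/deploy.sh test
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - ./service-a/deploy.sh staging
      - step:
          name: Deploy to production
          deployment: production
          trigger: manual   # require a human to promote to production
          script:
            - ./service-a/deploy.sh production
```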


Bitbucket Pipelines has recently released a new feature called conditional steps that provides a new option for builds in mono-repos. It allows you to specify whether a step should run based on a changeset pattern. With this approach, a single main branch can be used for all services; the pipeline then contains a conditional step for each service that builds the service whenever a file in that service is changed. Please see the following blog post for more information about conditional steps: https://bitbucket.org/blog/conditional-steps-and-improvements-to-logs-in-bitbucket-pipelines
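
For example, a sketch with one conditional step per service on a single main branch (paths and script names are illustrative):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Build service A
          condition:
            changesets:
              includePaths:
                - "service-a/**"   # run only when files under service-a change
          script:
            - ./service-a/build.sh
      - step:
          name: Build service B
          condition:
            changesets:
              includePaths:
                - "service-b/**"
          script:
            - ./service-b/build.sh
```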


I wouldn't necessarily advocate one approach over the other. It's really incumbent on the project teams to understand the pros and cons of each and decide which works best for them.

5 votes
Christian Goudreau July 5, 2021

What you are looking for is:

condition:
  changesets:

https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/#condition


We also have a monorepo setup with multiple projects and subprojects, where the build (and deployment) is triggered depending on which files have been modified.

We use an automatic tagging approach: if a series of tests passes, we tag the build to be deployed or rolled back.

Simplified example: backend.staging.[buildnumber] and frontend.staging.[buildnumber]
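
A sketch of such a tagging step, assuming the pipeline has write access to push tags and using Bitbucket's built-in BITBUCKET_BUILD_NUMBER variable (the step and tag names are illustrative):

```yaml
- step:
    name: Tag backend staging build
    script:
      # Tag the commit so this build can be promoted (or rolled back) later.
      - git tag "backend.staging.${BITBUCKET_BUILD_NUMBER}"
      - git push origin "backend.staging.${BITBUCKET_BUILD_NUMBER}"
```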

We use the same image from one environment to another (test, preprod, prod, userA, etc.) as we promote it through our Kubernetes clusters. All of that is done continuously.


While it works really well, there are some things to consider improving:

  • We can only deploy to an environment one pipeline at a time. Whichever of the backend and frontend Bitbucket builds first will run; the other will be paused. I would prefer to have control over that limitation, since concurrent builds to the same environment are safe in our architecture.
  • Also, I would like to be able to "compose" bitbucket-pipelines files. Ours will become unmanageable at some point :D
Rion Dooley March 10, 2022

Composition would be huge. We are coming from a GitLab environment, and regularly use a library of gitlab-ci files to standardize our builds. While we could handle a lot of that by writing custom pipes, doing so reduces transparency in the build process and adds a lot of overhead: additional repos and pipelines have to be created to accomplish the same thing.

3 votes
jan.cabala January 31, 2020

We were using a monorepo with Gradle as the build tool and Bitbucket Pipelines as the CI tool.


In short, there is one "main" pipeline defined for the master branch and multiple custom pipelines, one per service (a separate project in some folder). The main pipeline starts a shell script which contains the logic for resolving which services (projects) were changed, and triggers the corresponding pipelines via the REST API.


See a setup showcase here:

https://github.com/zladovan/monorepo
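
The trigger call itself is a POST to the Bitbucket pipelines endpoint. A minimal sketch, assuming an app password in BB_USER/BB_APP_PASSWORD and a custom pipeline named build-service-a defined under `pipelines: custom:` (both placeholders):

```yaml
- step:
    name: Trigger service pipelines
    script:
      # Trigger a per-service custom pipeline via the Bitbucket REST API.
      - >-
        curl -s -X POST -u "$BB_USER:$BB_APP_PASSWORD"
        -H "Content-Type: application/json"
        "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pipelines/"
        -d '{"target": {"type": "pipeline_ref_target", "ref_type": "branch",
        "ref_name": "master", "selector": {"type": "custom", "pattern": "build-service-a"}}}'
```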
