Multiple Pipelines for multi-project monorepo

Hi all,

We have a repository that contains a few related but distinct projects in multiple subdirectories (e.g. a directory for some ETL-related work based on Scala, another for a webapp backend based on Kotlin, another for the frontend using webpack).

Is it possible to configure multiple pipelines for each of the subdirectories? I took a wild guess and created a `bitbucket-pipelines.yml` file in one of the subdirectories, but it didn't seem to create a pipeline. 

Is there a recommended approach for doing this? For similar use cases on Jenkins in the past, when creating a pipeline you had the option of specifying where the Jenkinsfile you wanted to use was located (i.e. a subdirectory), so you could create multiple jobs. Is there something equivalent here?

Thanks!

Jon

2 answers

6 votes

Hi @Jon Boerner

It's not possible to configure multiple pipelines for different subdirectories, but there are a few workarounds that might work for your use case:

  1. Configure multiple parallel steps, with each one running the build for a specific subdirectory (a sketch follows this list). Note that every step will of course run on every commit, but this might be acceptable. https://confluence.atlassian.com/bitbucket/parallel-steps-946606807.html
  2. Adopt a branch workflow whereby each subdirectory project is developed on a different branch. This will allow you to configure a branch pipeline for each subdirectory: https://confluence.atlassian.com/bitbucket/branch-workflows-856697482.html
  3. As part of the build script itself, use the Bitbucket API to determine which subdirectory build script(s) to run based on the files that were changed in a given commit. https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Busername%7D/%7Brepo_slug%7D/diff/%7Bspec%7D
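For example, a minimal sketch of option 1, assuming the three subdirectories from your description are named `etl`, `backend`, and `frontend` (adjust the names, images, and build commands to your project):

```yaml
# Sketch of option 1: one parallel step per subdirectory. The directory
# names and build commands are assumptions based on the question; each
# step can also set its own "image" with the right toolchain.
pipelines:
  default:
    - parallel:
        - step:
            name: ETL (Scala)
            script:
              - cd etl && sbt test
        - step:
            name: Webapp backend (Kotlin)
            script:
              - cd backend && ./gradlew build
        - step:
            name: Frontend (webpack)
            script:
              - cd frontend && npm ci && npm run build
```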

Regards

Sam

@StannousBaratheon 

I like option #3 but am curious: since the "pipeline", i.e. Bitbucket's execution of my build script as a step, needs to run in order to check whether to run the actual build of the subproject, isn't the Bitbucket pipeline in essence already running? In other words, I still have to run at least a portion of the pipeline to determine whether running the whole pipeline on a particular subproject is necessary. Is this understanding correct? Lastly, since detecting whether to run the full build on a subproject requires the pipeline to start, it will consume build minutes simply performing that check. Is that correct?

I imagine you'd always want a pipeline to run, the first part of the script just determines what build script to execute. For example you might have a project:

/
|-- bitbucket-pipelines.yml   <- determine which build script to run in here
|
|-- Project A
|   |-- build.sh
|
|-- Project B
    |-- build.sh

In other words, there isn't a pipeline per project, just a single pipeline that determines which build script to run. It's an imperfect workaround, but possibly suitable for some users. I personally prefer option #2 as it allows you to track a subproject's build status by branch.
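A minimal sketch of that root `bitbucket-pipelines.yml`, using a plain `git diff` against the previous commit as a simple stand-in for the Bitbucket diff API mentioned in option 3 (the project names match the tree above):

```yaml
# Sketch only: `git diff HEAD~1` covers single-commit pushes; a push of
# several commits would need a wider diff range or the diff API.
pipelines:
  default:
    - step:
        name: Build changed projects
        script:
          - CHANGED=$(git diff --name-only HEAD~1 HEAD)
          - if echo "$CHANGED" | grep -q '^Project A/'; then (cd "Project A" && ./build.sh); fi
          - if echo "$CHANGED" | grep -q '^Project B/'; then (cd "Project B" && ./build.sh); fi
```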

Makes sense. I am also just confirming that at the point where "determine which build script to run in here" executes, the pipeline is, to your point, already running; it's then only a matter of what else to execute, i.e. Project A's build.sh or Project B's. In other words, build minutes start accruing the moment the pipeline starts, regardless of whether anything is actually built.

That's correct


@StannousBaratheon if you prefer option #2, could you give more clarification? How do you manage deployment to different environments? Does it mean that if we have 3 services and 3 environments, we need 9 branches, assuming we want automatic deployment from branch to cloud environment?

@Chandara Chea A single pipeline can deploy to multiple environments. For example, if you have 3 environments, your pipeline might consist of 4 steps:

  1. Build your service
  2. Deploy to the test environment
  3. Deploy to the staging environment
  4. Deploy to production

With this in mind, if you have 3 services in a mono-repo that you want to build and deploy separately you could have 3 branches (one for each service) and a branch pipeline for each branch that performs the build and deploys to all 3 environments as described above.

The documentation on Bitbucket deployments explains how to configure deployment environments for your pipelines: https://support.atlassian.com/bitbucket-cloud/docs/set-up-bitbucket-deployments/
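Sketching that out, a branch pipeline for one service might look roughly like this (the branch name, script paths, and deploy commands are placeholders):

```yaml
pipelines:
  branches:
    service-a:   # hypothetical branch dedicated to service A
      - step:
          name: Build service A
          script:
            - ./service-a/build.sh
      - step:
          name: Deploy to test
          deployment: test          # ties the step to a deployment environment
          script:
            - ./service-a/deploy.sh test
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - ./service-a/deploy.sh staging
      - step:
          name: Deploy to production
          deployment: production
          script:
            - ./service-a/deploy.sh production
```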

 

Automatic vs manual deployments are determined by the step's `trigger` property. Steps run automatically by default. If you wish to perform a manual deployment, simply add `trigger: manual` to the relevant deployment step.
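For example, to make the production step from the sketch above pause until someone runs it by hand:

```yaml
- step:
    name: Deploy to production
    deployment: production
    trigger: manual   # the pipeline waits here until the step is run manually
    script:
      - ./service-a/deploy.sh production
```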

 

Bitbucket Pipelines has recently released a new feature called conditional steps, which provides a new option for builds in mono-repos. It allows you to specify whether a step should run based on a changeset pattern. With this approach a single main branch can be used for all services: the pipeline contains a conditional step for each service that builds the service whenever a file in that service changes. Please see the following blog post for more information about conditional steps: https://bitbucket.org/blog/conditional-steps-and-improvements-to-logs-in-bitbucket-pipelines
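A sketch of what that can look like, with placeholder service paths:

```yaml
# `condition`/`changesets` limits a step to commits touching the listed paths.
pipelines:
  default:
    - step:
        name: Build service A
        condition:
          changesets:
            includePaths:
              - "service-a/**"   # run only when files under service-a change
        script:
          - ./service-a/build.sh
    - step:
        name: Build service B
        condition:
          changesets:
            includePaths:
              - "service-b/**"
        script:
          - ./service-b/build.sh
```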

 

I wouldn't necessarily advocate one approach over the other. It's really incumbent on the project teams to understand the pros and cons of each and decide which works best for them.


We were using a monorepo with Gradle as the build tool and Bitbucket Pipelines as the CI tool.

 

In short, there is one "main" pipeline defined for the master branch and a custom pipeline per service (each service is a separate project in its own folder). The main pipeline starts a shell script which contains the logic for resolving which services (projects) were changed, and it triggers the corresponding custom pipelines via the REST API.

 

See the setup showcase here:

https://github.com/zladovan/monorepo
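The core of that approach, as a rough sketch (the credential variables, custom pipeline name, and change-detection logic are placeholders; see the linked repository for the full implementation):

```yaml
# Sketch of the main-pipeline-triggers-custom-pipelines pattern.
pipelines:
  branches:
    master:
      - step:
          name: Trigger pipelines for changed services
          script:
            # A real setup first resolves which services changed (see the
            # linked repo), then fires one API call per changed service.
            # $BB_USER / $BB_APP_PASSWORD are assumed repository variables;
            # $BITBUCKET_WORKSPACE / $BITBUCKET_REPO_SLUG are provided by Pipelines.
            - >
              curl -s -X POST -u "$BB_USER:$BB_APP_PASSWORD"
              -H "Content-Type: application/json"
              "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pipelines/"
              -d '{"target": {"type": "pipeline_ref_target", "ref_type": "branch",
              "ref_name": "master", "selector": {"type": "custom", "pattern": "build-service-a"}}}'
  custom:
    build-service-a:   # one custom pipeline per service
      - step:
          script:
            - ./service-a/build.sh
```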
