
Can I run multiple and parallel pipelines passing parameters defined in the pipeline yaml file?

The requirement is to configure multiple sets of variables (essentially tool version numbers) for use by a standard script within the pipeline, which then builds and tags multiple images, each pinned to a single version of those tools. The image-building code already accepts the variables, so it can build a specific combination and tag the resulting images.

So, when/if we add a new version number of a tool, images for that version will then be built.

This is similar to the build matrix (environment) feature in Travis CI.

A more specific example:

We build an AWS AMI base image. We need images that support PHP + NodeJS + Nginx, one for each combination of the different versions of those tools. When we add a new version to the list of versions, a new series of images should be built. If the base image (Amazon Linux 2) is updated, then all the images should be rebuilt. We expect around 50 combinations.
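For concreteness, the size of the matrix grows multiplicatively with the version lists. A minimal sketch of enumerating the combinations (the version numbers below are illustrative, not our real support matrix):

```python
from itertools import product

# Hypothetical version lists -- illustrative only.
php_versions = ["8.1", "8.2", "8.3"]
node_versions = ["18", "20", "22"]
nginx_versions = ["1.24", "1.26"]

# Each entry is one image to build and tag, e.g. "php8.2-node20-nginx1.26".
matrix = [
    f"php{p}-node{n}-nginx{x}"
    for p, n, x in product(php_versions, node_versions, nginx_versions)
]

print(len(matrix))  # 3 * 3 * 2 = 18 combinations
```

Adding one more PHP version to the list would add 6 more images, which is why maintaining the pipeline file by hand gets error-prone quickly.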

By doing this, we let our developers pick the combination for their projects and have one less thing to worry about.

The current approach is to support only the latest version of everything, so one pipeline can do all of that. Add a scheduled run and we're covered (that's what we're doing at the moment).

But now we need to have multiple and parallel versions available and updateable.

We could write out the pipeline yaml file by hand, but that would almost certainly lead to errors.

Is there anything available within Bitbucket Pipelines that can get us close to a solution?

One suggestion within the company was that if a pipeline could launch multiple parallel builds, that would certainly work for us.

We suspect not, as Bitbucket Pipelines configuration seems to be static.

The next suggestion was to have a pre-processor that, on changes to a committed config of some sort, generates the pipeline yaml file. That would be one way to do things, but it would result in an extra commit if the pre-processor ran in the pipeline. There doesn't seem to be git hook support for Bitbucket Cloud.
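A minimal sketch of that pre-processor idea, assuming a simple committed dictionary of version lists (the file layout, tag format, and `build-image.sh` script name are hypothetical). It emits a bitbucket-pipelines.yml string with one parallel step per combination:

```python
from itertools import product

# Hypothetical committed config; in practice this might live in a versions.json
# file that the pre-processor reads before regenerating the pipeline yaml.
VERSIONS = {
    "php": ["8.2", "8.3"],
    "node": ["20", "22"],
}

def generate_pipeline(versions: dict) -> str:
    """Render a bitbucket-pipelines.yml string with one parallel step per combo."""
    keys = sorted(versions)
    lines = [
        "pipelines:",
        "  default:",
        "    - parallel:",
    ]
    for combo in product(*(versions[k] for k in keys)):
        tag = "-".join(f"{k}{v}" for k, v in zip(keys, combo))
        lines += [
            "        - step:",
            f"            name: build-{tag}",
            "            script:",
        ]
        # Export one variable per tool, then call the existing build script.
        for k, v in zip(keys, combo):
            lines.append(f"              - export {k.upper()}_VERSION={v}")
        lines.append(f"              - ./build-image.sh {tag}")
    return "\n".join(lines) + "\n"

print(generate_pipeline(VERSIONS))
```

Running this locally (or in CI with the generated file committed separately) sidesteps the extra-commit problem only partially, but it does keep the combination list in one reviewable place.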


So. What suggestions/options should I explore further?


1 answer


Hi @Richard Quadling,

It is possible to use parallel steps in Pipelines.

It is also possible to configure multiple custom pipelines and schedule a run for each one.
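A minimal sketch of both ideas in bitbucket-pipelines.yml (the step names, tags, and `build-image.sh` script are illustrative, not from your setup):

```yaml
pipelines:
  default:
    - parallel:
        - step:
            name: build-php8.2
            script:
              - ./build-image.sh php8.2
        - step:
            name: build-php8.3
            script:
              - ./build-image.sh php8.3
  custom:
    nightly-php8.2:   # a custom pipeline; a schedule can be attached in repo settings
      - step:
          script:
            - ./build-image.sh php8.2
```

Steps under `parallel` run at the same time, and each custom pipeline can be scheduled independently from Repository settings > Pipelines > Schedules.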

I'm not sure I understand your requirement with regard to variables, though, so I'd like to ask for clarification to make sure I understand your use case:

  • How do you use variables now? Do you have them defined as repository or workspace variables? Or do you define them in the script section of your bitbucket-pipelines.yml file?

  • You mention that right now you have 1 pipeline doing everything, and I assume it uses the variables you have defined.
    Is the requirement to add e.g. a 2nd and 3rd pipeline that will run the same set of commands, but with different values for these variables?

Kind regards,
