Reusing steps and executing them for all folders in a directory

I have a scenario wherein I have the following folder hierarchy - each folder contains an AWS Lambda to be deployed:

 

root_dir\
  \lambda1
  \lambda2

Common step (YAML Anchor) is:

definitions:
  steps:
    - step: &buildAndDeployLambda
        name: build image
        image:
          name: python:3.7.3
        caches:
          - docker
        services:
          - docker
        script: # build an image
          - ls -ltr
          - pip3 install awscli
          - source ./deployment/config.sh
          ....

Deployment steps for the lambdas are *similar*, with some variable differences, like the lambda name, etc. I can create a YAML anchor with a common step. As YAML anchors don't support overriding variables, I circumvented this by adding a config.sh file containing the lambda name in each folder, which is sourced as a script in the common step.
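The contents of config.sh are not shown in the question; a minimal sketch of such a per-folder file (variable names here are hypothetical) could simply export the values the shared step needs:

```shell
#!/bin/sh
# Hypothetical contents of lambda1/deployment/config.sh:
# exports the per-lambda values that the shared build step reads
# after sourcing this file.
export LAMBDA_NAME="lambda1"
export LAMBDA_RUNTIME="python3.7"
```

Because the file is sourced (not executed), the exported variables become available to all subsequent commands in the same step's script.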

I need assistance with two things:

  1. How can I run individual folders so that each lambda can be deployed manually on its own? How will the shared step get the folder name to cd into in the calls below?
pipelines:
  custom:
    my-lambda-1:
      - step:
          <<: *buildAndDeployLambda
          deployment: dev
          name: Deploying Lambda 1

    my-lambda-2:
      - step:
          <<: *buildAndDeployLambda
          deployment: dev
          name: Deploying Lambda 2

     

  2. How can the above deployments be parallelized, so that multiple such folders are deployed using the common step in parallel?

 

Thanks    

1 answer
Hi Sandeep and welcome to the community!

You could make use of a YAML anchor and deployment environments and variables.

This could work as follows:

1. On the Bitbucket website, open the repo where you have this pipeline, go to its Repository settings > Deployments and create two environments, named e.g. Dev1 and Dev2.

For each one of these environments, create a variable with the name directory, and assign it the respective value that corresponds to the name of the directory in your repo.

E.g. for the environment Dev1, create a variable named directory, with the value lambda1.
For the environment Dev2, create a variable named directory, with the value lambda2.

It's important that this variable has the same name for both environments.

2. The YAML file can look as follows:

definitions:
  steps:
    - step: &buildAndDeployLambda
        name: build image
        image:
          name: python:3.7.3
        caches:
          - docker
        services:
          - docker
        script: # build an image
          - ls -ltr
          - pip3 install awscli
          - cd $directory
          - <more commands here>

pipelines:
  custom:
    my-lambda-1:
      - step:
          <<: *buildAndDeployLambda
          deployment: Dev1
          name: Deploying Lambda 1
    my-lambda-2:
      - step:
          <<: *buildAndDeployLambda
          deployment: Dev2
          name: Deploying Lambda 2

Note that each of the steps has a different deployment environment.
my-lambda-1 uses deployment Dev1, so when the step is executed the $directory variable will have the value lambda1 that you defined for this deployment environment.

my-lambda-2 uses deployment Dev2, so when the step is executed the $directory variable will have the value lambda2 that you defined for this deployment environment.

I'm not sure if and how this would work with your script ./deployment/config.sh, because I don't know what exactly your script is doing.
Does the above work for you, without the need to use an extra script?

If you need more variables that differ per lambda, you can also define additional ones with the same name in each deployment environment, give each a different value depending on the environment, and then use them in your script the way I used $directory above.
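As a sketch of that idea (the variable name $lambda_memory is hypothetical, not from the thread): if both Dev1 and Dev2 define a deployment variable named lambda_memory with different values, the shared step references it once and each pipeline picks up its own value:

```yaml
# Hypothetical: Dev1 and Dev2 each define deployment variables
# named "directory" and "lambda_memory" under
# Repository settings > Deployments, with per-environment values.
# The shared step then needs no per-lambda branching:
script:
  - cd $directory
  - aws lambda update-function-configuration
      --function-name "$directory"
      --memory-size "$lambda_memory"
```

The step body stays identical for every lambda; only the deployment environment chosen at the calling point changes which values are injected.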

 

Regarding your second question and how to parallelize these steps:

Custom pipelines cannot be parallel. If you use the default pipeline or pipelines based on branches, you could use parallel steps. Please note though that the first step of the parallel set cannot be manual.

If you need the steps to be both parallel and manual, the only way to work around this would be to add an extra dummy step as the first one; you can disable clone for that step and have it do a simple echo, e.g.:

pipelines:
  default:
    - parallel:
        - step:
            clone:
              enabled: false
            script:
              - echo "Hello"
        - step:
            <<: *buildAndDeployLambda
            deployment: Dev1
            trigger: manual
            name: Deploying Lambda 1
        - step:
            <<: *buildAndDeployLambda
            deployment: Dev2
            trigger: manual
            name: Deploying Lambda 2

Please note that in this case the first step (which essentially does nothing other than an echo) will get triggered with every commit to the repo. You will need to manually trigger the deployment for the 2nd and 3rd steps, from the Pipelines build that was initiated by the first step.


If you have any questions, please feel free to let me know.

Kind regards,
Theodora

There's an issue with this - it seems to be using the 'deployment' keyword as a hack: we are creating separate 'environments' Dev1/Dev2.../DevN just to represent all the directories. It seems it's because there's no way in Bitbucket Cloud to have variables in YAML anchors and override those variables at the calling point.

The above will work, but it's a lot of hacking to do something simple and obvious. Why can't Bitbucket provide scoped variables in YAML anchors and allow them to be overridden? In that case, at the calling point, one could just override the variable, just like you override the 'deployment' keyword.

Hi @Sandeep.Pathak,

Indeed, I'm afraid that it is not possible at the moment to set variables in YAML anchors that can be overridden. My suggestion is a way to work around this limitation.

We have a feature request in our issue tracker which I believe is what you are asking for:

I would suggest adding your vote and feedback in that feature request. Although it has been closed due to inactivity, our product managers continue to monitor requests (including closed ones), and if there is demand for this feature our team may reconsider.

Kind regards,
Theodora
