I have a problem with passing an environment variable from one step to a step that triggers another pipeline in another repo. After extracting VERSION I need to trigger the other pipeline.
I think it's a common case to pass a variable from one step to the next, but there's no information on how to do it. In the trigger-pipeline step I can't run any script commands before the pipe triggers the other pipeline. Pipeline example:
- step:
    name: upload to test
    image:
      name: ci:latest
    script:
      - bin=`ls | grep .bin`
      - export VERSION=${bin%.*}
      - aws s3 sync . s3://somebacketname/test/
- step:
    name: testing
    trigger: manual
    script:
      - pipe: atlassian/trigger-pipeline:4.1.5
        variables:
          BITBUCKET_USERNAME: $USER
          BITBUCKET_APP_PASSWORD: $PASSWORD
          REPOSITORY: 'test'
          BRANCH_NAME: 'master'
          CUSTOM_PIPELINE_NAME: 'critical-test'
          WAIT: 'true'
          PIPELINE_VARIABLES: >
            [{
              "key": "DESIRED_VERSION",
              "value": "$VERSION"
            },
            {
              "key": "DURATION",
              "value": "15"
            }]
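The suffix-stripping used to derive VERSION above can be checked in isolation; a minimal shell sketch, with a hypothetical filename standing in for the real .bin artifact:

```shell
# Hypothetical filename; the real one is produced by the build
bin="myapp-1.2.3.bin"
# ${bin%.*} strips the shortest trailing ".*" match, i.e. the ".bin" extension
VERSION=${bin%.*}
echo "$VERSION"   # prints myapp-1.2.3
```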
Hi Andrey,
I'm afraid that it is not possible to pass environment variables that you define in one step to the next steps, which is why the value for VERSION is not available in the second step.
Ways to work around this issue would be:
1) Execute the pipe in the same step where the variable is created.
2) Write the value of this variable to a file, define that file as an artifact, and source the file in the next step.
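The second workaround can be sketched as a single shell session; the value 1.2.3 is a hypothetical stand-in for the real version string:

```shell
# Step 1: persist the variable to a file that will be declared as an artifact
echo "export VERSION=1.2.3" >> build.env

# A new step starts with a fresh environment; simulate that here
unset VERSION

# Step 2: restore the variable from the artifact file
source build.env
echo "$VERSION"   # prints 1.2.3
```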
You can check the first comment/reply in the feature request we have about this, https://jira.atlassian.com/browse/BCLOUD-15849; it includes an example of how to do this.
Please feel free to let me know if you have any questions.
Kind regards,
Theodora
@Theodora Boudale
1) How? With curl? If so, what is the native method to trigger other pipelines?
2) How can I extract artifacts or write any script line in the trigger-pipeline step :/ ?
Hi Andrey,
When you define a file as an artifact in one Pipelines step, the file becomes available for the next steps. This can be useful for files that are generated during the build. (If you want to read more about artifacts you can check this documentation page).
If you'd like to try the 2nd workaround I mentioned, the part of the yaml file that you posted here could be modified as follows:
- step:
    name: upload to test
    image:
      name: ci:latest
    script:
      - bin=`ls | grep .bin`
      - echo export VERSION=${bin%.*} >> build.env
      - aws s3 sync . s3://somebacketname/test/
    artifacts:
      - build.env
- step:
    name: testing
    trigger: manual
    script:
      - source build.env
      - pipe: atlassian/trigger-pipeline:4.1.5
        variables:
          BITBUCKET_USERNAME: $USER
          BITBUCKET_APP_PASSWORD: $PASSWORD
          REPOSITORY: 'test'
          BRANCH_NAME: 'master'
          CUSTOM_PIPELINE_NAME: 'critical-test'
          WAIT: 'true'
          PIPELINE_VARIABLES: >
            [{
              "key": "DESIRED_VERSION",
              "value": "$VERSION"
            },
            {
              "key": "DURATION",
              "value": "15"
            }]
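As a sanity check, you can confirm the substituted PIPELINE_VARIABLES JSON is valid before the pipe runs; a minimal sketch, assuming python3 is available in the build image and using a hypothetical version value:

```shell
# Hypothetical value; in the real step this comes from "source build.env"
VERSION=1.2.3

# Build the JSON the pipe will receive after shell substitution of $VERSION
PIPELINE_VARIABLES="[{\"key\": \"DESIRED_VERSION\", \"value\": \"$VERSION\"}, {\"key\": \"DURATION\", \"value\": \"15\"}]"

# Fails with a non-zero exit code if the JSON is malformed
echo "$PIPELINE_VARIABLES" | python3 -m json.tool
```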
Kind regards,
Theodora