This is a general question, but specifically I want to use the atlassian/aws-eks-kubectl-run pipe to run this command:

```bash
kubectl get cm -n keptn keptn-domain -o jsonpath='{.data.app_domain}'
```

...and store the output in a variable for use in another pipeline step.
Is this possible? Can you show an example?
Hi @Rob Jahn , I don't think this is possible with the current version of the pipe. We'll try to add such a feature in the near future and let you know.
Yes, something similar is done in the aws-lambda-deploy pipe. It uses the artifacts feature to store the intermediate data between steps. You can see this guide as an example: https://confluence.atlassian.com/bitbucket/deploying-a-lambda-function-update-to-aws-967319469.html.
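As a sketch of that artifacts approach (the step names, file name, and value below are illustrative, not taken from the guide):

```yaml
# Hypothetical bitbucket-pipelines.yml fragment: pass a value between
# steps via an artifact file restored into the clone directory.
pipelines:
  default:
    - step:
        name: Produce value
        script:
          # Write whatever this step computed to a file in the build root
          - echo '{"result": "ok"}' > output.json
        artifacts:
          - output.json
    - step:
        name: Consume value
        script:
          # The artifact is restored here before the script runs
          - cat output.json
```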
Thanks Alex, but I am looking to see if the pipe itself can provide outputs. Using that lambda example, say the lambda returned a JSON string. I would want to put that into a variable to then access and manipulate in a secondary pipeline step, like a Unix command script step.
I'm not sure if this is supported for pipes in general. For the kubectl pipe, I would like to capture the output as a string, for example from a kubectl get. And for my custom pipes, I want to expose outputs for consumption.
@Rob Jahn I don't think it's currently possible to store the output into a variable. The only way is to store the output of the pipe as an artifact and parse that artifact in the subsequent steps.
Is there an example of this, or a code snippet showing what's needed in the pipe code, that you can share or point me to?
I think I know what to do, but is there anything specific about the path of the file to write out? My custom pipe is a Unix shell script.
What I did was create a file in the pipe and use that file name as the artifact. Then, in a subsequent step, read in that artifact file. I don't add a folder path; the file just lives in the root folder with the checked-out code.
See the discussion on this thread, as it covers a similar topic: https://community.atlassian.com/t5/Bitbucket-Pipelines-questions/How-do-I-share-files-between-bitbucket-pipes/qaq-p/1321475
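A minimal shell sketch of that approach (the file name and value here are placeholders, not from an actual pipe):

```shell
#!/bin/sh
# Step 1 -- inside the custom pipe: write the value to a file in the
# repository clone directory (the build root), so Pipelines can pick
# it up as an artifact. The value here is a placeholder.
echo "mydomain.example.com" > app_domain.txt

# Step 2 -- in a subsequent step, after declaring app_domain.txt as an
# artifact: read the file back into a shell variable.
APP_DOMAIN=$(cat app_domain.txt)
echo "$APP_DOMAIN"
```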
@Alexander Zhukov It would be nice to have, in general, a mechanism like the one developed by GitHub Actions:
https://docs.github.com/en/actions/learn-github-actions/variables#passing-values-between-steps-and-jobs-in-a-workflow
In short: there is a file in the filesystem where environment variables are stored, and its path is held in the GITHUB_ENV environment variable. To define a shared variable, you just append it to that file, like:
```bash
echo "NEW_ENV=value" >> $GITHUB_ENV
```
The file can be sourced automatically before the next script runs, and it could be mounted in some way into the pipe to share the values as environment variables.
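That mechanism could be emulated in plain shell steps today, roughly like this (SHARED_ENV_FILE is a made-up name, not a Bitbucket feature):

```shell
#!/bin/sh
# Sketch of emulating the GitHub Actions env-file mechanism between
# shell steps. The file path is an assumption, not a built-in.
SHARED_ENV_FILE=./shared.env

# Producer step: append variables, one KEY=value per line.
echo "NEW_ENV=value" >> "$SHARED_ENV_FILE"

# Consumer step: source the file before running its own script.
set -a                   # export every variable assigned while sourcing
. "$SHARED_ENV_FILE"
set +a
echo "$NEW_ENV"          # prints: value
```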
We had the same issue.
The general problem we had: when deploying to production, we need to fetch the version from the staging Kubernetes cluster and deploy that same version to production.
Sounds easy, eh?
The ideal solution would be:

```yaml
- pipe: atlassian/aws-eks-kubectl-run:1.2.3
  variables:
    CLUSTER_NAME: "dev"
    KUBECTL_COMMAND: -n amplio-staging get pods --selector=app=amplio -o jsonpath='{.items[0].spec.containers[*].image}' > ./version.txt
```
However, it's not possible in bitbucket pipelines without dedicated support in the pipe.
We ended up with:

```yaml
image: bitbucketpipelines/aws-eks-kubectl-run:1.2.3
script:
  - export CLUSTER_NAME="dev"
  - export KUBECTL_COMMAND="get pods --selector=app=amplio -n amplio-staging -o jsonpath='{.items[0].spec.containers[*].image}'"
  - python /pipe.py > version.txt
  - cat version.txt
artifacts:
  - version.txt
```
Instead of using a pipe, we decided to take the pipe's Docker image itself and run the script by hand.
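A later production step can then read the artifact back into a variable, along these lines (the image reference below is a stand-in for whatever the staging query returns):

```shell
#!/bin/sh
# Stand-in for the version.txt artifact written by the staging step.
echo "myrepo/amplio:1.4.2" > version.txt

# Read the image reference back and extract the tag to redeploy it.
VERSION=$(cat version.txt)
TAG=${VERSION##*:}          # strip everything up to the last ':'
echo "Deploying $VERSION (tag $TAG) to production"
```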