I'm currently trying to use Bitbucket Pipelines to deploy a Docker container to Kubernetes. I've read the example that's already out there, but it doesn't completely fit our needs.
The step that builds the container and pushes it to its registry works fine from the pipeline, as does the deployment step that connects to Kubernetes to create a deployment, though not yet completely.
The build step knows the name of the container, which is constructed by Gradle, so sharing this information with the deployment step would be convenient: the kubectl commands could then be executed using it. Is there a way to share information between these steps?
One way to share data between steps is to use artifacts, which are configured in your bitbucket-pipelines.yml. You could create a file that saves the data and then pass it on. This is currently the only mechanism for passing data between steps.
Hope this helps!
Thank you @davina, that certainly helped give some direction for an alternative solution, which now works. I now write a little file during the build process and pass it on as an artifact to the deployment step.
The only issue I had was remembering that artifact paths are relative, so "/data/**" won't work while "data/**" will. Other than that, this is a working solution.
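A minimal sketch of what that solution might look like; the Gradle tasks, file name, and deployment/container names here are hypothetical placeholders, not the actual project's configuration:

```yaml
pipelines:
  branches:
    master:
      - step:
          name: Build and push
          script:
            # Hypothetical Gradle tasks: one pushes the image,
            # the other prints the full image name it constructed
            - ./gradlew dockerPushImage
            - ./gradlew -q printImageName > image-name.txt
          artifacts:
            # Relative path, so it is picked up as an artifact
            - image-name.txt
      - step:
          name: Deploy to Kubernetes
          script:
            - IMAGE=$(cat image-name.txt)
            # Hypothetical deployment and container names
            - kubectl set image deployment/my-app my-app="$IMAGE"
```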
Is there an example of how this works anywhere? How do you know the artifact's filename or path? Will it be automatically compressed? I don't see this documented anywhere.
In your bitbucket-pipelines.yml file, use something like this example:

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - npm install
            - npm run fulltest
            - npm run get-version --silent > ./version.txt
          artifacts:
            - version.txt
      - step:
          script:
            - VERSION=$(cat ./version.txt)
The above configuration runs npm install, runs a full test suite (a custom npm task), and then uses get-version (a custom npm task that prints the package version) to place the SemVer number into a file called version.txt.
This file is then stored as an artifact, which lets the next step read it (via cat, for example) and store it in a variable to be used further down in that step's script.
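The read-back in the second step can be sketched in shell; the version number is made up for illustration, and the missing-file guard is an addition that is useful when the artifact did not arrive:

```shell
#!/bin/sh
# Simulate the first step writing the artifact
echo "1.4.2" > ./version.txt

# Second step: fail fast if the artifact is missing
if [ ! -f ./version.txt ]; then
  echo "version.txt artifact missing" >&2
  exit 1
fi

# Read the artifact into a variable for use later in the script
VERSION=$(cat ./version.txt)
echo "Deploying version $VERSION"
```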
Very good, thanks for this!
The problem with that solution is that artifacts get deleted after two weeks. After that, you cannot deploy from that pipeline anymore without starting a new build. In our case this would roll back the version in staging.
Hey @davina, imagine I want to pass deployment variables from one step to another: I add them to a text/bash/env file and pass it on as an artifact. The important data can be leaked through the artifact. One solution is to encrypt it in one step and decrypt it in the other.
Even if we ignore that, Bitbucket still extracts and shows the contents of the variables in the build log. Is there a way to mark variables received from an artifact as secured, so Bitbucket treats them as secured variables and hides them in the logs? Thanks.
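The encrypt-in-one-step, decrypt-in-the-other idea can be sketched with openssl. The passphrase would normally come from a secured repository variable; here ARTIFACT_KEY is a hypothetical placeholder set inline, and the variable name and file names are made up. Note this only protects the artifact at rest; anything the script echoes will still appear in the log:

```shell
#!/bin/sh
# Hypothetical secured repository variable used as the passphrase
ARTIFACT_KEY="example-passphrase"
export ARTIFACT_KEY

# Step 1: write the variables, encrypt the file, then remove the plaintext
# before declaring vars.env.enc as the artifact
printf 'DEPLOY_TARGET=staging\n' > vars.env
openssl enc -aes-256-cbc -pbkdf2 -pass env:ARTIFACT_KEY \
  -in vars.env -out vars.env.enc
rm vars.env

# Step 2: decrypt the artifact and source it without printing its contents
openssl enc -d -aes-256-cbc -pbkdf2 -pass env:ARTIFACT_KEY \
  -in vars.env.enc -out vars.env
. ./vars.env
echo "deployment variables loaded"  # avoid echoing the values themselves
```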
Hi,
Asking for help!
I'm in a weird situation where I use artifacts to export two files (.txt and .tar) from the first step, and I am not able to get the .tar file in the second step. I can access the .txt file but not the .tar file. In both steps I use different Docker images to process the data.
Does the Docker image have anything to do with the artifacts exported in the first step? I ask because, if I use a different Docker image in the second step, I can see the .tar file.
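For reference, a sketch of declaring both files as artifacts (the image and file names are hypothetical). One thing to check: artifact patterns are relative to the clone directory, so a file the first image writes elsewhere (e.g. under /tmp) will not be picked up:

```yaml
- step:
    image: first-image:latest   # hypothetical build image
    script:
      - ./build.sh              # assumed to write report.txt and bundle.tar
                                # into the clone directory
    artifacts:
      - report.txt
      - bundle.tar
- step:
    image: second-image:latest  # hypothetical deploy image
    script:
      - tar -tf bundle.tar      # verify the archive arrived intact
```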