Is there a way of sharing data between steps in Bitbucket Pipelines?

Bram Wiekens April 19, 2018

I'm currently trying to use Bitbucket Pipelines to deploy a Docker container to Kubernetes. I've read the example that's already out there, but it doesn't completely fit our needs.

The step that builds the container and pushes it to its registry works fine from Pipelines, and so does the deployment step that connects to Kubernetes to create a deployment, though that part isn't complete yet.

Because the build step knows the name of the container, which is constructed by Gradle, it would be convenient to share this information with the deployment step so the kubectl commands can use it. Is there a way of sharing information between these steps?

2 answers

1 accepted

6 votes
Answer accepted
davina
Atlassian Team
April 19, 2018

Hi @Bram Wiekens

A way you can share data between steps is using artifacts, which are configured in your bitbucket-pipelines.yml. You could create a file that saves the data, and then pass it on. Artifacts are currently the only mechanism for passing data between steps.
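
For example, a minimal sketch of what that could look like (the step names and the shared file name are just illustrative):

pipelines:
  default:
    - step:
        name: Build
        script:
          # Write whatever the next step needs into a file...
          - echo "my-app:1.2.3" > image-name.txt
        artifacts:
          # ...and declare it as an artifact (paths are relative to the clone directory)
          - image-name.txt
    - step:
        name: Deploy
        script:
          # The artifact is restored into this step's clone directory
          - IMAGE=$(cat image-name.txt)
          - echo "Deploying $IMAGE"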

Hope this helps!

Bram Wiekens April 19, 2018

Thank you @davina, that certainly helped give some direction for an alternative solution, which now works. I now write a little file during the build process and pass it on as an artifact to the deployment step.

The only issue I had was remembering that artifact paths are relative, so "/data/**" won't work while "data/**" will. Other than that, this is a working solution.
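
For anyone wanting a concrete version of this, here is a sketch of the approach (the Gradle task, image and deployment names are assumptions, not the actual setup; kubectl and its kubeconfig must be available in the deploy step's image):

pipelines:
  default:
    - step:
        name: Build and push
        services:
          - docker
        script:
          # Hypothetical Gradle task that prints the image name the build produced
          - ./gradlew -q printImageName > image.txt
          - docker build -t "$(cat image.txt)" .
          - docker push "$(cat image.txt)"
        artifacts:
          # Relative path; "/image.txt" would not be picked up
          - image.txt
    - step:
        name: Deploy to Kubernetes
        script:
          # Hypothetical deployment/container names; kubeconfig setup omitted
          - kubectl set image deployment/my-app my-app="$(cat image.txt)"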

Deleted user September 1, 2018

Is there an example of how this works anywhere? How do you know the artifact's filename? Or path? Will it be automatically compressed? I don't see this documented anywhere.

Gareth Stephenson March 13, 2019

In your bitbucket-pipelines.yml file, use something like this example:

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - npm install
            - npm run fulltest
            - npm run get-version --silent > ./version.txt
          artifacts:
            - version.txt
      - step:
          script:
            - VERSION=$(cat ./version.txt)

The above configuration runs npm install, runs a full test suite (a custom npm task), then uses get-version (a custom npm task that prints the package version) to write the SemVer number to a file called version.txt.

This file is then stored as an artifact, which lets you read it in the next step (via cat, for example) and store it in a variable to use further down in that step's script.
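
For instance, the second step could use that variable further down its script like this (the tag command is purely illustrative):

- step:
    script:
      - VERSION=$(cat ./version.txt)
      # Illustrative use of the version later in the same script
      - echo "Releasing version $VERSION"
      - git tag "v$VERSION" && git push origin "v$VERSION"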

HigorrSenna September 18, 2021

Very good, thanks for this!

Roland Gritzer January 12, 2022

The problem with that solution is that artifacts are deleted after 14 days. After that you can no longer deploy from that pipeline without starting a new build. In our case this would roll back the version in staging.
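
One possible workaround (just a sketch, assuming an app password stored in secured variables BB_USER and BB_APP_PASSWORD): persist the file somewhere longer-lived than pipeline artifacts, for example the repository's Downloads section via the Bitbucket REST API, and fetch it back in the deployment step.

- step:
    name: Build
    script:
      - npm run get-version --silent > version.txt
      # Upload to the repo's Downloads area so it outlives the artifact retention window
      - curl -sf -u "$BB_USER:$BB_APP_PASSWORD" -X POST "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_FULL_NAME/downloads" -F files=@version.txt
- step:
    name: Deploy
    script:
      # Fetch it back later, even after the pipeline artifact has expired
      - curl -sfL -u "$BB_USER:$BB_APP_PASSWORD" -o version.txt "https://api.bitbucket.org/2.0/repositories/$BITBUCKET_REPO_FULL_NAME/downloads/version.txt"
      - VERSION=$(cat version.txt)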

naveen.kummari April 15, 2022

Hey @davina, imagine I want to pass deployment variables from one step to another: I add them to a text/bash/env file and send it on as an artifact. That sensitive data could be leaked through the artifact. One solution is to encrypt it in one step and decrypt it in the other.

Even if we ignore that, Bitbucket still extracts and shows the contents of the variables in the build log. Is there a way to mark variables received from an artifact as secured, so Bitbucket treats them as secured variables and hides them in the logs? Thanks.
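
For the encryption idea, a sketch of what that could look like with openssl, assuming a secured repository variable ARTIFACT_KEY (this protects the artifact at rest, but anything you echo will still show up in the log):

- step:
    name: Build
    script:
      - echo "DB_PASSWORD=s3cret" > vars.env
      # Encrypt with a key held in a secured repository variable
      - openssl enc -aes-256-cbc -pbkdf2 -pass env:ARTIFACT_KEY -in vars.env -out vars.env.enc
      - rm vars.env
    artifacts:
      - vars.env.enc
- step:
    name: Deploy
    script:
      - openssl enc -d -aes-256-cbc -pbkdf2 -pass env:ARTIFACT_KEY -in vars.env.enc -out vars.env
      # Source the file instead of printing it, to keep values out of the log
      - set -a; . ./vars.env; set +a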

1 vote
Sabin.Ranjit April 26, 2020

Hi,
asking for help!

I'm running into a weird issue where I use artifacts to export two files (a .txt and a .tar) from the first step, and I can't get the .tar in the second step. I can access the .txt file but not the .tar file. In both steps I use different Docker images to process the data.

Could this have anything to do with the Docker image and the artifacts exported in the first step? I'm asking because, if I use a different Docker image in the second step, I can see the .tar file.
