Basically, I don't want later pipeline steps to clone the code again; only the first step should clone the source code, once. Another reason is that if a step clones the source code (instead of using the source from the previous step), the built code will be lost.
I know that Bitbucket Pipelines has an artifacts feature, but it seems to only store some parts of the source code.
The flow is:
Step 1: Clone the source code.
Step 2: Run two steps in parallel: one installs node modules in the root folder, the other installs node modules and builds the JS and CSS in the app folder.
Step 3: Deploy the built source code from step 2.
Here is my bitbucket-pipelines.yml
```yaml
image: node:11.15.0

pipelines:
  default:
    - step:
        name: Build and Test
        script:
          - echo "Cloning..."
        artifacts:
          - ./**
    - parallel:
        - step:
            name: Install build
            clone:
              enabled: false
            caches:
              - build
            script:
              - npm install
        - step:
            name: Install app
            clone:
              enabled: false
            caches:
              - app
            script:
              - cd app
              - npm install
              - npm run lint
              - npm run build
    - step:
        name: Deploy
        clone:
          enabled: false
        caches:
          - build
        script:
          - node ./bin/deploy

definitions:
  caches:
    app: ./app/node_modules
    build: ./node_modules
```
Hi @Son Le and welcome to the community.
I'm afraid that this is not possible at the moment.
Every step in a yml file runs in a separate Docker container. We had a request from another user regarding the same issue (their use case was different than yours, but the request was essentially the same):
I am quoting Matt Ryall's response in that request:
We looked at the idea of copying the clone between steps, but it was in fact slower. Because we want to be able to resume steps at any point in the next week (to support manual steps), any data passed between steps needs to be on persistent storage (S3 for us). Uploading data to S3 between steps is costly and time-consuming, and resulted in the pipeline running slower than if we cloned from Bitbucket at the start of each step. So we minimise the data passed between steps to just the declared artifacts.
Regarding what you mentioned here:
Another reason is that if a step clones the source code (and doesn't use the source code from the previous step), the built code will be lost.
You can make use of artifacts to pass any files you generate during a step to the next step:
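As a minimal sketch, declaring artifacts on a build step makes the generated files available to the steps that follow. The paths below (app/build and the node_modules folders) are assumptions based on a typical npm layout, not your actual output directories:

```yaml
- step:
    name: Install app
    clone:
      enabled: false
    script:
      - cd app
      - npm install
      - npm run build
    artifacts:
      # Glob patterns are relative to the clone directory.
      # Adjust these to wherever your build actually writes files.
      - app/build/**
      - app/node_modules/**
```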
If not all the files you generate are stored in artifacts, please let me know and we can look further into it.
We'd need to know:
- the files you want to save as artifacts and their place in the directory structure
- how you define artifacts in your yml file
- which files are missing from the next steps
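For context, files declared as artifacts in an earlier step are restored into the build directory before a later step's script runs, so the consuming step needs no extra configuration. A sketch, with the deploy command taken from your yml:

```yaml
- step:
    name: Deploy
    clone:
      enabled: false
    script:
      # Artifacts from earlier steps (e.g. built JS/CSS) have already
      # been downloaded into this step's build directory at this point.
      - node ./bin/deploy
```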