Every step costs at minimum 55 seconds due to the git clone.
We need a way to declare when we want the clone and artifact operations to occur.
I recommended in a ticket that the following be converted to bash scripts, which would be injected into the container (a rough sketch follows the list):
- git_clone(depth)
- artifact_save([]globs)
- artifact_restore(step_name)
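These would just be thin wrappers around git. A rough sketch of what the `bitbucket_checkout` helper might look like (the `BITBUCKET_*` variables are the standard Pipelines ones; everything else here is an assumption, not an existing feature):

```bash
#!/bin/sh
# Hypothetical sketch of the proposed bitbucket_checkout helper.
# Usage: bitbucket_checkout <depth|full>
set -e
if [ "$1" = "full" ]; then
  # full history
  git clone "$BITBUCKET_GIT_SSH_ORIGIN" .
else
  # shallow clone to the requested depth
  git clone --depth "$1" "$BITBUCKET_GIT_SSH_ORIGIN" .
fi
git checkout "$BITBUCKET_COMMIT"
```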
How would I use it?
Like this:
With an opt-in YAML key:
`git: [manual|auto (default)]`
And using the proposed bash scripts:
- `bitbucket_checkout depth`
  - does a git checkout at the commit, with the configured depth
- `bitbucket_artifact_save glob glob glob`
  - saves all files found matching the glob(s)
- `bitbucket_artifact_restore step_name`
  - restores all globbed files saved by step_name
```yaml
pipelines:
  default:
    - step:
        image: our-image-based-on-node:8-alpine
        name: setup
        git: manual
        script:
          - |
            bitbucket_checkout full
            npm run prod
            bitbucket_artifact_save ./client/build/**/* ./client/build/docs/**/*
    - step:
        image: atlassians-awscli-image
        name: publish
        git: manual
        script:
          - |
            ls -al ./
            bitbucket_artifact_restore setup
            aws s3 sync --delete ./client/build/ s3://our-bucket/releases/$BITBUCKET_BUILD_NUMBER/
    - step:
        image: cloudfoundries-concourse-ci-slack-resource-image
        name: notify
        git: manual
        script:
          - |
            echo "complex json object" | envsubst | /opt/resource/out
```
If a step requires files from the commit, my options here are (the second is sketched below):
- use artifacts to persist them through the pipeline
- run a git checkout in that step
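For the second option, a step could opt out of the automatic clone and do its own shallow checkout. A hedged sketch, reusing the proposed `git: manual` key and `bitbucket_checkout` helper (the step name and test command are just placeholders):

```yaml
    - step:
        name: integration-tests
        git: manual
        script:
          - |
            bitbucket_checkout 1    # shallow checkout only in the step that needs files
            npm test
```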
I estimate this would save roughly 40-50 secs for each step.
If you wanted to be even more cautious about what you're putting into people's containers, you could provide an optional key like so:
`bitbucket_api_location: /opt/bitbucket`
Then the above calls to `bitbucket_checkout`, `bitbucket_artifact_save`, and `bitbucket_artifact_restore` would change to `/opt/bitbucket/bitbucket_checkout`, `/opt/bitbucket/bitbucket_artifact_save`, and `/opt/bitbucket/bitbucket_artifact_restore`.
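A minimal sketch of how that could look in the YAML (again, `bitbucket_api_location` and the helper scripts are part of this proposal, not an existing Pipelines feature):

```yaml
    - step:
        name: setup
        git: manual
        # proposed key: mount the injected helpers at a path we choose
        bitbucket_api_location: /opt/bitbucket
        script:
          - |
            /opt/bitbucket/bitbucket_checkout full
            npm run prod
            /opt/bitbucket/bitbucket_artifact_save ./client/build/**/*
```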
As to your comment about needing to perform a git clone in order to know what to do:
I'd imagine the whole pipeline YAML file is provided out of band, separate from the git clone.
Really though, I invite you to see how CircleCI does this: https://circleci.com/docs/2.0/configuration-reference/#checkout
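There, the checkout is just an explicit, optional step in the job config. A minimal example along the lines of their docs (the image tag is illustrative):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8
    steps:
      - checkout            # clone only happens because we asked for it
      - run: npm run prod
```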
So as I understand it: you have subsequent steps of a multi-step pipeline that only need the artifacts from the previous step, not the checkout, or sometimes don't even need the artifacts, they just execute commands?
Thus the clone and artifact retrieval is wasted time for you?
Yeah, being able to decide what contributes to each step's execution time is going to be a huge positive.
We've been able to mitigate the huge risk of npm installing the assets used by gulp, browserify, and sass by baking all those npm modules into a Docker image we use, but when the team saw the unavoidable 55-second cost of each step, there was dismay.
I know there'll be an unavoidable delay for container warmup on each step, but having these `bitbucket_*` commands mounted at a configurable path would help speed up everyone's build times.