
Pipelines is too slow

Every step costs at minimum 55 seconds due to the git clone.


We need a way to declare when we want the clone and artifact operations to occur.


I recommended in a ticket that the following be converted to bash scripts which would be injected into the container:


- `git_clone(depth)`

- `artifact_save([]globs)`

- `artifact_restore(step_name)`
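Roughly, those helpers might be implemented like this. To be clear, this is only a sketch: the cache directory and the `BITBUCKET_GIT_URL` / `BITBUCKET_STEP_NAME` variables are illustrative assumptions, not a real Bitbucket API:

```shell
#!/bin/sh
# Illustrative sketch only: ARTIFACT_CACHE, BITBUCKET_GIT_URL and
# BITBUCKET_STEP_NAME are assumed names, not an actual Bitbucket API.
ARTIFACT_CACHE="${ARTIFACT_CACHE:-/tmp/pipeline-artifacts}"

# git_clone <depth>: fetch only <depth> commits of history.
git_clone() {
  git clone --depth "$1" "$BITBUCKET_GIT_URL" .
}

# artifact_save <glob>...: tar the matching files under the current
# step's name so later steps can restore them.
artifact_save() {
  mkdir -p "$ARTIFACT_CACHE"
  tar -czf "$ARTIFACT_CACHE/$BITBUCKET_STEP_NAME.tar.gz" "$@"
}

# artifact_restore <step_name>: unpack whatever that step saved.
artifact_restore() {
  tar -xzf "$ARTIFACT_CACHE/$1.tar.gz"
}
```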



Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
Jul 04, 2018

Hi ZenobiusJ,

It'd be useful to better understand your use case.

If we made these operations able to be manually applied by the user, how would you use them to speed things up? What would you do differently to the current order of doing things? Would you not always need the git clone to determine what to do?


Matt Watson (Bitbucket Pipelines Development Manager)


How would I use it?

Like this:

With an opt-in yaml key:

`git: [manual|auto (default)]`

Using the proposed bash scripts:

- `bitbucket_checkout depth`

  - `git checkout at commit with depth configured`

- `bitbucket_artifact_save glob glob glob`

  - saves all files found matching glob(s)

- `bitbucket_artifact_restore step_name`

  - restores all globbed files from step_name

```yaml
- step:
    image: our-image-based-on-node:8-alpine
    name: setup
    git: manual
    script:
      - |
        bitbucket_checkout full
        npm run prod
        bitbucket_artifact_save ./client/build/**/* ./client/build/docs/**/*
- step:
    image: atlassians-awscli-image
    name: publish
    git: manual
    script:
      - |
        ls -al ./
        bitbucket_artifact_restore setup
        aws s3 sync --delete ./client/build/ s3://our-bucket/releases/$BITBUCKET_BUILD_NO/
- step:
    image: cloudfoundries-concourse-ci-slack-resource-image
    name: notify
    git: manual
    script:
      - |
        echo "complex json object" | envsubst | /opt/resource/out
```


If a step required files from the commit, my options here are:

- use artifacts to persist it through the pipeline

- run a git checkout


I estimate this would save roughly 40-50 seconds for each step.
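Most of that saving is the history transfer itself; the effect of a depth-limited clone is easy to see locally (the throwaway repo below is purely for illustration):

```shell
set -e
# Build a throwaway repo with a few commits to clone from.
tmp=$(mktemp -d)
git init -q "$tmp/origin-repo"
cd "$tmp/origin-repo"
git config user.email ci@example.com
git config user.name ci
for i in 1 2 3; do
  echo "$i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
cd "$tmp"

# file:// forces the real transfer protocol, so --depth applies:
# only the tip commit is fetched, not the full history.
git clone -q --depth 1 "file://$tmp/origin-repo" shallow
git -C shallow rev-list --count HEAD
```

On a repository with years of history the difference is far larger than three commits, which is where the per-step clone time goes.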

If you wanted to be even more cautious about what you're putting into people's containers, you could provide an optional key like so:

`bitbucket_api_location: /opt/bitbucket`

Then the above calls to `bitbucket_checkout`, `bitbucket_artifact_save` and `bitbucket_artifact_restore` would change to `/opt/bitbucket/bitbucket_checkout`, `/opt/bitbucket/bitbucket_artifact_save` and `/opt/bitbucket/bitbucket_artifact_restore`.

As to your comment about needing to perform a git clone in order to know what to do:

I'd imagine that the whole pipeline yaml file is provided out of band, separate from the git clone.

Really though, I invite you to see how CircleCI does this.

Atlassian Team
Jul 05, 2018

So, as I understand it, you have subsequent steps of a multi-step pipeline that only need the artifacts from the previous step, not the checkout, or sometimes don't even need the artifacts and just execute commands?

Thus the clone and artifact retrieval is wasted time for you?

Yeah, being able to decide what contributes to the step execution time is going to be a huge positive.


We've been able to mitigate the huge cost of npm-installing the modules we need for gulp, browserify and sass by baking them into the Docker image we use, but when the team saw the unavoidable 55-second cost of each step, there was dismay.

I know there'll be an unavoidable delay for container warmup on each step, but having these `bitbucket_*` commands mounted at a configurable path would help speed up everyone's build times.


