Problem with git-upload-pack


We have been happily using Stash for almost five months with no real issues.  However, I have just had to reboot our Stash server, and since doing so, Stash has become almost unresponsive.

When digging into it, we are seeing very high CPU usage from the git-upload-pack process. This seems to be triggered when we start a build from TeamCity.  The process has now been running for over 40 minutes and shows no sign of stopping.  I have read that this process is needed to pack up the loose objects in the repository before sending them over the wire to clients like TeamCity.  Is my understanding here correct?  If so, is there anything that can be done to improve this?

The system, as it is, is almost unusable.

I have also read about using git gc to clean up loose objects in the repository, which should reduce the need for such an extensive git-upload-pack run.  If this is the case, how can I run it on Stash?  Should I also run it on the cloned repository in the TeamCity working folder?
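For reference, since Stash repositories are ordinary bare Git repositories on disk, a minimal sketch of the two commands involved, demonstrated here on a throwaway repository (the real Stash repository path mentioned in the comments is an assumption; check your own STASH_HOME):

```shell
# Self-contained demo repository; on the Stash server you would instead
# cd into the bare repository (somewhere under STASH_HOME/data/repositories/
# -- the exact path is an assumption, check your instance).
demo=$(mktemp -d)
cd "$demo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt
git add file.txt
git commit -qm "demo commit"

# "count" in this output is the number of loose objects.
git count-objects -v

# Pack the loose objects into a single pack file and prune unreachable ones.
git gc --quiet

git count-objects -v   # the loose-object count should now be 0
```

Running this against the working copy on the TeamCity agent is harmless but usually unnecessary; it is the server-side repository that pays the packing cost when serving fetches.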

Any guidance you can provide in getting this resolved would be greatly appreciated!



5 answers

1 accepted

1 vote
Accepted answer

Hi Gary,

Thanks for reviewing this.

I think we need to analyse what is going on with your instance, and we will need to collect data on your environment for that. Could you please raise a support issue and quote this question there?


Thiago Bomfim

2 votes

Hi Gary,

Please have a look at the documentation below:

If your CI server is constantly polling Stash, turning on ref advertisement caching and making sure caching is enabled for both SSH and HTTP might help you.
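As a rough sketch, those settings live in STASH_HOME/stash-config.properties; the property names and defaults below are my recollection of the Stash 3.x SCM cache plugin documentation, so please verify them against the config properties page for your version:

```properties
# Cache ref advertisements (the response CI polling requests repeatedly);
# this one is off by default.
plugin.stash-scm-cache.refs.enabled=true

# Cache upload-pack (clone/fetch) responses.
plugin.stash-scm-cache.upload-pack.enabled=true

# How long a cached pack is kept, in seconds (8 hours here).
plugin.stash-scm-cache.upload-pack.ttl=28800
```

A restart of Stash is needed for changes to this file to take effect.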

Best regards,
Thiago Bomfim
Atlassian Support




Before posting here I saw that article, but I didn't see anything there that would directly help me, unless I missed something.  Also, the need to scale concerns me.  We really only have 1 repository in Stash, which is being used by two developers (this is a prototype setup to see how things will work).  Do I really need to scale up my server for this sort of setup?

Can you point me in the direction of how to setup and configure:

  • Ref advertisement
  • HTTP caching

We are not using SSH.

Also, any thoughts on the last part of my question, i.e. with regard to cleaning up the git repository?  Is this something that we should actively be doing, or is this something that Stash is doing for us?




Apologies; at first glance, I thought you had referred to this article:

Let me try the suggestions in the article you linked to.



This morning, I have been through the article that you mention and applied the settings that make sense to me:

  • HTTP Caching enabled
  • Ref Advertisement is enabled
  • I have increased the TTL for packs to 8 hours

I have also upgraded Stash from 3.1.1 to 3.5.0; however, none of this seems to have made a difference.

Having just restarted TeamCity and Stash, I am still seeing very high CPU usage when triggering a build.  I started a build at 08:58 and CPU usage on the Stash server immediately jumped to 80-90%; after about 1 hour 20 minutes, it returned to normal.  During this time the TeamCity build failed due to a timeout, but starting it again resulted in a successful build.

Subsequent builds then also work as I would like them to, i.e. reasonably fast.  I can only assume that they are now using the pack file which was generated during the first failed build.

I am still concerned about the length of time that the first build takes.  I can "fix" this by scheduling a build to run every morning before anyone comes into the office; by the time it finishes, we should be able to use the cached pack file during our working day.  That really doesn't seem like a great solution, though.  Can you suggest anything that would reduce the time it takes to create the initial pack file?
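One avenue worth trying, sketched here on a throwaway repository, is pre-building the pack on the server side so that the first fetch does not pay the full packing cost (on the Stash server the command would be run inside the bare repository under STASH_HOME; the exact on-disk path is an assumption, so check your instance):

```shell
# Self-contained demo repository standing in for the server-side bare repo.
demo=$(mktemp -d)
git init -q "$demo"
cd "$demo"
git config user.email demo@example.com
git config user.name demo
echo v1 > file.txt
git add file.txt
git commit -qm "c1"

# Repack everything into a single pack; --write-bitmap-index (git 2.0+)
# lets subsequent clones and fetches reuse this pack instead of
# recomputing one per request.
git repack -a -d --write-bitmap-index

ls .git/objects/pack/
```

This is essentially a manual version of what your scheduled early-morning build achieves as a side effect, without burning a CI build to do it.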



Hi Gary,

Can you give us some details re:

  • The size of the repository that the TeamCity server is cloning
  • The machine you are running Stash on (e.g. available CPUs, RAM, disk)

Thanks for the suggestion. I am now working with the support team to try to resolve this.

Gary
