A lot of git for-each-ref processes

Why does our Stash show the following activity every minute (it forks git for-each-ref processes and kills them)?

16261 stash 20 0 23432 9812 1028 R 24 0.0 0:00.71 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16266 stash 20 0 20784 7112 976 R 22 0.0 0:00.65 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16277 stash 20 0 20232 6580 976 R 20 0.0 0:00.61 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16276 stash 20 0 19836 6056 976 R 19 0.0 0:00.58 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16272 stash 20 0 19572 5792 976 R 18 0.0 0:00.55 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16284 stash 20 0 19572 5776 976 R 17 0.0 0:00.52 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16263 stash 20 0 19572 5788 976 R 17 0.0 0:00.50 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16271 stash 20 0 19572 5792 976 R 16 0.0 0:00.49 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16267 stash 20 0 19572 5792 976 R 16 0.0 0:00.47 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16269 stash 20 0 19572 5792 976 R 15 0.0 0:00.46 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16274 stash 20 0 19572 5788 976 R 15 0.0 0:00.45 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16268 stash 20 0 19572 5788 976 R 13 0.0 0:00.40 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02
16270 stash 20 0 19572 5788 976 R 13 0.0 0:00.40 /usr/bin/git for-each-ref --sort=-objecttype --format=%(objecttype)%02%(refname)%02%(objectname)%02

1 answer, 1 accepted, 0 votes

Hi Alexey,

What were the circumstances under which you saw this activity? Was it just after a restart? Or perhaps a cluster node re-joining the cluster?

Were all the processes you show running concurrently? Or were only a few running at a time?

There are a few possible reasons, but the most likely is tag re-indexing. Bitbucket Server keeps a cache of tags so it can annotate commits in the user interface with any associated tags.

It does this in a fairly low-impact way; it processes your repositories serially, so only a single "git for-each-ref" will be running at a given time. Also, this command is quite lightweight and on a typical repository will complete in a few hundred milliseconds at most. However, for instances with large numbers of repositories, tag re-indexing could easily run for tens of minutes before all repositories are re-indexed.
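You can reproduce the command from the process list in a throwaway repository to see what it produces and why it is cheap. In the real invocation %02 is a non-printable field separator; the sketch below swaps it for "|" so the output is readable (the temp-directory repository is made up for illustration):

```shell
# Build a minimal repository with one commit and one tag.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git -C "$repo" tag v1.0

# The same invocation Bitbucket runs, with a visible separator:
# one line per ref, "objecttype|refname|objectname".
git -C "$repo" for-each-ref --sort=-objecttype \
    --format='%(objecttype)|%(refname)|%(objectname)'
```

The command only reads the refs (branches and tags), not commit history, which is why it normally finishes in well under a second even on large repositories.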

I hope this helps explain what you are seeing. Certainly, if this is causing problems with your instance, please raise a support request at https://support.atlassian.com


It forks and kills these processes constantly, all the time. They don't really load the CPU, but it is interesting why Stash runs them again and again. And I have never seen Stash stop running these processes.

Hi Alexey,

There are a few different ways you can try to see which activities are causing these commands to be run, but perhaps the easiest is to enable profiling and then look for instances of "for-each-ref" in the atlassian-bitbucket-profiler.log (or atlassian-stash-profiler.log) log file.

Also, I was going to ask: how do you know Stash is killing the processes? If a process runs for more than 60s (or perhaps 120s?) it will be killed and an error will be logged in atlassian-bitbucket.log: "An error occurred while executing an external process: process timed out". Since that particular command should complete very quickly, it would be abnormal to see this for instances of "git for-each-ref".

> how do you know Stash is killing the processes?

Running strace on some of them shows that the process was terminated by SIGTERM. The processes run for a really short time, less than a minute.
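Attaching to a live process with strace needs ptrace permission (e.g. strace -e trace=signal -p <pid>). A strace-free cross-check, as a minimal sketch: shells report death-by-signal as exit status 128 + signal number, so a process killed by SIGTERM (signal 15) exits with 143.

```shell
# A child shell sends SIGTERM to itself; the parent then reads
# its wait status, which encodes the signal as 128 + 15 = 143.
sh -c 'kill -TERM $$'
echo "exit status: $?"   # prints "exit status: 143"
```

If the for-each-ref processes consistently show status 143 rather than 0, something is deliberately terminating them rather than letting them finish.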

The root of this problem: we have a plugin that continuously checks each open pull request. It has a method to find the users who have permission to merge that pull request. Inside this method there is an invocation of com.atlassian.stash.repository.ref.restriction.RefRestrictionService#canWrite. It is exactly this method that generated the dozens of checks and git for-each-ref processes, and it ran for more than a minute per invocation.
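The multiplication effect can be pictured with a small loop: if a background job re-runs a canWrite-style permission check for every open pull request on every pass, each pass spawns one full ref scan per pull request. The repository path and pull request IDs below are made up for illustration:

```shell
# Hypothetical sketch: N open pull requests => N git processes per
# polling pass, which matches the steady stream seen in top.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init

for pr in 101 102 103; do          # pretend pull request IDs
    # each permission check re-reads every ref in the repository
    git -C "$repo" for-each-ref --format='%(refname)' >/dev/null
done
```

Caching the permission result per repository, instead of re-checking on every poll, is the usual way to break this pattern.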
