Hello, I have Stash 3.10 and for the past few days we have been experiencing very slow performance when browsing repositories and pull/merge requests.
Currently we have ~900 users and 2 CPUs (CPU usage doesn't seem to be high), Git version 1.9.4, connected to a 2014 MSSQL database via JDBC driver 4.0, Java 1.8.0_73.
The server was running on 8 GB of RAM; we increased it to 16 GB and now have ~6 GB available, so you could say there was a slight lack of memory, but the increase did not solve the performance issues.
8 GB is allocated to the JVM via the min/max memory arguments.
Looking at the profiler logs, can the slowness be related to authentication?
2017-11-10 07:36:41,263 | *hidden* | *hidden* | *hidden* | *hidden*
[288ms] - "GET /rest/api/1.0/projects/*hidden*/repos/*hidden*/pull-requests/1/activities HTTP/1.1"
[174ms] - Authentication org.springframework.security.authentication.AuthenticationProvider.authenticate(Authentication)
[146ms] - attemptAuthentication - com.atlassian.stash.stash-authentication:crowdHttpAuthHandler
[146ms] - StashUser com.atlassian.stash.internal.user.CaptchaService.authenticateWithCaptcha(CaptchaTicket,UncheckedOperation)
[146ms] - StashUser com.atlassian.stash.user.UserService.authenticate(String,String)
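To surface other slow spans, one could filter the profiler log for entries above a threshold. A rough sketch, assuming the bracketed `[NNNms]` timing prefix shown above; the log file name `atlassian-stash-profiler.log` is an assumption and may differ in your install:

```shell
# Print profiler entries whose timing has three or more digits,
# i.e. everything that took 100 ms or longer.
grep -E '\[[0-9]{3,}ms\]' atlassian-stash-profiler.log
```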
Any suggestions on how to debug this issue further?
Here are some general rules for scaling, which you might already follow, but they can help rule out certain problems:
Regarding CPU usage: much of the heavy lifting is delegated to Git. As a result, when deciding on the required hardware to run Bitbucket/Stash, the CPU usage of the Git processes is the most important factor to consider. And, as is the case for memory usage, cloning large repositories is the most CPU-intensive Git operation.

When you clone a repository, Git on the server side will create a pack file (a compressed file containing all the commits and file versions in the repository) that is sent to the client. While preparing a pack file, CPU usage will go up to 100% for one CPU.
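To see whether pack-file creation is what is consuming CPU, you can list the server-side Git processes sorted by CPU usage. A hypothetical monitoring one-liner (GNU `ps` options assumed; process names may vary by platform):

```shell
# Show Git processes (e.g. git pack-objects during clones) sorted by CPU.
# The [g]it pattern keeps grep from matching its own process entry.
ps -eo pcpu,etime,args --sort=-pcpu | grep -E '[g]it' || echo "no git processes running"
```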
Encryption (either SSH or HTTPS) will have a significant CPU overhead if enabled. As for which of SSH or HTTPS is to be preferred, there's no clear winner, each has advantages and disadvantages as described in the following table.
The size of the database required for Bitbucket Server depends in large part on the number of repositories and the number of commits in those repositories.
A very rough guideline is: 100 + ((total number of commits across all repos) / 2500) MB.
So, for example, for 20 repositories with an average of 25,000 commits each, the database would need 100 + (20 * 25,000 / 2500) = 300MB.
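The worked example above can be checked with a quick bit of shell arithmetic (the guideline formula from this answer; integer MB is precise enough for a rough estimate):

```shell
# Rough database size guideline: 100 MB base + total commits / 2500 MB.
repos=20
commits_per_repo=25000
estimate_mb=$(( 100 + repos * commits_per_repo / 2500 ))
echo "${estimate_mb} MB"   # 300 MB
```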
Even though you may not see high CPU usage, supporting 900 users on 2 CPUs is not going to work. What's more, by allocating an 8 GB heap to Stash you may have made things worse: Stash usually works fine with the default memory allocation. I would strongly suggest adding more CPUs (at least 6 more, so you get 8 in total) and reducing the minimum and maximum heap size to no more than 2 GB.
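For reference, the heap is usually reduced by editing the min/max memory variables in `bin/setenv.sh` under the Stash installation directory. A sketch only; the variable names below match a typical Stash 3.x install, but check your own `setenv.sh`:

```shell
# bin/setenv.sh -- cap the Stash JVM heap at 2 GB as suggested above
JVM_MINIMUM_MEMORY="1g"
JVM_MAXIMUM_MEMORY="2g"
```

Restart Stash after changing these values for them to take effect.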
And last, but not least, I would strongly suggest upgrading to Bitbucket Server. :-)
Premier Support Engineer