Hello, we are using Stash Data Center (SDC) in a test environment and need to test its capacity in terms of CPU load and request serving.
We pushed a large repository (13 GB) to it and ran git clone from 100 hosts. This caused the SCM Cache to cache everything, so the test is not very useful, because it only exercises SCM Caching.
Is it possible to simulate realistic load on the SCM Cache (random hits/misses)? How do you do Stash performance testing at Atlassian?
Thanks for posting your question. I am going to do my best to explain what we, at Atlassian, did to verify that Stash DC provides benefits to enterprises. We mixed a number of heavily used operations, which we determined to be representative of a large segment of our enterprise customers. We found that the resource utilisation of the Stash web UI operations was dwarfed by hosting operations: build server polling and shallow clones made up a significant portion of the traffic hitting a typical Stash instance.
With clones you are going to test one of two things: (1) how well the SCM Cache serves cached pack files, and/or (2) how much CPU, memory and disk git uses to create pack files. Doing 100 clones of a 13 GB repository is not going to yield any insights into Stash DC performance, and I don't think it will give useful information for server/cluster sizing. When performing hosting operations, Stash essentially copies buffers from a git process to a network socket; on top of git it adds authentication, permissions, overload protection and clone caching.
I will explain why doing 100 clones is not going to give you any insight into the capacity of your Stash DC cluster. For 100 concurrent clones, Stash uses 400 Java threads and spawns 100 git processes, each of which may in turn spawn as many pack threads as there are CPUs in your machine. So you could be looking at 400 + (100 × CPU count) threads all demanding system resources. With all of Stash's overload protection limits removed, this will probably result in system overload or degraded performance. The test is not representative of anything Stash will encounter in day-to-day operation.
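As a back-of-the-envelope check of that arithmetic (the 4-threads-per-clone and per-CPU pack-thread figures are from the answer above; the 8-core count is just an example value):

```shell
# Estimate peak thread demand for N concurrent clones:
# 4 Stash Java threads per clone, plus one git process per clone
# that may spawn a pack thread per CPU core.
CLONES=100
CPUS=8                                  # example core count; use nproc on your node
THREADS=$((4 * CLONES + CLONES * CPUS))
echo "$THREADS"                         # prints 1200
```

On an 8-core node, 100 concurrent clones could therefore demand on the order of 1200 threads, which is why removing the overload protection limits tends to degrade the whole instance.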
What I would recommend is examining your current Stash installation's access logs to count the pushes, clones and other hosting operations you are currently servicing. Scale those numbers by your projected growth; with that information you can set up an experiment that stresses Stash DC with a similar mix of operations, to determine whether your hardware can cope with the projected load. Increase the load to the point where you saturate your instance; that will let you calculate your headroom quite accurately.
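A minimal sketch of that log mining. The log layout below is invented for illustration (one operation label per line); adapt the grep patterns to the actual format of your instance's access log:

```shell
# Count hosting operations by type from an access log.
# The "clone"/"fetch"/"push" labels and the pipe-separated layout
# are assumptions -- check your Stash version's access log format.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2016-01-01 | 10.0.0.1 | clone | 200
2016-01-01 | 10.0.0.2 | fetch | 200
2016-01-01 | 10.0.0.3 | push  | 200
2016-01-01 | 10.0.0.4 | clone | 200
EOF
for op in clone fetch push; do
  printf '%s %d\n' "$op" "$(grep -cw "$op" "$LOG")"
done
```

The resulting per-operation counts, scaled by projected growth, give you the operation mix to replay in your load test.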
You can periodically invalidate the SCM Cache's cache of a repository by doing a push. Whenever you push to a repository, the SCM Cache has to clear the caches for that repository because its contents have changed.
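One way to script that cache-busting push is a loop of empty commits. This is a sketch demonstrated against a throwaway local bare repository; in a real test you would clone your test Stash repository instead:

```shell
# Periodically invalidate the SCM Cache by pushing an empty commit.
# Uses a local bare repo as a stand-in for the Stash remote.
set -e
WORKDIR=$(mktemp -d)
git init -q --bare "$WORKDIR/origin.git"
git clone -q "$WORKDIR/origin.git" "$WORKDIR/clone"
cd "$WORKDIR/clone"
git config user.email loadtest@example.com   # hypothetical identity
git config user.name  loadtest
for i in 1 2 3; do
  git commit -q --allow-empty -m "cache-buster $i"
  git push -q origin HEAD:master   # each push clears the repo's SCM Cache
done
git rev-list --count HEAD          # prints 3
```

Interleaving such pushes with your clone/fetch traffic forces cache misses, giving you the random hit/miss mix asked about in the question.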
Please let me know if you require more information.
Felix, Gatling is not what I need, I guess. I tried it, but it only tests Stash from localhost. We need to stress test Stash's throttling option on SDC (run git fetch or something similar from 100 hosts, not from a single localhost). We want to see how load balancing works and how throttling ramps up.