We have almost 600 Subversion repositories, some of them 30 GB or larger. The last time we tried to roll out FishEye, the initial scan was taking over a month, and we decided it wasn't viable.
I'm wondering whether Atlassian has considered supporting Hadoop to improve scanning performance. Revisions seem like a natural unit of work that could be distributed across nodes for processing.
Hadoop aside, does anyone have other suggestions for improving performance?
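To make the idea concrete, here's a rough sketch of what I mean, with a local process pool standing in for Hadoop map tasks. This is purely illustrative: FishEye exposes no hook for distributed indexing, the repository path is made up, and nothing here touches FishEye's actual index format.

```python
# Illustrative only: shows that SVN revision ranges are independent
# units of work that can be fanned out to workers. A local process
# pool stands in for Hadoop map tasks; a real indexer would parse
# diffs and emit index records instead of just fetching log metadata.
import subprocess
from multiprocessing import Pool

REPO_PATH = "/san/repos/example"        # hypothetical on-disk repository
REPO_URL = "file://" + REPO_PATH

def head_revision():
    """Latest revision number, via svnlook against the local repo."""
    return int(subprocess.check_output(["svnlook", "youngest", REPO_PATH]))

def partition(head, chunk=1000):
    """Split revisions 1..head into fixed-size, non-overlapping slices."""
    return [(lo, min(lo + chunk - 1, head)) for lo in range(1, head + 1, chunk)]

def scan_slice(rev_range):
    """The 'map task': pull changed-path metadata for one revision slice."""
    start, end = rev_range
    xml = subprocess.check_output(
        ["svn", "log", "--xml", "-v", "-r", f"{start}:{end}", REPO_URL])
    return start, end, len(xml)

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        for start, end, nbytes in pool.imap_unordered(
                scan_slice, partition(head_revision())):
            print(f"scanned r{start}-r{end}: {nbytes} bytes of log metadata")
```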
Atlassian has not responded, so presumably it isn't possible.
Bah, accidentally deleted my comment. Make sure all your repos are structured the way FishEye likes: https://answers.atlassian.com/questions/19281/how-can-i-reduce-the-size-of-the-fisheye-indexes
I ended up writing something that automatically generates the exclusion rules; a sketch follows.
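Roughly along these lines. The repository path is a placeholder, and the anchored-regex output format is an assumption: check what your FishEye version's per-repository "excluded paths" setting actually accepts before using anything like this.

```python
# Sketch: list each repository's top-level directories via `svn ls`
# and emit an exclusion pattern for everything that isn't part of a
# conventional trunk/branches/tags layout.
import subprocess

REPO_URL = "file:///san/repos/example"   # hypothetical repository
KEEP = {"trunk", "branches", "tags"}

def toplevel_dirs(repo_url):
    """Top-level directory names; `svn ls` marks dirs with a trailing '/'."""
    out = subprocess.check_output(["svn", "ls", repo_url], text=True)
    return [line.rstrip("/") for line in out.splitlines() if line.endswith("/")]

def exclusion_rules(repo_url):
    """One anchored pattern per top-level dir FishEye shouldn't index."""
    return [f"/{d}(/.*)?" for d in toplevel_dirs(repo_url) if d not in KEEP]

if __name__ == "__main__":
    for rule in exclusion_rules(REPO_URL):
        print(rule)
```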
A 30 GB repo doesn't tell us much on its own: if it's mostly binary file content, FishEye doesn't care; if it's metadata (revisions and changed paths), that's what kills it.
Some minor hints (SVN)
We were using file:// URLs and the repositories were stored on a SAN, so that wasn't the issue. Thanks, though.
FWIW, anything other than file:// access is pretty much a non-starter for real-life SVN repos.
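If you want to see the gap on your own repos, a crude timing check like this makes the point (URLs are placeholders; an indexer's workload is mostly bulk metadata reads like this):

```python
# Crude comparison of bulk metadata reads over direct filesystem
# access vs. the network: time a verbose log over the same range.
import subprocess
import time

URLS = [
    "file:///san/repos/example",         # direct filesystem access
    "https://svn.example.com/example",   # same repository over HTTP
]

for url in URLS:
    t0 = time.monotonic()
    subprocess.check_output(["svn", "log", "--xml", "-v", "-r", "1:5000", url])
    print(f"{url}: {time.monotonic() - t0:.1f}s for 5000 revisions of log")
```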
There's a feature request for a 'scanning agent' -- https://jira.atlassian.com/browse/FE-1988 -- vote for it if you'd like to see it.
So far the only 'distributed scanning' option is to spin up an auxiliary instance, do the initial scan there, and move the finished index over to production.
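If you go that route, the handoff is basically a directory copy. A minimal sketch, assuming the per-repository index lives under FISHEYE_INST/var/cache/<repo> (verify against your version's docs; the instance paths below are made up) and that both instances are stopped while the copy happens:

```python
# Hand a finished per-repository index from an aux FishEye instance
# to production. Both instances must be shut down first.
import shutil
from pathlib import Path

AUX_CACHE = Path("/opt/fisheye-aux/var/cache")   # hypothetical paths
PROD_CACHE = Path("/opt/fisheye/var/cache")

def hand_over(repo_name):
    src = AUX_CACHE / repo_name
    dst = PROD_CACHE / repo_name
    if dst.exists():
        shutil.rmtree(dst)        # drop any stale partial index first
    shutil.copytree(src, dst)     # then drop in the finished one

hand_over("example")
```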
Hi Justin,
We've also hit problems with big repositories.
We have over 300 repos, and each one is bigger than 10 GB.
We're also interested in your idea of using Hadoop.
Do you know how to use Hadoop with FishEye/Crucible?
How would Hadoop improve scanning performance?
As far as I know it isn't possible. I was hoping Atlassian would weigh in on the possibility.
I found http://confluence.atlassian.com/display/FISHEYE/Best+Practices+for+FishEye+Configuration for general performance tips. I'm still interested in the idea of using Hadoop though. Atlassian?