
Improve FishEye repository scanning performance using Hadoop or something else

We have almost 600 Subversion repositories, some 30 GB or more in size. The last time we tried to support FishEye, scanning was taking over a month, and we decided it wasn't viable.

I'm wondering whether Atlassian has considered supporting Hadoop to improve scanning performance. Revisions seem like a natural unit of work that could be distributed to various nodes for processing (a rough sketch of what I mean is below).

Other than Hadoop, does anyone have other suggestions for improving the performance?
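
To make the idea concrete, here's a purely hypothetical sketch: a stock Hadoop Mapper whose input is one revision number per line, so the framework's normal input splitting spreads revisions across nodes. FishEye exposes no such hook today, and scanRevision() is a placeholder I made up for whatever per-revision indexing work would need to be exposed.

```java
// Purely hypothetical: FishEye has no Hadoop integration, and
// scanRevision() below is a made-up placeholder for whatever
// per-revision indexing work Atlassian would have to expose.
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RevisionScanMapper
        extends Mapper<LongWritable, Text, LongWritable, Text> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Input is a text file with one Subversion revision number per
        // line, so Hadoop's default input splitting hands different
        // revision ranges to different nodes.
        long revision = Long.parseLong(line.toString().trim());

        // Placeholder for the real per-revision work (diffing, indexing
        // changed paths). Emit the result keyed by revision so a reducer
        // could merge the fragments back into one index in order.
        context.write(new LongWritable(revision),
                      new Text(scanRevision(revision)));
    }

    private String scanRevision(long revision) {
        return "indexed r" + revision; // stand-in only
    }
}
```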

7 answers

1 accepted

0 votes
Answer accepted

Atlassian has not responded so presumably it isn't possible.

2 votes
JamieA Rising Star May 15, 2012

Bah, accidentally deleted my comment. Be sure that all your repos are structured in the way FishEye likes: https://answers.atlassian.com/questions/19281/how-can-i-reduce-the-size-of-the-fisheye-indexes

I ended up writing something that automatically generates the exclusion rules (sketched below).

30 GB repos doesn't tell us much: if it's binary files, FishEye doesn't care; if it's metadata, it does.
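
The generator was roughly along these lines. It's a simplified sketch: the project/trunk-branches-tags layout and the ant-style glob excludes are assumptions, so adapt both to how your repos are actually structured.

```java
// Simplified sketch of an exclusion-rule generator. Assumptions: each
// top-level dir is a project with the usual trunk/branches/tags layout,
// and FishEye excludes are ant-style globs. Adapt both to your repos.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class ExcludeRuleGenerator {

    public static void main(String[] args) throws Exception {
        String repoUrl = args[0]; // e.g. file:///var/svn/myrepo

        for (String project : svnList(repoUrl)) {
            if (!project.endsWith("/")) {
                continue; // skip loose top-level files
            }
            // Projects without a tags dir just yield an empty listing.
            for (String tag : svnList(repoUrl + "/" + project + "tags")) {
                // One exclude per tag: tags are cheap copies of trunk,
                // so indexing them mostly bloats the FishEye index.
                System.out.println("exclude /" + project + "tags/" + tag + "**");
            }
        }
    }

    // Shells out to the stock svn client; directories come back with a
    // trailing "/", files without one.
    private static List<String> svnList(String url) throws Exception {
        Process p = new ProcessBuilder("svn", "list", url).start();
        List<String> entries = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                entries.add(line);
            }
        }
        p.waitFor();
        return entries;
    }
}
```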

1 vote

Some minor hints (svn):

  • switching from https to plain http improved performance a lot (a rough timing harness is sketched below)
  • svn also has a direct file access mechanism (file:// URLs), if you can afford to access the svn server's disks directly from the FishEye server
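
If you want to quantify the protocol difference on one of your repos, something along these lines would let you measure it. SVNKit is an assumption here; any client that can walk the full history with changed paths would do as a proxy for what an initial FishEye scan fetches.

```java
// Rough harness to compare access protocols: time a full history walk
// over the same repository via different URLs (file:// vs http:// vs
// https://).
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
import org.tmatesoft.svn.core.internal.io.fs.FSRepositoryFactory;
import org.tmatesoft.svn.core.io.SVNRepository;
import org.tmatesoft.svn.core.io.SVNRepositoryFactory;

public class ProtocolTimer {

    public static void main(String[] args) throws Exception {
        DAVRepositoryFactory.setup(); // enables http:// and https://
        FSRepositoryFactory.setup();  // enables file://

        // Pass the same repo as several URLs, one per protocol.
        for (String url : args) {
            SVNRepository repo =
                    SVNRepositoryFactory.create(SVNURL.parseURIEncoded(url));
            long start = System.currentTimeMillis();
            // Fetch every revision with its changed paths, discarding
            // the entries; we only care about the wall-clock time.
            repo.log(new String[] { "" }, 1, repo.getLatestRevision(),
                     true, true, logEntry -> { });
            System.out.printf("%s: %d ms%n",
                    url, System.currentTimeMillis() - start);
            repo.closeSession();
        }
    }
}
```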

We were using file:// URLs and the repositories were stored on a SAN, so that wasn't the issue. Thanks, though.

JamieA Rising Star May 15, 2012

FWIW, anything other than file:// access is pretty much a non-starter for real-life svn repos.

0 votes

There's a feature request for a 'scanning agent' -- https://jira.atlassian.com/browse/FE-1988 -- vote for it if you'd like to see it.

So far the only 'distributed scanning solution' is to spin up an aux instance and do the initial scanning there.

Hi Justin,

We have also run into problems with big repositories.

We have over 300 repos, and each one is bigger than 10 GB.

We are also interested in your idea of using Hadoop.

Do you know how to use Hadoop with FishEye/Crucible?

How would Hadoop improve the scanning performance?

As far as I know it isn't possible. I was hoping Atlassian would weigh in on the possibility.

I found http://confluence.atlassian.com/display/FISHEYE/Best+Practices+for+FishEye+Configuration for general performance tips. I'm still interested in the idea of using Hadoop though. Atlassian?
