We have one very large SVN repository that we need to scan in FishEye. We do not want to scan the repository from scratch, as that would take a very long time, so the team has agreed to index only recent data (from the last week).
To achieve this, we set up the FishEye repository and provided the initial revision in the advanced settings. However, when we start the scan, it gets stuck at certain revisions and keeps scanning the same ones indefinitely.
Could you please guide us on how to set this up? Please let us know if you need more information.
Are you able to determine the size of these initial revisions? For example, are there any huge files committed? How many files are committed in each of these initial revisions?
Also, do you see any errors in the FishEye log files?
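To help answer the question about revision size, something like the following `svn` one-liner can count how many paths changed in a given revision. The revision number and paths below are made up for illustration; in a live setup you would run `svn log` against your actual repository URL.

```shell
# Sample of `svn log -v` output for one revision (hypothetical paths),
# captured here so the counting step can be demonstrated offline:
sample='------------------------------------------------------------------------
r1234 | alice | 2020-01-01 | 2 lines
Changed paths:
   M /trunk/src/main.c
   A /trunk/docs/notes.txt

Fix build'

# Each changed path appears as three spaces, a status letter (A/M/D/R),
# then a space; counting those lines gives the number of files touched.
printf '%s\n' "$sample" | grep -c '^   [AMDR] '

# Against a real repository, the equivalent would be:
#   svn log -v -r 1234 --incremental REPO_URL | grep -c '^   [AMDR] '
```

A revision touching tens of thousands of paths (e.g. a branch or tag copy of the whole tree) is a common reason for a scan to appear stuck on one revision.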
You could also consider reducing the Block Size parameter, which controls how many revisions FishEye pulls down from the repository in one batch. Larger values can reduce the time it takes FishEye to scan your repository for changes, but use more memory; smaller values reduce the amount of memory FishEye uses during scans. The default is 400. Note that changing this setting requires a repository restart.