We have a very large SVN repository that we need to scan in FishEye. We do not want to scan it from scratch, as that would take a very long time. The team has agreed to scan only recent data (from the last week).
To achieve this, we set up the FishEye repository and provided the initial revision in the advanced settings. However, when we start the scan, it gets stuck at certain revisions and keeps scanning them indefinitely.
Could you please guide us on how to set this up? Please let us know if you need more information.
Are you able to determine the size of those initial revisions? For example, do they include any huge files? How many files are committed in each of them?
Also, do you see any errors in the FishEye log files?
You could also consider reducing the Block Size parameter, which controls how many revisions FishEye pulls from the repository in one batch. Larger values can reduce the time FishEye takes to scan your repository for changes but use more memory; smaller values reduce the memory FishEye uses during scans. The default is 400. Note that changing this setting requires a repository restart.