We plan to use the JIRA Subversion Plugin 0.10.7 on Jira 4.4.5. The enterprise has 600 SVN repositories, with various project sizes.
Is it possible to use the JIRA Subversion Plugin at such a large scale?
We use it with 1400 repositories; a jira repo is automatically created for any new svn repo.
In short, it works fine for us; the svn index is only 40-60 MB or so. However I changed the plugin substantially... Retrieving the propertysets was extremely slow on Oracle, so I made lots of optimisations there.
The main change was to the indexing though. After the first time a repo is indexed, it gets the last changeset it saw by querying the lucene index for the highest changeset associated with a jira issue. This was no good for us, as we have repos with 50-100k revs that have no associated jira issues (imported from clearcase and going back 20 years, way before jira existed). These repos get continually scanned every 10 mins starting from rev 1. So I modified it to store the most recent scanned rev in the database.
The unfortunate side effect of this is that you can't just blow away the index and have it reindex from scratch; you also need to reset the counters in the db. But this is a small price to pay.
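To make that change concrete, here is a minimal sketch of the counter-in-the-database approach. The table name, columns and plain JDBC access are assumptions for illustration only, not the plugin's real schema or persistence layer:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Hypothetical helper that persists the last indexed revision per repository,
 * so a re-scan can resume from (lastRev + 1) instead of deriving the starting
 * point from the Lucene index (the highest changeset linked to a JIRA issue).
 * Illustrative schema, not the plugin's real one:
 *   CREATE TABLE svn_index_state (repo_id BIGINT PRIMARY KEY, last_rev BIGINT)
 */
public class RevisionCounterStore {

    private final Connection connection;

    public RevisionCounterStore(Connection connection) {
        this.connection = connection;
    }

    /** Returns the last indexed revision for the repository, or 0 if never indexed. */
    public long getLastIndexedRevision(long repoId) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT last_rev FROM svn_index_state WHERE repo_id = ?")) {
            ps.setLong(1, repoId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        }
    }

    /** Records the newest revision that has been scanned for the repository. */
    public void setLastIndexedRevision(long repoId, long revision) throws SQLException {
        try (PreparedStatement update = connection.prepareStatement(
                "UPDATE svn_index_state SET last_rev = ? WHERE repo_id = ?")) {
            update.setLong(1, revision);
            update.setLong(2, repoId);
            if (update.executeUpdate() == 0) {
                // first scan of this repository: no row yet, so insert one
                try (PreparedStatement insert = connection.prepareStatement(
                        "INSERT INTO svn_index_state (repo_id, last_rev) VALUES (?, ?)")) {
                    insert.setLong(1, repoId);
                    insert.setLong(2, revision);
                    insert.executeUpdate();
                }
            }
        }
    }
}

With something like this in place, a scan resumes from getLastIndexedRevision(repoId) + 1, which is also why deleting the Lucene index alone is no longer enough to force a full reindex: the stored counters would need resetting too.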
So, to summarise, it really depends on the number of revs in your repos, and whether all of them have commits related to jira. In our case it was unusable as it is delivered, but the amount of work to change it wasn't that huge.
Fisheye also has its own problems btw (huge understatement). We use fisheye too; both plugins were modded so that the svn one is suppressed if there is an application link for the project, and the fisheye one is suppressed if not.
Hi guys!
After reading about the problems with the Atlassian plug-in, you might be interested in this fork:
https://bitbucket.org/pbeltranl/jira-subversion-plugin-plus
Regarding the problems you reported, the fork fixes these:
@Jamie ...as we have repos with 50-100k revs that have no associated jira issues (imported from clearcase and going back 20 years, way before jira existed). These repos get continually scanned every 10 mins starting from rev 1.
The fork indexes all the Subversion commits, so if no new commits have been added since the latest scan it does nothing (only a minimal check) and returns.
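For illustration, the minimal check could look roughly like this with SVNKit. The class and the index() method are hypothetical, but the SVNKit calls (getLatestRevision, log) are the library's real API: getLatestRevision() is a single cheap round trip, and log() streams only the revisions newer than the last indexed one.

import org.tmatesoft.svn.core.SVNException;
import org.tmatesoft.svn.core.SVNLogEntry;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
import org.tmatesoft.svn.core.io.SVNRepository;
import org.tmatesoft.svn.core.io.SVNRepositoryFactory;

/**
 * Illustrative incremental scan in the spirit of the fork: when nothing has
 * changed, the only remote call made is getLatestRevision().
 * The last indexed revision would come from the plugin's own store
 * (see the sketch above); here it is just a method parameter.
 */
public class IncrementalScanner {

    public void scan(String repositoryUrl, long lastIndexedRevision) throws SVNException {
        DAVRepositoryFactory.setup(); // enables http/https access; other protocols need their own setup() call

        SVNRepository repository =
                SVNRepositoryFactory.create(SVNURL.parseURIEncoded(repositoryUrl));
        try {
            long youngest = repository.getLatestRevision();
            if (youngest <= lastIndexedRevision) {
                return; // nothing new since the last scan: the "minimal check" case
            }
            // Stream only the new revisions; SVNKit pushes log entries to the
            // handler as they arrive instead of buffering the whole range.
            repository.log(new String[]{""}, lastIndexedRevision + 1, youngest,
                    true /* include changed paths */, true /* strict node history */,
                    (SVNLogEntry entry) -> index(entry));
        } finally {
            repository.closeSession();
        }
    }

    private void index(SVNLogEntry entry) {
        // placeholder: persist revision, author, date, message and changed paths
        System.out.println("indexed r" + entry.getRevision());
    }
}

Run against a repository with no new commits, the method returns right after the getLatestRevision() comparison, which matches the "does nothing and returns" behaviour described above.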
@Luke... Fisheye would be able to manage this better due to you being able to allow it to use multiple threads on the indexing of repositories so should there be a large change-set other repositories should still be updated.
The fork also scans multiple repositories in parallel. It relies on SVNKit to do that. Another advantage is that SVNKit appears to use a very optimized kind of streaming to fetch commits from the Subversion server, with a nice balance among network, memory and disk usage. (I guess it is the same engine used by FishEye, but I don't really know.)
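A rough sketch of what scanning repositories in parallel could look like, reusing the IncrementalScanner sketch above. The fixed pool size of 4 and the RepositoryConfig holder are assumptions for illustration; the fork's actual scheduling may well differ.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Illustrative parallel scan cycle over many repositories. */
public class ParallelRepositoryScanner {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final IncrementalScanner scanner = new IncrementalScanner();

    public void scanAll(List<RepositoryConfig> repositories) throws InterruptedException {
        for (final RepositoryConfig repo : repositories) {
            pool.submit(() -> {
                try {
                    // one slow repository no longer blocks the others
                    scanner.scan(repo.url, repo.lastIndexedRevision);
                } catch (Exception e) {
                    // a failure in one repository must not abort the whole cycle
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    /** Minimal illustrative holder for a repository's URL and scan state. */
    public static class RepositoryConfig {
        final String url;
        final long lastIndexedRevision;

        public RepositoryConfig(String url, long lastIndexedRevision) {
            this.url = url;
            this.lastIndexedRevision = lastIndexedRevision;
        }
    }
}

The design point is simply that one large or slow repository no longer holds up the whole indexing cycle, which is Luke's concern about a single big change-set.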
Other advantages are:
As it forks the upcoming 0.10.12 version of Atlassian's plug-in, it is a V2 plug-in, hence it does not require restarting the server. Simply install it on JIRA as any other V2 plug-in. Well, as mentioned, this is a temporary advantage until Atlassian releases the new version.
Lucene has been replaced by an Apache Derby database, and as the fork indexes the full history of the repositories, it would be very easy to build a reporting system around Subversion integrated with JIRA using standard SQL (see the example query further down).
Another advantage is that after a revision has been indexed, Subversion is no longer required for it. For example, Atlassian's implementation invokes Subversion to get the changed paths EVERY TIME a revision is going to be shown in JIRA.
Only the indexer has been replaced, so the rest is fully compatible with your current configuration: the same user interface for users and administrators, etc. Your current repository configuration is also compatible. So, if you want to give it a chance, remove your current plug-in, install the fork and wait for your repositories to be indexed.
The only disadvantage:
it might require a LOT of disk space compared to Atlassian's plug-in (because the whole history is indexed rather than only the commits related to JIRA issues).
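To give an idea of the reporting possibilities mentioned above, here is an illustrative query against an embedded Derby store via JDBC. The JDBC URL, table and column names are made up for the example and are not the fork's real schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Illustrative report over a full-history commit index kept in Apache Derby:
 * once every revision lives in a relational store, plain SQL can answer
 * questions the issue-only Lucene index could not.
 */
public class CommitReport {

    public static void main(String[] args) throws SQLException {
        // Embedded Derby connection (database path is hypothetical)
        try (Connection conn = DriverManager.getConnection("jdbc:derby:svnindex");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT author, COUNT(*) AS commits "
                   + "FROM svn_revisions WHERE repo_id = ? "
                   + "GROUP BY author ORDER BY 2 DESC")) {
            ps.setLong(1, 42L); // hypothetical repository id
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("author") + ": " + rs.getInt("commits"));
                }
            }
        }
    }
}

Any similar aggregate (commits per project, busiest paths, and so on) becomes a plain SQL query once the full history is in the database.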
-------
This is the link to the built plug-in on the Marketplace:
https://marketplace.atlassian.com/1211294
which is broken until Atlassian approves it. In the meantime you have to build it from the sources.
You guys look like experts in dealing with large numbers of repositories, so I encourage you to give the fork a chance ;)
Pablo.
...It requires JIRA 5, but if you need to work on JIRA 4, I could support it too.
The only problem I can see with this is that if these are large repositories, the poll check could end up running more or less continuously, depending on the time taken to gather updates and the number of repositories you have. With 600 repositories, even one second per repository adds up to 10 minutes per cycle, and a few seconds each would consume an entire hourly poll interval. This would obviously cause performance issues on your Jira instance.
Fisheye would be able to manage this better because it can use multiple threads for indexing repositories, so should there be a large change-set, the other repositories would still be updated.
Even though I do not have direct experience with the Jira Subversion Plugin for such an amount of repositories, I guess it would be better to use Fisheye/Crucible for the repository management and link Jira and Fisheye together, rather than overloading Jira with polling the logs of all these repositories.