Initial import of repository to Crucible using light Fisheye fails with java.util.ConcurrentModificationException

I’m having problems with the initial import of my Perforce depot into Crucible using light Fisheye. (This is version fecru-2.10.3.) It’s a very large repository, so I’m starting only a few months back and using hundreds of exclude patterns to get it down to a manageable size. (Without the exclude patterns, I run out of memory before the import finishes on my machine with 16GB of RAM.)

After Crucible using light Fisheye finally downloads all the file names, I get this error in the logs:

2013-04-11 21:13:14,112 ERROR [InitialPinger1 try11] fisheye BaseRepositoryScanner-handleSlurpException - Problem processing revisions from repo try11 due to class java.util.ConcurrentModificationException - null
java.util.ConcurrentModificationException
at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1117)
at java.util.TreeMap$ValueIterator.next(TreeMap.java:1162)
at com.cenqua.fisheye.perforce.client.P4Client.addFileInfo(P4Client.java:281)
at com.cenqua.fisheye.perforce.client.P4Client.addChangeFileInfo(P4Client.java:197)
at com.cenqua.fisheye.perforce.P4Scanner.createInitialImport(P4Scanner.java:851)
at com.cenqua.fisheye.rep.RepositoryScanner.processRevisions(RepositoryScanner.java:132)
at com.cenqua.fisheye.rep.BaseRepositoryScanner.slurpRepository(BaseRepositoryScanner.java:254)
at com.cenqua.fisheye.rep.BaseRepositoryScanner.doSlurpTransaction(BaseRepositoryScanner.java:221)
at com.cenqua.fisheye.rep.BaseRepositoryScanner.ping(BaseRepositoryScanner.java:180)
at com.cenqua.fisheye.rep.BaseRepositoryEngine.doSlurp(BaseRepositoryEngine.java:92)
at com.cenqua.fisheye.rep.RepositoryEngine.slurp(RepositoryEngine.java:382)
at com.cenqua.fisheye.rep.ping.OneOffPingRequest.doRequest(OneOffPingRequest.java:28)
at com.cenqua.fisheye.rep.ping.PingRequest.process(PingRequest.java:58)
at com.cenqua.fisheye.rep.RepositoryHandle.processPingRequests(RepositoryHandle.java:198)
at com.cenqua.fisheye.rep.RepositoryHandle.access$100(RepositoryHandle.java:50)
at com.cenqua.fisheye.rep.RepositoryHandle$2.run(RepositoryHandle.java:156)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

And then Crucible using light Fisheye starts the import all over again from the beginning.

What’s going on? And, more importantly, how can I get past this?

3 answers

How many threads do you use for initial scan? What is the FECRU heap set to? There is no way a scan would consume 16 gigs unless something is seriously wrong.
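For reference, this stack trace is the classic symptom of a TreeMap being structurally modified while one of its fail-fast iterators is still walking it, which is why the thread count is the first thing to check. A minimal, hypothetical illustration of the failure mode (made-up map and keys, not FishEye's actual code):

import java.util.Map;
import java.util.TreeMap;

public class CmeDemo {
    public static void main(String[] args) {
        // Hypothetical stand-in for the per-change file-info map the scanner walks.
        Map<String, String> fileInfo = new TreeMap<String, String>();
        fileInfo.put("//depot/a.txt", "rev 1");
        fileInfo.put("//depot/b.txt", "rev 1");

        // TreeMap iterators are fail-fast: if the map is structurally modified
        // while an iteration is in progress (here by the same thread, but the
        // same check fires, on a best-effort basis, when a second thread does
        // the modifying), the next call to next() throws
        // java.util.ConcurrentModificationException.
        for (String rev : fileInfo.values()) {
            System.out.println(rev);
            fileInfo.put("//depot/c.txt", "rev 1"); // structural modification mid-iteration
        }
    }
}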


This looks like a bug, so please raise a support request at support.atlassian.com.

I raised support request CRC-5275. It’s not publicly viewable at this time, though.

Hopefully this comment appears under Eddie Webb’s comment. There’s no link to comment on a comment.

Addressing your comment back to front:

I don’t know if something is seriously wrong. I do know that it’s a very big repository. Before I used the hundreds of exclude patterns, I kept getting errors like

2013-04-07 02:16:02,769 ERROR [InitialPinger1 try11] fisheye BaseRepositoryScanner-handleSlurpException - Problem processing revisions from repo try11 due to class java.lang.OutOfMemoryError - Java heap space
java.lang.OutOfMemoryError: Java heap space

which would abort the initial import. I used FISHEYE_OPTS to gradually increase the Java heap, eventually reaching “-Xmx12288m” before I gave up. I logged all the Perforce commands, and I could tell that it got further and further into the repository as I increased the memory.

With the exclude patterns, though, I have FISHEYE_OPTS set to “-Xmx4096m” and things go fine until I get the java.util.ConcurrentModificationException.

As to the number of threads I use for the initial scan: I never set it, so I assume it was whatever the default is. I just checked http://localhost:8060/admin/viewServerSettings.do, and under Resource Limits both Initial Indexing Threads and Incremental Indexing Threads were not set. I’ve now changed them both to 1 and restarted the server, and it’s attempting the initial import again. We’ll see what happens in another 8 hours or so.
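If the root cause really is two indexing threads sharing that TreeMap, running with a single thread should sidestep the race, but the proper fix would have to land in the product itself, for example a weakly consistent concurrent map or iterating over a snapshot. A purely hypothetical sketch of that kind of change (the method names echo the stack trace, but the code is made up, not FishEye’s):

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SafeIterationSketch {
    // A concurrent sorted map: its iterators are weakly consistent and never
    // throw ConcurrentModificationException, even while other threads write.
    private final Map<String, String> fileInfo = new ConcurrentSkipListMap<String, String>();

    void addFileInfo(String path, String rev) {
        fileInfo.put(path, rev); // safe to call from any scanner thread
    }

    void processRevisions() {
        // Alternatively: take a snapshot and iterate that, so later writes to
        // fileInfo cannot disturb the walk.
        Map<String, String> snapshot = new TreeMap<String, String>(fileInfo);
        for (Map.Entry<String, String> entry : snapshot.entrySet()) {
            System.out.println(entry.getKey() + " @ " + entry.getValue());
        }
    }
}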

Some other data from http://localhost:8060/admin/sysinfo.do:

Resource Limits - initial threads: 1 (max)
Resource Limits - incremental threads: 1 (max)
Database Type: MySQL
Database Driver: com.mysql.jdbc.Driver
Database Url: jdbc:mysql://localhost:3306/fisheye
Database Version: 80
JDBC Pool (min): 5
JDBC Pool (max): 20
JDBC Pool (effective max): 18
JDBC Pool (partitions): 3
JDBC Pool (max per partition): 6
