Reindexing all JIRA issues fails with ClosedByInterruptException

We have a JIRA instance (JIRA 5.2.11) with a service, implemented to re-index all issues by calling issueIndexManager.reIndexAll(), that runs once every night (a sketch of the service follows the stack trace). The service sometimes fails with the error below:

com.atlassian.jira.util.RuntimeIOException: java.nio.channels.ClosedByInterruptException
    at com.atlassian.jira.index.WriterWrapper.commit(WriterWrapper.java:136)
    at com.atlassian.jira.index.DefaultIndexEngine$WriterReference.commit(DefaultIndexEngine.java:220)
    at com.atlassian.jira.index.DefaultIndexEngine$FlushPolicy$2.commit(DefaultIndexEngine.java:60)
    at com.atlassian.jira.index.DefaultIndexEngine$FlushPolicy.perform(DefaultIndexEngine.java:84)
    at com.atlassian.jira.index.DefaultIndexEngine.write(DefaultIndexEngine.java:154)
    at com.atlassian.jira.index.DefaultIndex.perform(DefaultIndex.java:32)
    at com.atlassian.jira.index.QueueingIndex$Task.index(QueueingIndex.java:144)
    at com.atlassian.jira.index.QueueingIndex$Task.run(QueueingIndex.java:125)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:679)
    at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
    at org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:94)
    at org.apache.lucene.store.IndexOutput.copyBytes(IndexOutput.java:176)
    at org.apache.lucene.index.CompoundFileWriter.copyFile(CompoundFileWriter.java:235)
    at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:201)
    at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:598)
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3524)
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3489)
    at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3352)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3425)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3407)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3391)
    at com.atlassian.jira.index.WriterWrapper.commit(WriterWrapper.java:132)
    ... 8 more
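
For reference, the nightly service is essentially equivalent to the following minimal sketch (assuming the JIRA 5.2 Java API; the class name, logger, and configuration descriptor are illustrative, not our exact code):

    import org.apache.log4j.Logger;

    import com.atlassian.configurable.ObjectConfiguration;
    import com.atlassian.configurable.ObjectConfigurationException;
    import com.atlassian.jira.component.ComponentAccessor;
    import com.atlassian.jira.issue.index.IssueIndexManager;
    import com.atlassian.jira.service.AbstractService;

    public class ReindexAllService extends AbstractService {
        private static final Logger LOG = Logger.getLogger(ReindexAllService.class);

        @Override
        public void run() {
            try {
                IssueIndexManager indexManager =
                        ComponentAccessor.getComponent(IssueIndexManager.class);
                // Rebuild the Lucene index for every issue; returns the elapsed time in ms.
                long elapsed = indexManager.reIndexAll();
                LOG.info("Full reindex completed in " + elapsed + " ms");
            } catch (Exception e) {
                // This is where the RuntimeIOException above surfaces.
                LOG.error("Full reindex failed", e);
            }
        }

        @Override
        public ObjectConfiguration getObjectConfiguration() throws ObjectConfigurationException {
            // "reindexallservice.xml" is a placeholder configuration descriptor.
            return getObjectConfiguration("REINDEXALLSERVICE", "reindexallservice.xml", null);
        }
    }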

Because of this error, filters and dashboards do not return all the issues they should, since the index is left partially broken. The next overnight re-index fixes it, provided it finishes without errors. We hit this error during re-indexing 2-3 times a month, which is very annoying for the users.

Any help will be greatly appreciated.

Thanks.

2 answers

4 votes

This is an old thread, but for anyone who comes across it (or if it's still happening to you, Benu): we have tracked this down to a bug in JIRA caused by a race condition:

https://jira.atlassian.com/browse/JRA-41409

/me clicks "bookmark" icon.

Oh yes!

0 votes

Argh! Why are you indexing every night? That is screaming that you have a completely broken installation. Stop. Doing. That. Now. You need to look at why you think you want to do this, because it is wrong. Then sack the person who set it up and the manager who authorised it, because they were wrong, and then fix the real problem.

Thanks for your response, Nic.

The JIRA instance we are experiencing this problem with is the largest of all of ours, with over 3 million issues. The indexing service was set up a few years back because of 'Index timed out' errors we were seeing at the time, when the instance (then JIRA 3.x) had just over 1 million issues. The instance had to be restarted 3-4 times a day, which left a lot of issues with stale statuses in search results and no clue as to which ones were broken; the only way to fix them was to re-index all issues, which could not be done during the day.

Now that it's a bit more stable, I can definitely look at reducing the indexing frequency. But that does not explain the 'ClosedByInterruptException' that happens intermittently during the full re-index.

Something is interrupting the thread that is writing data to the Lucene index. In the long run, you'll need to do some quite detailed analysis of the root cause: take thread dumps while the re-index is running to see what is going on (one way to do that is sketched below), increase monitoring on the disks, check that your hardware is appropriate, and so on. In the short term, stop re-indexing all the time: wait until the next successful run and then turn the service off. You simply shouldn't be doing it; all you're doing is taking a good index and destroying it when the error occurs. For 3 million issues, you should also be looking at Jira Data Center, as a standalone instance isn't going to cut it.
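
If attaching jstack to the JIRA process is awkward, a thread dump can also be taken from inside the JVM (for example, from a periodic job). A minimal sketch, with the output destination left up to you:

    import java.util.Map;

    public class ThreadDumper {
        /** Prints the stack of every live thread - a poor man's jstack. */
        public static void dumpAllThreads() {
            for (Map.Entry<Thread, StackTraceElement[]> entry
                    : Thread.getAllStackTraces().entrySet()) {
                Thread t = entry.getKey();
                System.out.printf("%n\"%s\" id=%d state=%s%n",
                        t.getName(), t.getId(), t.getState());
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }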

Yes, the thread is being interrupted (https://issues.apache.org/jira/browse/LUCENE-4638), but I am not sure what is causing the interruption; I will keep looking for the root cause. We are looking at reducing the indexing frequency in the next release, and we are also looking at the Jira Data Center option, which requires us to upgrade. Thanks.
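
For what it's worth, the JDK behaviour behind this is easy to reproduce without Lucene or JIRA: FileChannel is an interruptible channel, so interrupting a thread that is performing channel I/O closes the channel and raises ClosedByInterruptException. A minimal standalone demo (file name and sizes are arbitrary):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class InterruptDemo {
        public static void main(String[] args) throws Exception {
            Path file = Files.createTempFile("interrupt-demo", ".bin");
            Files.write(file, new byte[1024 * 1024]);

            Thread reader = new Thread(() -> {
                ByteBuffer buf = ByteBuffer.allocate(8192);
                try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                    while (true) {                  // read until interrupted
                        if (ch.read(buf) == -1) {
                            ch.position(0);         // wrap around at end of file
                        }
                        buf.clear();
                    }
                } catch (IOException e) {
                    // The interrupt closes the channel and the read fails with
                    // java.nio.channels.ClosedByInterruptException - the same
                    // exception NIOFSDirectory reports in the stack trace above.
                    System.out.println("Reader died with: " + e);
                }
            });

            reader.start();
            Thread.sleep(100);   // let the read loop get going
            reader.interrupt();  // whatever is interrupting JIRA's indexing thread does this
            reader.join();
            Files.deleteIfExists(file);
        }
    }

Note that the interrupt does not have to land mid-read: if the interrupt flag is already set when the thread enters the channel operation, the channel is closed immediately, which is what makes this kind of race possible.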
