Getting Jira Cluster Error in log file

Sachin Dhamale February 7, 2017

I have JIRA Data Center version 6.4.9, and I am facing an indexing failure issue. Whenever I do any bulk activity in JIRA, such as creating custom fields, the indexes get corrupted.

I am getting the following error in my log file.

2017-01-15 18:00:03,216 atlassian-scheduler-quartz1.clustered_QuartzSchedulerThread ERROR      [org.quartz.core.ErrorLogger] An error occured while scanning for the next trigger to fire.
org.quartz.JobPersistenceException: Couldn't acquire next trigger: Couldn't retrieve trigger: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: org.quartz.JobPersistenceException: Couldn't retrieve trigger: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: java.sql.SQLException: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.]]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2814)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2757)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3788)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2753)
    at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:263)
Caused by: org.quartz.JobPersistenceException: Couldn't retrieve trigger: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [See nested exception: java.sql.SQLException: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.]
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1596)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1572)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2792)
    ... 4 more
Caused by: java.sql.SQLException: Transaction (Process ID 169) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
    at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
    at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
    at net.sourceforge.jtds.jdbc.TdsCore.isDataInResultSet(TdsCore.java:838)
    at net.sourceforge.jtds.jdbc.JtdsResultSet.<init>(JtdsResultSet.java:149)
    at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:511)
    at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
    at org.quartz.impl.jdbcjobstore.StdJDBCDelegate.selectTrigger(StdJDBCDelegate.java:2112)
    at org.quartz.impl.jdbcjobstore.JobStoreSupport.retrieveTrigger(JobStoreSupport.java:1578)
    ... 6 more

 

It looks like a cluster-related error.

Can anybody tell me what the cause of this issue is?

 

Thanks,

Sachin

1 answer

crf
Atlassian Team
February 7, 2017

So, you have multiple things happening, and the exception you've reported probably isn't the problem.  This is Quartz (the scheduler that JIRA used to use and that I replaced in JIRA 7) attempting to poll the database for the next job to run and being unable to because something else is updating the scheduled jobs (and possibly that exact same one) at the same time.  You can see it getting caught and gracefully handled here, so I suspect that the Quartz error itself is probably harmless.
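
For context, a SQL Server "deadlock victim" error is designed to be retried: the database kills one of the two deadlocked transactions (vendor error code 1205) and the caller simply re-runs it. Below is a minimal sketch of that retry pattern in plain JDBC; it is only an illustration of why this kind of error is usually harmless, not JIRA's or Quartz's actual code.

    import java.sql.Connection;
    import java.sql.SQLException;

    public class DeadlockRetry {

        // SQL Server reports "chosen as the deadlock victim" with vendor error code 1205.
        private static final int SQLSERVER_DEADLOCK_VICTIM = 1205;
        private static final int MAX_ATTEMPTS = 3;

        interface SqlWork<T> {
            T run(Connection conn) throws SQLException;
        }

        // Runs the given unit of work, retrying a few times if SQL Server picks it
        // as the deadlock victim, and rethrowing any other failure immediately.
        static <T> T withDeadlockRetry(Connection conn, SqlWork<T> work) throws SQLException {
            SQLException last = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    conn.setAutoCommit(false);
                    T result = work.run(conn);
                    conn.commit();
                    return result;
                } catch (SQLException e) {
                    conn.rollback();
                    if (e.getErrorCode() != SQLSERVER_DEADLOCK_VICTIM) {
                        throw e; // a real failure, don't retry
                    }
                    last = e;    // deadlock victim: transient, try again
                }
            }
            throw last;
        }
    }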

I think you'll need to look for other error messages before anyone will be able to help.

Sachin Dhamale February 9, 2017

Thanks, Chris, for the reply.

We have JIRA Data Center with a large number of issues, around 700,000 (7 lakh), and we have the Structure plugin, the Zephyr plugin, and integrations with multiple tools.

So the issue is: whenever we do some bulk-change activity or make custom field changes, it affects the search side; we aren't able to search issues properly.

Even when we re-index after such activity, sometimes it works and sometimes it doesn't.

Can you please help me understand why the indexes get corrupted again and again after bulk activity, and why it doesn't work even after re-indexing? Could the Structure or Zephyr plugin be responsible for this?

 

crf
Atlassian Team
February 9, 2017

The problem is that there could be any number of reasons for this. Third-party plugins can also contribute to problems (though I hasten to add that Structure and Zephyr have both been around for a very long time and would probably have documented or addressed any known problem like this, so I would definitely look first at any other third-party or custom plugins you might have installed).

The Quartz exception you provided happens when it is trying to look for work to do.  It does not happen inside of a job that's actually doing work, so it cannot be the root cause of the indexing problem. Without a specific exception message that is contributing to the problem, it's difficult to know what to suggest.

  1. I would recommend that you take another look at your logs to see if there are errors reported by the actual indexing threads (a rough log-filtering sketch follows this list). The exception you showed indicates that there was a deadlock that SQL Server chose to resolve by breaking Quartz's transaction. The database's deadlock resolution algorithm tends to make fairly good choices, but maybe it sometimes chooses to terminate the transaction that's actually doing work instead. Or maybe there is something else going wrong that could be found in the logs. I can't see your logs, so I don't have any way to help you look for this.
  2. If you cannot identify anything better on your own, then it's probably best to contact support. It isn't appropriate for you to include your full logs here, as they can contain things like user email addresses and other possibly business-sensitive information, but our support team has a policy in place for handling such information responsibly, and they are accustomed to digging through logs to identify the root causes of problems like this.
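
If it helps with step 1, here is a rough, hypothetical helper (not an Atlassian tool) for pulling indexing-related ERROR/FATAL lines out of atlassian-jira.log, so you can review just those or attach them to a support ticket. The default path below is only an example and should be adjusted to your own JIRA home.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class IndexErrorGrep {
        public static void main(String[] args) throws IOException {
            // Path to atlassian-jira.log; the default below is just an example,
            // adjust it to your own JIRA home directory.
            String log = args.length > 0
                    ? args[0]
                    : "/var/atlassian/application-data/jira/log/atlassian-jira.log";
            try (Stream<String> lines = Files.lines(Paths.get(log))) {
                lines.filter(l -> l.contains("ERROR") || l.contains("FATAL"))
                     .filter(l -> l.toLowerCase().contains("index"))
                     .forEach(System.out::println);
            }
        }
    }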

 
