We upgraded to JIRA 5.1.1 a while back (on Solaris 10). At least once a day JIRA becomes inaccessible. I have checked the CPU usage and it is normal. We have to restart the JIRA service to get it working again. I am not able to find any information in the logs. Is anyone else facing this problem? Please help.
There is no way to tell without reading the log files, and possibly improving monitoring so that you can find patterns of usage.
You really do need to look for warnings and errors in the logs at the time it fails or stops. Even if there is very little, share it here.
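To get that usage pattern, a cron-driven probe is one option. This is only a rough sketch: the base URL, the probed path, and the output file are all assumptions to adjust for your instance.

```shell
#!/bin/sh
# Hypothetical base URL -- point this at your own JIRA instance.
BASE_URL=${BASE_URL:-http://localhost:8080}
# Probe the front page with a 10-second cap; an HTTP code of 000 means
# the server did not accept the connection at all.
STATUS=$(curl -s -o /dev/null -m 10 -w '%{http_code}' "$BASE_URL/")
# Append a timestamped status line so outages can later be correlated
# with entries in atlassian-jira.log.
echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') status=$STATUS" >> /tmp/jira-availability.log
```

Run it every minute from cron and you will at least know exactly when JIRA stopped responding.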
Hi Nic
Thank you for the fast reply :) This time I will capture the errors when it hangs and post them here.
Maybe you can post the previous errors from the log file that stopped your JIRA. You can find it here:
JIRA logs - <jira_home>/log/atlassian-jira.log
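A quick way to pull the most recent warnings and errors out of that file (the home path below is an assumption; substitute your actual <jira_home>):

```shell
#!/bin/sh
# Assumed JIRA home directory -- replace with your real <jira_home> path.
JIRA_HOME=${JIRA_HOME:-/var/atlassian/jira-home}
LOG="$JIRA_HOME/log/atlassian-jira.log"
# Show recent ERROR/WARN entries with a little surrounding context;
# the lines just before the hang are usually the interesting ones.
grep -n -B1 -A3 -E 'ERROR|WARN' "$LOG" 2>/dev/null | tail -n 100
```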
Hi, please find the log from the JIRA home directory (the class names in these frames were lost when I pasted the trace, only the file and line numbers survived):
(DelegatingPluginFilter.java:66)
(JWDSendRedirectFilter.java:25)
(DelegatingPluginFilter.java:74)
(IteratingFilterChain.java:42)
(ServletFilterModuleContainerFilter.java:77)
(ServletFilterModuleContainerFilter.java:63)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(ChainedFilterStepRunner.java:78)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(AbstractCachingFilter.java:33)
(AbstractHttpFilter.java:31)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(AbstractEncodingFilter.java:41)
(AbstractHttpFilter.java:31)
(PathMatchingEncodingFilter.java:49)
(AbstractHttpFilter.java:31)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(ActiveRequestsFilter.java:346)
(ActiveRequestsFilter.java:463)
(ActiveRequestsFilter.java:173)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(JiraStartupChecklistFilter.java:75)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(MultiTenantServletFilter.java:91)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(ChainedFilterStepRunner.java:78)
(ApplicationFilterChain.java:235)
(ApplicationFilterChain.java:206)
(StandardWrapperValve.java:233)
(StandardContextValve.java:191)
(StandardHostValve.java:127)
(ErrorReportValve.java:102)
(StandardEngineValve.java:109)
(AccessLogValve.java:554)
(CoyoteAdapter.java:298)
(Http11Processor.java:859)
(Http11Protocol.java:588)
(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.atlassian.sal.api.net.ResponseConnectTimeoutException: The host did not accept the connection within timeout of 10000 ms
at com.atlassian.sal.core.net.HttpClientRequest.executeAndReturn(HttpClientRequest.java:311)
at com.atlassian.plugins.rest.module.jersey.JerseyRequest.executeAndReturn(JerseyRequest.java:161)
It looks like there might be network issues with connections to other systems. We probably need more of the log, though; that looks like one chunk of it, and the part at the top is missing.
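That "did not accept the connection within timeout of 10000 ms" error is an outbound HTTP call from JIRA (an application link, gadget feed, or similar) timing out. From the JIRA server itself you can time the connection to the remote system. The URL below is a placeholder; substitute the host your instance actually calls.

```shell
#!/bin/sh
# Hypothetical remote endpoint -- use the host from your failing app link.
TARGET=${TARGET:-https://remote.example.com}
# Time the TCP connect with the same 10s cap JIRA uses; http_code=000
# means no connection was established at all (firewall, DNS, or dead host).
curl -s -o /dev/null -m 10 \
  -w 'http_code=%{http_code} time_connect=%{time_connect}s\n' "$TARGET" || true
```

If the connect time is near the 10-second cap, look at firewalls or DNS between the JIRA server and that host rather than at JIRA itself.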