JIRA version: 4.4.1, with about 40,000 issues now (and increasing) and about 1,000 users.
JIRA server hardware environment:
Windows Server 2008 R2 Enterprise 64-bit, Intel Xeon E5520 @ 2.27GHz, 16 CPUs, 12GB of RAM,
with Maximum memory pool set to 1GB and Initial memory pool set to 256MB. Thread stack size is not set.
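For context: the "Maximum memory pool" and "Initial memory pool" fields in the Tomcat Windows service configuration correspond to the JVM's -Xmx and -Xms heap limits, and the thread stack size field to -Xss. A minimal Java sketch like the one below (the class name JvmSettingsCheck is just an illustration, and it only reports on the JVM it runs in) shows what limits a JVM actually received:

    import java.lang.management.ManagementFactory;

    public class JvmSettingsCheck {
        public static void main(String[] args) {
            // Maximum heap this JVM will use (what -Xmx / "Maximum memory pool" controls)
            long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            System.out.println("Max heap (MB): " + maxHeapMb);

            // Raw JVM arguments, to see whether -Xss (thread stack size) is set at all.
            // Options injected by the service wrapper may not all appear here.
            System.out.println("JVM args: "
                    + ManagementFactory.getRuntimeMXBean().getInputArguments());
        }
    }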
This KB article (https://confluence.atlassian.com/display/JIRAKB/OutOfMemory+Errors+Due+to+Running+Out+of+Native+Thread+Limitation) says the error is caused by RESTLET, but I can't find any log entry mentioning it.
This one (https://confluence.atlassian.com/display/FISHKB/Fix+Out+of+Memory+errors+by+increasing+available+memory#FixOutofMemoryerrorsbyincreasingavailablememory-OutOfMemoryError:unabletocreatenewnativethread) says it may be caused by the size of the stack per thread.
Can anyone help?
The log is as below:
2013-02-16 10:34:09,513 http-8079-28 ERROR [500ErrorPage.jsp] Exception caught in 500 page unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:640)
    at com.sun.jndi.ldap.Connection.<init>(Connection.java:218)
    at com.sun.jndi.ldap.LdapClient.<init>(LdapClient.java:118)
    at com.sun.jndi.ldap.LdapClientFactory.createPooledConnection(LdapClientFactory.java:46)
    at com.sun.jndi.ldap.pool.Connections.<init>(Connections.java:97)
    at com.sun.jndi.ldap.pool.Pool.getPooledConnection(Pool.java:114)
    at com.sun.jndi.ldap.LdapPoolManager.getLdapClient(LdapPoolManager.java:310)
    at com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1572)
    at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2652)
    at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:293)
    at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:175)
    at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:134)
    at com.sun.jndi.url.ldap.ldapURLContextFactory.getObjectInstance(ldapURLContextFactory.java:35)
    at javax.naming.spi.NamingManager.getURLObject(NamingManager.java:584)
    at javax.naming.spi.NamingManager.processURL(NamingManager.java:364)
    at javax.naming.spi.NamingManager.processURLAddrs(NamingManager.java:344)
    at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:316)
    at com.sun.jndi.ldap.LdapReferralContext.<init>(LdapReferralContext.java:93)
    at com.sun.jndi.ldap.LdapReferralException.getReferralContext(LdapReferralException.java:132)
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreReferrals(LdapNamingEnumeration.java:339)
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreImpl(LdapNamingEnumeration.java:208)
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreReferrals(LdapNamingEnumeration.java:362)
    at com.sun.jndi.ldap.LdapNamingEnumeration.hasMoreImpl(LdapNamingEnumeration.java:208)
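One way to sanity-check this (a rough sketch, not from the KB articles above): "unable to create new native thread" means the operating system refused to create another thread for the process, not that the Java heap is full. A throwaway program like the hypothetical ThreadLimitTest below, run on the same server but never against the production JIRA JVM, gives a rough idea of how many threads one process can actually create before the same error appears.

    // Hypothetical throwaway diagnostic: keep starting parked daemon threads until
    // the OS refuses to create another one, then report how many were created.
    public class ThreadLimitTest {
        public static void main(String[] args) {
            int created = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE); // park the thread forever
                            } catch (InterruptedException ignored) {
                                // exit quietly when interrupted
                            }
                        }
                    });
                    t.setDaemon(true);
                    t.start();
                    created++;
                }
            } catch (OutOfMemoryError e) {
                System.out.println("Thread creation failed after " + created
                        + " threads: " + e.getMessage());
            }
        }
    }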
This is indeed a strange issue: if you were running a 32-bit JVM it would be more understandable, but you appear to have a fully provisioned server and your instance is running on a 64-bit JVM, so we're going to need to dig a bit deeper to understand what is happening here. Can I ask you to open a ticket with Support, including a copy of your logs, so that we can investigate further?
All the best,
Maximum memory pool of 1GB: increase that to 2GB. I assume you're talking about the heap (-Xmx).
How many threads are configured per container? Is it the default (150)? If not, you may want to carefully lower this number. Windows has no official limit on threads, but a process with more than about 1,024 threads behaves very badly. If you accidentally changed this to a very high number, that could also explain it.
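To make the trade-off concrete (rough arithmetic only; the 1MB default -Xss for 64-bit HotSpot on Windows is an assumption): every thread needs its own native stack outside the Java heap, so the thread count and the stack size multiply together.

    public class StackMemoryEstimate {
        public static void main(String[] args) {
            int threads = 1024;       // the rough per-process limit mentioned above
            int stackSizeKb = 1024;   // assumed default -Xss for 64-bit HotSpot on Windows
            long stacksMb = (long) threads * stackSizeKb / 1024;
            System.out.println(threads + " threads x " + stackSizeKb
                    + "KB stacks ~ " + stacksMb + " MB of native (non-heap) memory");
        }
    }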
"How many threads are configured per container? Is it the default (150)? If not, you may want to carefully lower this number."
1. Sorry for my poor knowledge of threads and processes; can you tell me how to find out how many threads per process are running on the server?
2. Do I also have to set the stack size per thread to 512k?
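On question 1, a sketch of one way to read the thread count from inside a JVM; these are the same numbers jconsole's Threads tab reports when attached to the JIRA process, and Task Manager's "Threads" column gives the per-process count at the OS level.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadCountCheck {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // Counts for the JVM this code runs in
            System.out.println("Live threads:  " + threads.getThreadCount());
            System.out.println("Peak threads:  " + threads.getPeakThreadCount());
            System.out.println("Total started: " + threads.getTotalStartedThreadCount());
        }
    }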