Hello Atlassian Community,
I'm currently experiencing challenges with the migration assistant while attempting to migrate a small-sized Confluence dataset (approximately 40 spaces). During the migration process, I've observed a significant increase in the thread count of the JVM, rising from around 200 to over 1100 native threads. Consequently, the data migration is encountering partial failure with the error "java.lang.OutOfMemoryError: unable to create new native thread."
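For reference, here is roughly how the native thread count of the Confluence JVM can be observed while the migration runs. This is a quick sketch that assumes a Linux host and that the Confluence Tomcat process can be identified by its `Bootstrap` main class; adjust the lookup for your environment:

```sh
# Find the PID of the Confluence JVM (assumes a single Tomcat Bootstrap process)
CONFLUENCE_PID=$(pgrep -f "org.apache.catalina.startup.Bootstrap")

# Current number of native threads (lightweight processes) in that JVM
ps -o nlwp= -p "$CONFLUENCE_PID"

# Alternative: poll the thread count every 5 seconds during the migration
watch -n 5 "grep Threads /proc/$CONFLUENCE_PID/status"
```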
How can you make the migration assistant use fewer worker threads?
I don't think there is any setting for it.
I know the migration assistant does several things in parallel, so it doesn't surprise me that it spawns multiple threads (but 900 does seem like a lot).
What's the heap setting currently? In step 12 here they recommend at least 4 GB of heap: https://support.atlassian.com/migration/docs/confluence-pre-migration-checklist#12.-Check-your-Heap-Allocation-
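The heap is usually raised in `<confluence-install>/bin/setenv.sh`. A minimal sketch of the usual approach, assuming the standard memory variables (names and defaults can differ between Confluence versions):

```sh
# In <confluence-install>/bin/setenv.sh: raise the heap to 4 GB
JVM_MINIMUM_MEMORY="4096m"
JVM_MAXIMUM_MEMORY="4096m"
```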
Thank you for your response. As expected, configuring the Java Heap to 4GB or even 6GB did not resolve the issue. However, I found a solution that worked for me:
I realized that there are various undocumented settings that can potentially reduce thread usage. In my case, adding the following lines to `setenv.sh` proved to be effective:
```sh
# Cap the processor count reported to the JVM
CATALINA_OPTS="-XX:ActiveProcessorCount=1 ${CATALINA_OPTS}"

# Limit the migration assistant's concurrent space-import requests and step-execution threads
CATALINA_OPTS="-Dmax.concurrent.initiate.space.import.requests=1 -Dmax.step.execution.threads=1 ${CATALINA_OPTS}"
```
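Note that `setenv.sh` changes only take effect after a Confluence restart, and the lines typically need to appear before the point where `CATALINA_OPTS` is exported in that file.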
Please note that the ActiveProcessorCount setting is beneficial only if the migration log displays the message "info CPU Statistics are not enabled, will use system CPU." In my case, the system CPU count was 16. Therefore, I believe that ActiveProcessorCount had a substantial impact.
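If you want to check whether `ActiveProcessorCount` is likely to matter on your instance, you can compare the system CPU count with what the log reports. A quick sketch; the log file location below is an assumption and may differ in your setup:

```sh
# CPU count the JVM falls back to when CPU statistics are not enabled
nproc

# Look for the fallback message in the Confluence application log
# ($CONFLUENCE_HOME is a placeholder for your Confluence home directory)
grep "CPU Statistics are not enabled" "$CONFLUENCE_HOME/logs/atlassian-confluence.log"
```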
With these settings in place, the native thread count peaked at approximately 950.