
XML Export stuck and Confluence Crashes

Dominic Cardellino January 20, 2020

Confluence Version: 6.14.3

Server OS: Ubuntu 18.04

Database: Postgresql 9.6

Hi there,

 

I'm trying to export Confluence as XML in order to migrate the database from MySQL to PostgreSQL. After 1-2 hours of exporting (via the admin interface), Confluence crashes because it runs out of memory.

I tried a mysqldump with the following parameters:

mysqldump --compatible=postgresql --default-character-set=utf8 -u<confdbuser> -p<password> --max_allowed_packet=512M --single-transaction --databases confluence > ~/confluence_pg.sql

 

Then I migrated it to a Postgres instance with pgloader. After the migration, when starting Confluence connected to the PostgreSQL database, I receive the following errors:

 

An error occurred when getting the next batch for consumer type: DATA_UPLOAD. Message: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: wrong column type encountered in column [attachmentId] in table [MIG_ATTACHMENT]; found [numeric (Types#NUMERIC)], but expecting [int8 (Types#BIGINT)]
com.atlassian.util.concurrent.LazyReference$InitializationException: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: wrong column type encountered in column [attachmentId] in table [MIG_ATTACHMENT]; found [numeric (Types#NUMERIC)], but expecting [int8 (Types#BIGINT)]
at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:149)
at com.atlassian.util.concurrent.LazyReference.get(LazyReference.java:112)
at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:57)
at com.atlassian.migration.agent.store.jpa.impl.DefaultSessionFactorySupplier.get(DefaultSessionFactorySupplier.java:36)
at com.atlassian.migration.agent.store.jpa.impl.DefaultPluginTransactionTemplate.on(DefaultPluginTransactionTemplate.java:29)
at com.atlassian.migration.agent.store.tx.PluginTransactionTemplate.write(PluginTransactionTemplate.java:24)
at com.atlassian.migration.agent.queue.QueueBroker.getNextBatch(QueueBroker.java:119)
at com.atlassian.migration.agent.queue.QueueBroker.dispatchBatchToConsumer(QueueBroker.java:113)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at com.atlassian.migration.agent.queue.QueueBroker.runJob(QueueBroker.java:100)
at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.doRunJob(JobRunnerWrapper.java:120)
at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.lambda$runJob$0(JobRunnerWrapper.java:91)
at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContextInternal(VCacheRequestContextManager.java:87)
at com.atlassian.confluence.impl.vcache.VCacheRequestContextManager.doInRequestContext(VCacheRequestContextManager.java:71)
at com.atlassian.confluence.impl.schedule.caesium.JobRunnerWrapper.runJob(JobRunnerWrapper.java:91)
at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:134)
at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:106)
at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:90)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:443)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeLocalJob(CaesiumSchedulerService.java:410)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:388)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService$1.consume(CaesiumSchedulerService.java:285)
at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService$1.consume(CaesiumSchedulerService.java:282)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:65)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:59)
at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:34)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: wrong column type encountered in column [attachmentId] in table [MIG_ATTACHMENT]; found [numeric (Types#NUMERIC)], but expecting [int8 (Types#BIGINT)]

 

Is there a correct way to migrate the Confluence database from MySQL to PostgreSQL?

 

Kindest regards,

 

Dominic

1 answer

0 votes
JP _AC Bielefeld Leader_
Rising Star
January 21, 2020

Hi,

No. The only way is to create a valid XML backup from Confluence and to import it using the new database.

Creating a PostgreSQL-compatible MySQL dump will not work.

Did you execute the backup directly on the server, as suggested in:

https://confluence.atlassian.com/doc/manually-backing-up-the-site-152405.html
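For reference, the server-side manual backup that the linked page describes could be sketched roughly as follows. Note that paths, the service name, and credentials here are assumptions for a typical Linux install, not documented values:

```shell
# Hedged sketch of a manual cold backup per the Atlassian doc linked above.
# Service name, DB credentials, and paths are assumptions -- adjust to your setup.
systemctl stop confluence

# Dump the database (MySQL in this case).
mysqldump -u confdbuser -p --single-transaction confluence > /backup/confluence-db.sql

# Archive the Confluence home directory (attachments, config, indexes).
tar -czf /backup/confluence-home.tar.gz /var/atlassian/application-data/confluence

systemctl start confluence
```

Stopping Confluence first keeps the database dump and home directory consistent with each other.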

Best

JP

Dominic Cardellino January 21, 2020

I guess I tried it in the past, but I will try it again today.

Do I have to execute a curl request with basic auth to start the backup process locally on the server (no GUI)?
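Such a curl call could look roughly like the sketch below. This is a hypothetical example only: the form field and instance URL are assumptions based on the admin UI rather than a documented API, so verify them against your Confluence version before relying on it.

```shell
# Hypothetical sketch -- endpoint form fields are assumptions, not a documented API.
BASE_URL="http://localhost:8090"   # assumed instance URL
AUTH="admin:secret"                # assumed admin credentials

# os_authType=basic allows basic auth on Confluence Server;
# the X-Atlassian-Token header bypasses the XSRF form check.
curl -u "$AUTH" \
     -H "X-Atlassian-Token: no-check" \
     --data "backupAttachments=true" \
     "$BASE_URL/admin/backup.action?os_authType=basic"
```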

 

Best,

 

Dominic

Joseph March 7, 2020

@Dominic Cardellino did you manage to create the XML backup properly? I am having the same problem :(

Accessing the node as directly as I can would be http://node1:8090/admin/backup.action -- the server itself has no GUI and I didn't find a way to trigger the export from the CLI. Unfortunately, this ends in Java out-of-memory errors pretty fast, and I need to restart Confluence manually.

In the end I want to migrate from MySQL 5.6 to PostgreSQL 9 - did you manage to find another way by any chance?

Gaven Ray November 22, 2021

I am having the same problem. I am trying to create an XML export of prod to import into our dev site to test PostgreSQL. The XML export times out both with and without attachments.

Joseph November 22, 2021

In the end I was successful because I threw all my available memory at the VM and made sure to run the export after business hours. I also tuned MySQL settings after consulting a DBA:

  • Increased the allowed packet size and timeouts in MySQL
  • Enabled auto-reconnect in JDBC
  • Granted 32 GB of memory to the VM and 16 GB to the JVM

The export took around 3 hours and was 1 GB in size. I probably didn't need to grant the whole 32 GB to the VM, but the 16 GB for the JVM were necessary. I also kept an eye on the log files during the export to see if the process might have gotten stuck at some point. (Luckily it didn't!)
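In concrete terms, the three changes above might look like the fragment below. The heap sizes mirror the comment; the specific MySQL timeout values are illustrative assumptions to tune with your DBA:

```shell
# --- <confluence-install>/bin/setenv.sh: JVM heap, per the 16 GB above ---
CATALINA_OPTS="-Xms4g -Xmx16g ${CATALINA_OPTS}"

# --- my.cnf, [mysqld] section: packet size and timeouts (values are
#     illustrative assumptions) ---
# max_allowed_packet = 512M
# net_read_timeout   = 600
# net_write_timeout  = 600

# --- JDBC URL in confluence.cfg.xml: enable auto-reconnect ---
# jdbc:mysql://localhost/confluence?autoReconnect=true
```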


I set up a new Confluence installation with a separate PostgreSQL database and again granted the JVM 16 GB of memory. This was necessary because otherwise I ran into issues during the import process as well. Importing the 1 GB XML file into the new installation took around 1 hour.
