
Restore XML backup gives error: Could not import data in table 'AO_9412A1_AOREGISTRATION' column #1, value is too big for column which size limit is 65535,

Niels van Drimmelen November 22, 2012

While doing a test migration from our Windows-based MSSQL 2008 installation to a Linux/MySQL installation, the XML restore (during the setup process) throws an error:

Import failed. Check your server logs for more information. com.atlassian.activeobjects.spi.ActiveObjectsImportExportException: There was an error during import/export with plugin Notifications and Tasks - Host Plugin(com.atlassian.mywork.mywork-confluence-host-plugin) #1.0.3:Could not import data in table 'AO_9412A1_AOREGISTRATION' column #1, value is too big for column which size limit is 65535, value is: {application:com.atlassian.mywork.providers.confluence,appId:,i18n:{:{com.atlassian.mywork.providers.confluence.blog.comment.aggregate:{0} comment on {1},com.atlassian.mywork.providers.confluence.action.comment.displayName:Comment,com.atlassian.mywork.providers.confluence.page.task.update.aggregate:{0} update to task on {1},com.atlassian.mywork.providers.confluence.page.edit.aggregatenew:{0} new edit on {1},com.atlassian.mywork.providers.confluence.blog.comment.aggregatenew:{0} new comment on {1},com.atlassian.mywork.providers.confluence.comment.mentions.user.title:{user} mentioned you in {title},com.atlassian.mywork.providers.confluence.comment.like.aggregatenews:{0} new likes of {1},com.atlassian.mywork.providers.confluence.blog.mentions.user.aggregate:{0} mention in {1},com.atlassian.mywork.providers.confluence.blog.comment.aggregates:{0} comments on {1},com.atlassian.mywork.providers.confluence.blog.task.update.aggregates:{0} updates to tasks on {1},com.atlassian.mywork.providers.confluence.blog.mentions.user.older:{0} older mention on this page,com.atlassian.mywork.providers.confluence.blog.share.aggregatenew:{0} new share of {1},com.atlassian.mywork.providers.confluence.page.task.update.aggregates:{0} updates to tasks on

4 answers

1 accepted

3 votes
Answer accepted
RianA
December 16, 2012

Hey guys,

This has been raised as a bug. For more information, please refer to the following bug report:
* https://jira.atlassian.com/browse/CONF-27513

Feel free to check the description, and please do comment and add yourself as a watcher for future updates.

1 vote
Andy Brook [Plugin People]
November 27, 2012

Your data is too big for the column; that's easy to see. This happens a lot when users exceed the developers' expectations of how a field will be used. I have found that AO will not make schema changes, even lossless ones (i.e. a field declared as bigger in later releases of the plugin). The solution is simply to change the database column type to TEXT, which translates to unlimited. AO will not, I think, complain or try to make any schema changes to revert it, and the problem will go away. I've done this a bunch of times...

You should probably back up your database first, just to be safe.
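
For reference, on MySQL that change would look something like the sketch below. The column name is only a placeholder guessed from the error output, and on MySQL specifically plain TEXT is itself capped at 65,535 bytes (the very limit in the error message), so MEDIUMTEXT or LONGTEXT is the 'unlimited-enough' choice there; check the real column definition first.

    -- Confirm the actual column name and its current definition first
    SHOW CREATE TABLE AO_9412A1_AOREGISTRATION;

    -- Widen the offending column; LONGTEXT is effectively unlimited (up to 4 GB).
    -- 'REGISTRATION' is a placeholder: substitute the column reported as #1 in the
    -- error, and carry over any NOT NULL/default from the SHOW CREATE TABLE output.
    ALTER TABLE AO_9412A1_AOREGISTRATION MODIFY COLUMN REGISTRATION LONGTEXT;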

Niels van Drimmelen November 28, 2012

Hi Andy,

Not sure if I read you correctly. Do you mean changing the data type to TEXT on the source SQL Server schema, or on the receiving MySQL schema?

Thanks!

Niels

Andy Brook [Plugin People]
November 28, 2012

Your receiving schema has the problem. You'll find a table with the given name; locate the column within it and update its type to allow 'more' data. When I ran into this I created some scripts; see this link.
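
Something along these lines (a rough sketch against MySQL's information_schema; 'confluence' stands in for whatever your database is actually called) will list the columns of that table with their current types, so you can see which one is capped at 65,535:

    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'confluence'              -- adjust to your database name
      AND TABLE_NAME   = 'AO_9412A1_AOREGISTRATION'
    ORDER BY ORDINAL_POSITION;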

Niels van Drimmelen November 28, 2012

I tried, but the change I made to the schema is reverted when I restore the XML-based backup. Interestingly enough, the plugin is a Confluence system plugin.

Andy Brook [Plugin People]
November 28, 2012

Hmm, interesting. A support request is needed, I think: https://support.atlassian.com

Niels van Drimmelen November 29, 2012

Yeah, I've already been working with support for a couple of days. The plan they came up with:

  1. Since having a different data type in your table creation might lead to multiple issues that we are not sure about, we highly recommend using the previous resolution (data migration with native database tools) only as a very last resort.
  2. The only other method to migrate your instance data is to use an XML restore.
  3. Hence we need to ensure that your XML file is valid and reliable.
  4. In order to confirm that, we would like to replicate the restoration issue on our end and see if we are able to investigate and resolve it.
  5. Please provide us your XML backup, without attachments, for further investigation.
  6. If the XML import is not something we are able to resolve, then we will use the migration-tool method and put a note on your instance so that the next engineer understands your situation (different data type).
0 votes
Michael Regelin
December 16, 2012

I read this post with a lot of interest.

We are going to import 5 huge spaces from TWiki to Confluence (between 400 MB and 1 GB).

The UWC importer gives me some xml.zip files which I then import into Confluence (copying them to the restore folder and then restoring from the admin panel).

Is there a cleaner way to import the xml.zip files generated by the UWC?

Thanks if someone has advice on this.

Sincerely,

Michael

Niels van Drimmelen January 1, 2013

Hi Michael,

My suggestion would be to contact Atlassian support about your issue, or create a new question. Your question is a bit off-topic here.

Regards

Niels

0 votes
CharlesH
November 22, 2012

Hi Niels,

Some basic things to check which may help:

  • Is the version of Confluence the same on both instances? A minor difference should be okay (e.g. 4.2.1 to 4.2.2), but not anything more significant than that.
  • Is your database encoding the same on both instances (e.g. UTF-8)? A quick check for the MySQL side is sketched below.
  • Have you got the same plugins on both instances?
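
A rough sketch of the encoding check on MySQL (the database name 'confluence' is just an assumption; substitute your own):

    -- Server- and connection-level character set settings
    SHOW VARIABLES LIKE 'character_set%';

    -- Default character set and collation of the Confluence database itself
    SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
    FROM information_schema.SCHEMATA
    WHERE SCHEMA_NAME = 'confluence';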

Regards,

Charles

Niels van Drimmelen November 27, 2012

Working with Atlassian support on this issue.

Atlassian strongly advises against using the XML backup and restore mechanism, so I'm in the middle of trying to migrate using database tools. While I got it to work, one issue remains:

The SQL Server-based database schema differs from a MySQL-based schema. When you do a migration with database tools, you end up with a MySQL schema that follows the original SQL Server schema.

The question is: does Atlassian support a MySQL schema that differs from the one created by the Confluence installer? I can imagine this could cause issues in the future when upgrading to newer versions.
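
For comparison, the column definitions on the MySQL side can be dumped with something like this rough sketch ('confluence' again stands in for the actual database name) and diffed against the schema a fresh Confluence installer creates:

    -- Dump every column definition so it can be diffed against an
    -- installer-created schema
    SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE
    FROM information_schema.COLUMNS
    WHERE TABLE_SCHEMA = 'confluence'
    ORDER BY TABLE_NAME, ORDINAL_POSITION;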

Niels
