I want to migrate some contents from my MediaWiki to Confluence.
I researched the UWC converter and tried it out with my local instances, where it worked perfectly. But when I tried it on our production instances it started giving errors. After investigating, I found that MediaWiki stores its pages in the "text" table of the database.
I found that our MediaWiki's "text" table stores pages with "utf-8,gzip" flags, while my local instance stores them as plain "utf-8". So whenever I export MediaWiki content with UWC, it produces text files containing binary (compressed) data, and importing those pages into Confluence then fails with errors.
Can anyone please help me resolve this error, or suggest another way to address the problem?
Not a tested solution, but I'd try migrating the content by writing a PHP script that decompresses the gzipped rows with gzinflate() and re-inserts the plain text into an exact copy of the original text table. Then, for the duration of the migration, switch between the two tables. An ugly workaround, but it could do the trick.
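To illustrate the decompression step, here is a minimal Python sketch (untested against a real MediaWiki database). It assumes the standard MediaWiki convention that the "gzip" entry in a row's flags means the page text was compressed with PHP's gzdeflate(), i.e. a raw DEFLATE stream with no header, which Python's zlib can unpack with wbits=-15. The function name and the simulated row are illustrative, not part of MediaWiki's API:

```python
import zlib

def decompress_mediawiki_text(blob: bytes, flags: str) -> str:
    """Turn one old_text blob from MediaWiki's text table into plain text.

    If the row's flags include "gzip", the blob is a raw DEFLATE stream
    (PHP gzdeflate has no zlib/gzip header), so decompress with wbits=-15.
    """
    if "gzip" in flags.split(","):
        blob = zlib.decompress(blob, -15)  # raw deflate, no header
    return blob.decode("utf-8")

# Simulate what gzdeflate() would have stored for one page:
page = "== Heading ==\nSome wiki text."
co = zlib.compressobj(9, zlib.DEFLATED, -15)  # raw deflate, like gzdeflate()
compressed = co.compress(page.encode("utf-8")) + co.flush()

print(decompress_mediawiki_text(compressed, "utf-8,gzip"))  # original text back
```

In the real PHP script you would loop over the text table rows, apply gzinflate() to rows flagged "gzip", and write the result into the copy of the table with the "gzip" flag removed, so UWC sees only plain UTF-8 text.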