As Confluence recently got zero-downtime upgrades for feature releases (https://jira.atlassian.com/browse/CONFSERVER-52686), I started wondering again whether I really like ZDU, independent of the marketing phrases. It sounds absolutely good to reduce downtime and impact for end users, specifically on large instances, but on the other hand it brings new challenges.
And the main challenge, in my opinion, is keeping backups consistent, and, in the worst case, restoring them when things go wrong.
The internal upgrade tasks themselves have become more stable over the years, but there is still a chance they might fail.
With Confluence 7.13 and some garbage data (Gliffy) in the bandana table, the XStream upgrade task failed and the instance stopped immediately. So these cases still happen these days.
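For cases like the one above, a pre-upgrade sanity check can at least surface suspicious rows before they break an upgrade task. A rough sketch, assuming a PostgreSQL backend and a database named `confluence`; the table and column names match Confluence's BANDANA schema, but the `ILIKE` pattern is only an illustrative guess at what "garbage Gliffy data" looks like:

```shell
#!/bin/sh
# List bandana entries that mention Gliffy so an admin can review
# (and, if appropriate, clean) them before running the upgrade.
# DB name and connection details are assumptions, adjust to your setup.
psql -d confluence -c "
  SELECT bandanacontext, bandanakey
  FROM bandana
  WHERE bandanakey ILIKE '%gliffy%'
     OR bandanavalue ILIKE '%gliffy%';"
```

Whether a row is actually garbage still needs a human decision, so this only narrows down what to look at, it does not decide for you.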
What do we need for a consistent backup (Confluence as an example)? Depending on the size of the instance, it is challenging to keep everything consistent, even with incremental backups.
Putting Confluence into read-only mode would mitigate this and allow a nearly consistent backup while everything keeps running, but it also impacts end users, as they can no longer edit or comment.
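The read-only window described above could be used roughly like this. A minimal sketch, assuming a PostgreSQL backend and the default home directory path; the paths, database name, and backup root are examples, and toggling read-only mode itself is left as a manual admin step since I would not rely on automating it here:

```shell
#!/bin/sh
# Sketch: take a near-consistent Confluence backup during a read-only window.
# Assumptions: PostgreSQL DB "confluence", home directory at the default
# Linux path, backups under /backup/confluence. Adjust for your environment.
set -eu

BACKUP_ROOT="/backup/confluence"
HOME_DIR="/var/atlassian/application-data/confluence"
DEST="$BACKUP_ROOT/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$DEST"

# (Admin enables read-only mode in the UI before this script runs.)

# 1. Dump the database; pg_dump produces a single consistent DB snapshot.
pg_dump -Fc -f "$DEST/confluence.dump" confluence

# 2. Copy the (shared) home directory, attachments included.
rsync -a --delete "$HOME_DIR/" "$DEST/home/"

# 3. A second rsync pass is cheap and narrows the window in which
#    attachments on disk and rows in the dump could disagree.
rsync -a --delete "$HOME_DIR/" "$DEST/home/"

# (Admin disables read-only mode again afterwards.)
```

Even with read-only mode the DB dump and the home directory copy are taken at slightly different moments, which is why I call the result "nearly" consistent rather than consistent.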
So, what's your opinion on ZDU and backups?