I'm using the AWS CloudFormation template to set up Confluence Data Center on AWS.
I initially installed Confluence 6.2.2 and had it up and running. As we wanted to upgrade to the latest Confluence version, I deleted the stack and relaunched it with version 6.10.1.
Setup completed successfully and confluence is running. However, I noticed that the EFS volume, which is supposed to be symbolically linked to the 'shared-home' folder, is not mounted.
The EFS volume, which should appear under the '/media' folder, is not present, so I cannot enable clustering or collaborative editing.
I did not have this issue when deploying on the 6.2.2 version.
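For anyone hitting the same thing, this is roughly the check I ran on a node. It is shown here against a sample /proc/mounts line so it is self-contained; the fs-id and the /media/atl path are placeholder assumptions about the template's defaults, not confirmed values:

```shell
# Hedged sketch: detect whether an EFS (NFSv4) entry is present in the mount
# table. On a real node you would grep /proc/mounts itself; the sample line
# below is what a healthy node would roughly contain (fs-id/path assumed).
SAMPLE_MOUNTS="fs-12345678.efs.us-east-1.amazonaws.com:/ /media/atl nfs4 rw,relatime 0 0"

check_efs_mounted() {
    # Succeeds if an nfs4 entry is present in the mount table text passed in.
    echo "$1" | grep -q " nfs4 "
}

if check_efs_mounted "$SAMPLE_MOUNTS"; then
    echo "EFS mounted"     # prints "EFS mounted" for the sample line
else
    echo "EFS not mounted"
fi
```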
Hi @Yohaan Sunny,
The EFS volume was probably deleted as well when the stack was deleted.
Steps to upgrade Confluence DC.
Let me know of any questions.
Yes. I wanted the EFS deleted as well, since I wanted a fresh install of the new Confluence version. That is why I deleted the stack completely.
But when I deploy the template again, it should simply create a new EFS volume and mount it on the DC nodes. That is what did not happen.
I spent a whole day deploying and deleting stacks with different versions to find out why this happens, and I think I figured it out.
Weirdly enough, this issue occurs when I deploy the stack with the Route53 options provided. If I leave the hosted zone and subdomain options blank, the EFS is mounted correctly and the symlink to the shared-home folder is created.
The first time I deployed, I did not provide the Route53 values. I confirmed this by deploying the 6.2.2 stack with those values, and the EFS volume did not mount.
As a final test, I deployed 6.10.1 without the Route53 values, and the EFS mounted correctly with the symlink to shared-home.
This is still a problem, though, because now my server.xml file won't have the correct proxy name value!
I am not sure why the Route53 parameters would affect the EFS mount.
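For reference, the proxy settings I mean are the reverse-proxy attributes on the Tomcat Connector in server.xml. The sketch below patches a minimal sample file the way I would expect the template's user-data to do it when Route53 values are supplied; the attribute names are standard Tomcat, but the proxy name, port, and file contents are placeholder assumptions:

```shell
# Hedged sketch: add proxyName/proxyPort/scheme to a Tomcat Connector, as the
# template presumably does when Route53 values are given. Runs against a
# local sample file so it is self-contained; PROXY_NAME is a placeholder.
PROXY_NAME="confluence.example.com"
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
<Connector port="8090" maxThreads="48" protocol="HTTP/1.1"/>
EOF

# Insert the reverse-proxy attributes Confluence needs when served over HTTPS.
sed -i "s|<Connector port=\"8090\"|<Connector port=\"8090\" proxyName=\"${PROXY_NAME}\" proxyPort=\"443\" scheme=\"https\"|" "$SAMPLE"
cat "$SAMPLE"
```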
I haven't got an answer to this yet, but I think the reason the EFS volume does not get mounted when Route53 values are included is that the DNS name used is an external one rather than a Route53-registered DNS name.
When those values are included, the mount commands the AWS template uses are different from the normal NFS mount command.
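For context, a plain NFS mount of EFS normally resolves the file system's own built-in DNS name. The sketch below just builds that command as a string; the fs-id, region, target path, and mount options are placeholders taken from commonly documented EFS defaults, not what the template actually runs:

```shell
# Hedged sketch: the standard NFSv4.1 mount of an EFS file system by its
# built-in DNS name. fs-12345678, the region, and /media/atl are placeholders;
# the mount options are the commonly documented EFS defaults.
EFS_ID="fs-12345678"
REGION="us-east-1"
EFS_DNS="${EFS_ID}.efs.${REGION}.amazonaws.com"
MOUNT_CMD="mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${EFS_DNS}:/ /media/atl"
echo "$MOUNT_CMD"
```

If the template instead substitutes an external DNS name built from the hosted zone and subdomain, a mount command assembled this way could fail to resolve the file system, which would match the behaviour above.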