Under Confluence 7.19.x, why is there a distinction between
/var/atlassian/application-data/confluence/log
and
/var/atlassian/application-data/confluence/logs? "log" seems to contain just the "jfr" and "audit" folders. "logs" seems to contain all the "regular" logs.
Is it safe/advisable to mount these on 1) an NFS share that is 2) the same for both "log" and "logs"?
I guess the real question (and what I am trying to achieve) is: what is the best approach to getting all logs from an EKS-hosted instance into a log aggregation service such as AWS CloudWatch?
Thanks!
/var/atlassian/application-data/confluence is the local-home volume. If you scale your StatefulSet, the pods will overwrite it. I think the best way is to have a sidecar container that tails selected logs and sends them to the logging backend.
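For what it's worth, here is a minimal, self-contained sketch of that sidecar pattern: two containers sharing one volume, where the second container only tails a log file that the first one writes. Everything in it is made up for illustration (the pod name, the busybox images, the emptyDir standing in for the real local-home PVC) and it is not the Confluence Helm chart; in a real deployment the tailer container would run fluent-bit or fluentd rather than tail.
# Toy demonstration of a log-tailing sidecar sharing a volume with the main container.
# All names and images here are hypothetical stand-ins, not the Confluence chart.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-log-tail-demo
spec:
  volumes:
    - name: local-home
      emptyDir: {}            # stand-in for the real local-home PVC (RWO is fine)
  containers:
    - name: app               # stands in for the Confluence container
      image: busybox
      command: ["sh", "-c", "mkdir -p /var/atlassian/application-data/confluence/logs && while true; do date >> /var/atlassian/application-data/confluence/logs/atlassian-confluence.log; sleep 5; done"]
      volumeMounts:
        - name: local-home
          mountPath: /var/atlassian/application-data/confluence
    - name: log-tailer        # the sidecar; in real life this would run fluent-bit/fluentd
      image: busybox
      command: ["sh", "-c", "while [ ! -f /var/atlassian/application-data/confluence/logs/atlassian-confluence.log ]; do sleep 1; done; tail -f /var/atlassian/application-data/confluence/logs/atlassian-confluence.log"]
      volumeMounts:
        - name: local-home
          mountPath: /var/atlassian/application-data/confluence
          readOnly: true
EOF
Once that is running, kubectl logs sidecar-log-tail-demo -c log-tailer shows the application log as container output, which is exactly the kind of stream a node-level collector such as fluent-bit already picks up; alternatively the sidecar can ship the files to CloudWatch itself, as described further down.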
@Yevhen How does the sidecar container get the Confluence log data, if not via NFS? That's the piece I am missing. I got fluent-bit to forward all the container/pod logs to CloudWatch, but the Confluence application logs are missing when I look in CloudWatch.
@robert.haskins the sidecar should mount the local-home volume, which does not have to be NFS - it can be RWO. After that it's all about the fluent-bit configuration: which files to tail and where to send them.
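In case it helps to make that concrete, here is a rough sketch of what such a fluent-bit configuration could look like for a sidecar that mounts the local-home volume at the usual path. The wildcard path, region, log group name, and ConfigMap name are placeholders I made up; the tail input and the cloudwatch_logs output are standard fluent-bit plugins, and the output needs IAM permissions (e.g. via IRSA on EKS) to create streams and put events.
# Hypothetical fluent-bit config for the log-shipping sidecar; adjust the path, region and
# log group to your environment. Typically this ends up mounted into the fluent-bit
# container at /fluent-bit/etc/fluent-bit.conf via a ConfigMap.
cat > fluent-bit-confluence.conf <<'EOF'
[SERVICE]
    Flush             5

[INPUT]
    Name              tail
    Path              /var/atlassian/application-data/confluence/logs/*.log
    Tag               confluence.app
    Refresh_Interval  10

[OUTPUT]
    Name              cloudwatch_logs
    Match             confluence.*
    region            us-east-1
    log_group_name    /confluence/application-logs
    log_stream_prefix confluence-
    auto_create_group true
EOF

# One way to hand the file to the sidecar is as a ConfigMap:
kubectl create configmap confluence-fluent-bit --from-file=fluent-bit.conf=fluent-bit-confluence.conf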
I think the only reason there are two folders in there is that it's an internal Atlassian inconsistency in their programming. One module forgot the "s" and no one has tested it or cared enough to fix it. I've never gotten any value from the "log" folder, fwiw, and it's never bloated or even had files in it, but it auto-generates at launch if you delete it.
It's perfectly safe to mount them on the same share, and it's probably an advisable measure to safeguard against a future oops with the "s" in the code. lol
I regularly transfer logs to S3 with the following script - this is for Server, so you'll need to tweak it to include the shared home for Data Center.
#!/bin/bash
### CONFIGURATION - Root Variables ###
######################################
## Update the S3 root for the application in question with NO trailing slash
#
## Target S3 log folder root is:
S3FolderRoot="s3://<FOLDER>/<Target With No Slash At End>"
## Update the target root for the relevant Atlassian application with NO trailing slash
#
## Confluence home logs directory is:
HomeDirLogsRoot="/<HOME DIR ROOT>/atlassian/application-data/confluence/logs"
## Update the target root for the relevant Atlassian application with NO trailing slash
#
## Confluence install logs directory is:
InstallDirLogsRoot="/opt/atlassian/confluence/logs"
### Script below - do not modify ###
#########################################################################################
#
# Note: Making this a variable instead of a function means that the timestamp is set at launch.
# It will not change as the copying proceeds and the minutes tick by during each run of the script.
#
timestamp=$(date +%Y-%m-%d_%H-%M)                   # Timestamp used as the root folder for this run in S3
#echo "$timestamp"
s3TimestampFolderRoot="$S3FolderRoot/$timestamp"    # Appends the timestamp to the S3 target directory
#echo "$s3TimestampFolderRoot"
### Target Variables
S3HomeTarget="$s3TimestampFolderRoot/AppLogs"       # Root S3 folder for the home-directory log set (application logs)
#echo "$S3HomeTarget"
S3InstallTarget="$s3TimestampFolderRoot/TomcatLogs" # Root S3 folder for the install-directory log set (Tomcat logs)
#echo "$S3InstallTarget"
## Each loop walks the specified log directory root recursively: it copies the folder structure, then copies the files.
## The results will appear on screen in reverse order, but they will be correct.
## Additional directory targets can be added by copying a for...done block and adding new Root & Target variables accordingly.
umask 0022
echo "Ensuring Confluence is stopped before copying logs."
/etc/init.d/confluence stop
for entry in "$HomeDirLogsRoot"/*; do
    name=$(basename "$entry")       # Name of the file or directory
    echo "$entry -- $name"
    if [[ -d "$entry" ]]; then      # It is a directory
        #echo "$name is a directory and it translates to $S3HomeTarget/$name/"
        aws s3 --region us-gov-east-1 cp --recursive "$entry" "$S3HomeTarget/$name/"
    else                            # It is a file
        #echo "$name is a file and it goes into $S3HomeTarget/"
        aws s3 --region us-gov-east-1 cp "$entry" "$S3HomeTarget/"
    fi
done
for entry in "$InstallDirLogsRoot"/*; do
    name=$(basename "$entry")       # Name of the file or directory
    if [[ -d "$entry" ]]; then      # It is a directory
        #echo "$name is a directory and it translates to $S3InstallTarget/$name/"
        aws s3 --region us-gov-east-1 cp --recursive "$entry" "$S3InstallTarget/$name/"
    else                            # It is a file
        #echo "$name is a file and it goes into $S3InstallTarget/"
        aws s3 --region us-gov-east-1 cp "$entry" "$S3InstallTarget/"
    fi
done
echo "Transfer completed. The Confluence service can be started again with the following command:"
echo "/etc/init.d/confluence start"
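If it helps, one way to use the script above, assuming you save it under a made-up path such as /opt/scripts/copy-confluence-logs-to-s3.sh (the filename and location are mine, not part of the original):
# Make the saved script executable and run it as root on the Confluence node.
chmod +x /opt/scripts/copy-confluence-logs-to-s3.sh
sudo /opt/scripts/copy-confluence-logs-to-s3.sh
# The script stops Confluence and leaves it stopped, so start it again once the copy finishes:
sudo /etc/init.d/confluence start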
Neither the "log" nor the "logs" directory should be mounted on the shared home. It looks like this link has some information on what you are asking about.
@Kian Stack Mumo Systems The article you reference only mentions audit logs. So you are implying that the audit log encompasses *all* of the application log files in "logs"? When I look at my systems, there is *much* more log information in the application log ("logs") folder than there is in the audit ("log") folder. These are stock 7.19.x systems without customizations.
@robert.haskins,
No, I am not implying that the audit log encompasses all of the application logs. I'm saying that it shows you how to integrate one of the log files with CloudWatch and that you should be able to apply similar steps for any remaining log files that you want to integrate.
@Kian Stack Mumo Systems I guess the piece I am missing is how the Confluence logs get to the log aggregator pod. I assumed that was an NFS share, but obviously there is some other mechanism. I have container log aggregation working to CloudWatch via fluent-bit, but I see no Confluence application logs when I look in CloudWatch -- all the logs there are at the pod/container level.
I'm not familiar with some of the terms you used, but as far as I can tell you need to install the CloudWatch agent to push the logs to CloudWatch. Each node will have a separate log file, as the exact errors/logs for each node are separate. For example, one node could be experiencing out-of-memory errors because a substantial amount of REST traffic is going to it, while the other one is operating fine.
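For a server/EC2-style node (as opposed to the EKS sidecar approach discussed above), a rough sketch of what that could look like is below. The log group and stream names are placeholders I made up; the config file layout and the amazon-cloudwatch-agent-ctl command come from the standard CloudWatch agent install, but double-check them against the AWS documentation for your agent version.
# Minimal (hypothetical) CloudWatch agent config that ships the main Confluence application log.
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/confluence-logs.json > /dev/null <<'EOF'
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/atlassian/application-data/confluence/logs/atlassian-confluence.log",
            "log_group_name": "confluence-application-logs",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
EOF

# Load the config into the agent and (re)start it.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/confluence-logs.json
More entries can be added to collect_list for the other files under the logs/ directory (and for the Tomcat logs in the install directory) as needed.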