Hello everyone,
we recently switched from a single server to a clustered environment. I'm in the habit of checking the atlassian-confluence.log files on the server itself. Now we have three nodes, each with its own set of log files, so when a user sends me e.g. a tracker id, I have to check each of the servers to see which log file contains the relevant messages.
Does anyone have a recommendation for checking the log files simultaneously? Alternatively, are there log4j options to make Confluence write to a shared log file (I'd actually prefer that)? Any other suggestions/recommendations are appreciated :)
cheers
You don't want them writing to a single file, because a) that's three processes contending for the same write lock, and b) it defeats the purpose of clustering.
Also, I don't think log4j supports multiple processes appending safely to the same file (unless you built some monster on top of it all to merge the log files, and that would be disgusting).
When a node is affected, you can currently diagnose that node on its own, much like a standalone instance, figure out the problem, and patch it up (typically you're patching the cluster as a whole).
If the logs were merged, you'd be looking all over the place, connecting things with other things that aren't actually related to one another.
There are far more cons than pros.
So whichever way you're currently checking the logs, just condense/automate it. If your process is fine done once but not done three times, improve it until doing it three times is as painless as doing it once was. Or something wise like that.
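For example, the "do it thrice as easily as once" approach can be a small script that greps a tracker id across log files collected from each node. This is a minimal sketch: the helper works on any local files, and the commented-out loop shows how you might run it over SSH against your nodes (the hostnames node1..node3 and the log path are placeholders, not your actual setup).

```shell
#!/bin/sh
# Grep a pattern across several log files, prefixing each match
# with the file it came from (-H) so you can tell the nodes apart.
search_logs() {
  pattern="$1"; shift
  grep -H "$pattern" "$@"
}

# Typical cluster usage (assumes passwordless SSH and the default
# Confluence home log location -- adjust both for your environment):
#   for n in node1 node2 node3; do
#     ssh "$n" "grep -H 'TRACKER_ID' \
#       /var/atlassian/application-data/confluence/logs/atlassian-confluence.log*"
#   done
```

Run it once with the tracker id and you get every node's matches in a single terminal, without logging in to each server by hand.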