If I upload a 300+ MB file to a Confluence page, it sometimes succeeds and sometimes fails. Usually any file smaller than 400 MB can be uploaded to a Confluence page if I repeat the upload a few times.
What settings should I change so the upload always succeeds on the first attempt?
Hello Jiri,
That's a very interesting case. I would say it probably depends on what is causing the upload to fail, so you will want to review your Confluence Server logs to see what that is.
I would recommend tailing the logs while you trigger the issue again; that should show what caused the failure. Feel free to send us a copy of those logs and we can help you figure it out.
Regards,
Shannon
A log sample is below - does it help?
2019-05-23 12:46:34,022 ERROR [http-nio-8090-exec-9] [persistence.dao.filesystem.FileSystemAttachmentDataUtil] failedToWriteTempFile Error writing '/var/atlassian/application-data/confluence/attachments/ver003/249/129/216629249/207/30/216530957/216531004/data2245229276127166799.tmp' to disk.
-- referer: https://XXXXX.com/pages/resumedraft.action?draftId=216530974&draftShareId=9782c1e8-74e7-4eb8-9881-87f3e862c5ba& | url: /plugins/drag-and-drop/upload.action | traceId: 2832e75cc4d88846 | userName: XXXX | action: upload
org.apache.catalina.connector.ClientAbortException: java.io.EOFException: Unexpected EOF read on the socket
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:340)
at org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
at com.google.common.io.CountingInputStream.read(CountingInputStream.java:62)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2314)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2291)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:2246)
at com.atlassian.core.util.FileUtils.copyFile(FileUtils.java:410)
I can send the complete log by email or upload it privately. I cannot post the full log here because the web app complains: "Your message was not accepted. Check for invalid HTML or try reposting as plain text."
Hello Jiri,
Thank you for sharing this.
It appears that something might be terminating the connection before the attachment upload completes.
You can also feel free to share your log files via Dropbox, Google Drive, or something like Pastebin.
Thank you!
Shannon
1. Confluence is behind an Apache httpd proxy and some security software (the security software's logs are empty). The proxy is throwing this error:
[Thu May 23 12:46:10.761618 2019] [proxy_http:error] [pid 22447:tid 139961852430080] (70007)The timeout specified has expired: [client XXXX:49021] AH02609: read request body failed to 127.0.0.1:8090
but I am not sure that is the actual issue, since, as I said, after 3-6 attempts the upload always succeeds.
I will certainly try the tweaks described here: https://serverfault.com/questions/500467/apache2-proxy-timeout but the upload itself is NOT the problem. I think the problem is the consumption of the data, not its production.
2. Everybody on my server has the identical problem.
3. I have tried both Firefox and Chrome. Firefox starts re-uploading a couple of times, while Chrome hangs before reporting a problem.
4. It happens only for MP4 movies bigger than 150 MB.
The logs are confidential so I can send them via email or private message.
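For reference, the tweaks from that serverfault thread boil down to raising a few httpd timeout directives. A minimal sketch, with illustrative values rather than recommendations:

```apache
# Illustrative httpd.conf timeout settings for slow, large uploads
Timeout 600            # overall socket read/write timeout
ProxyTimeout 600       # how long httpd waits on the backend (Tomcat on 8090)
KeepAliveTimeout 15    # idle keep-alive connections
```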
My present relevant httpd.conf is below
<Proxy *>
Require all granted
</Proxy>
ProxyTimeout 600
SSLProxyEngine On
ProxyRequests Off
ProxyPass /synchrony http://localhost:8091/synchrony
<Location /synchrony>
Require all granted
RewriteEngine on
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]
RewriteRule .* ws://localhost:8091%{REQUEST_URI} [P]
</Location>
ProxyPass / http://localhost:8090/ keepalive=On
ProxyPassReverse / http://localhost:8090/
Hello Jiri,
Thank you for providing that information.
The problem is that the file is taking too long to upload and exceeding the timeout, as you discovered.
The timeout can be hit because of the network connection.
You will want to make sure that your Confluence server's network download speed and your local upload speed are fast enough to transfer the file within the timeout period. You can use a transfer time calculator to determine the requirement.
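As a rough sanity check, the required transfer time is just file size divided by sustained bandwidth. A minimal sketch (the bandwidth figures are made up for illustration):

```python
def transfer_seconds(file_mb: float, mbit_per_s: float) -> float:
    """Estimated seconds to move a file of `file_mb` megabytes
    over a link sustaining `mbit_per_s` megabits per second."""
    return file_mb * 8 / mbit_per_s

# A 350 MB file on a sustained 10 Mbit/s uplink:
print(transfer_seconds(350, 10))  # 280.0 s, within a 600 s ProxyTimeout

# The same file on 4 Mbit/s would take 700 s and blow the timeout:
print(transfer_seconds(350, 4))   # 700.0 s
```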
Let me know if you have any questions about that.
Regards,
Shannon
I am on fiber optics, so speed itself is not a problem.
I actually think the problem could be memory and/or garbage collection. I enabled HTTP/2 on httpd, which has a certain number of threads, and Confluence also has min and max memory settings.
I will play with the upload of huge files directly without the httpd proxy later this week.
THANK YOU FOR ALL YOUR HELP!
I have managed to upload a 350 MB file directly via Tomcat with no problem, though it was very, very slow. The identical upload via httpd failed.
So it looks like the problem is with httpd. The present hypothesis is that httpd is somehow caching the uploads (which makes them faster) but runs out of cache space.
Hello Jiri!
So sorry for the delay. I was consulting my team on this. Your theory that the problem is httpd caching the uploads and running out of cache might be right, especially since the upload is super slow when the proxy is bypassed.
Tomcat might be able to buffer a bit more of the upload if the heap isn't full. You can check the detailed memory information page and see whether you have more heap available than the size of the file you're trying to upload, for example 600 MB.
One other thing that might be happening is that the server is writing to disk too slowly, and that's why the buffer fills up. You can use a diagnostic tool to check that.
The theory is that if the upload can go fast enough with the proxy in place, Apache won't run into a timeout.
Let us know if you have any questions about that.
Regards,
Shannon
Jiri,
No worries! Just let me know if you have any trouble.
Regards,
Shannon
Shannon:
I have finally found the cause, and the sun is shining across the galaxy again :-).
It's Apache's mod_reqtimeout module.
Disabling it solves the problem.
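For anyone who lands here later: instead of disabling the module outright, mod_reqtimeout can also be relaxed via its RequestReadTimeout directive. A sketch (the numbers are illustrative, not what I was running):

```apache
# Option A: disable the module entirely
# (on Debian/Ubuntu: a2dismod reqtimeout; elsewhere, comment out its LoadModule line)
#LoadModule reqtimeout_module modules/mod_reqtimeout.so

# Option B: keep the module but disable only the request-body timeout,
# so slow large uploads are not cut off mid-transfer:
RequestReadTimeout header=20-40,MinRate=500 body=0
```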
THANK YOU FOR ALL YOUR HELP.
Hi Jiri
Glad everything is well again. :) Thank you so much for the follow-up!
Take care, and have a pleasant rest of your week.
Regards,
Shannon