After upgrading to runner version 3.6.0, we are unable to upload "larger" artifacts (> 50 MiB).
The pipeline terminates with a "System error".
In the runner log files, you can see these error messages:
[2024-11-20 12:17:19,347] Updating step progress to UPLOADING_ARTIFACTS.
[2024-11-20 12:17:20,054] Appending log line to main log.
[2024-11-20 12:17:27,577] Initiating artifact upload.
[2024-11-20 12:17:27,866] Successfully got total chunks FileChunksInfo{dataSize=262224310B, totalChunks=6}.
[2024-11-20 12:17:27,870] Uploading 6 chunks to s3
[2024-11-20 12:17:27,872] Getting s3 upload urls for artifact.
[2024-11-20 12:17:28,042] Appending log line to main log.
[2024-11-20 12:17:36,462] Updating runner state to "ONLINE".
[2024-11-20 12:17:38,864] [13d480a9-1, L:/192.168.150.119:64555 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:38,865] [f58a16bd-1, L:/192.168.150.119:64556 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:38,865] [98ad0830-1, L:/192.168.150.119:64553 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:38,865] [3688902a-1, L:/192.168.150.119:64552 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:38,865] [93d7c715-1, L:/192.168.150.119:64554 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:51,112] [6662d841-1, L:/192.168.150.119:64569 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:51,123] [c69d1a83-1, L:/192.168.150.119:64572 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:51,123] [326f0762-1, L:/192.168.150.119:64570 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:51,123] [84638be8-1, L:/192.168.150.119:64573 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:17:51,123] [a803374a-1, L:/192.168.150.119:64571 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:05,365] [bc13356c-1, L:/192.168.150.119:64596 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:05,385] [4f188350-1, L:/192.168.150.119:64598 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:05,385] [f67fc8d9-1, L:/192.168.150.119:64597 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:05,385] [ec79b859-1, L:/192.168.150.119:64599 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:05,385] [494c338d-1, L:/192.168.150.119:64600 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.15.179.177:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:06,462] Updating runner state to "ONLINE".
[2024-11-20 12:18:23,618] [57659f01-1, L:/192.168.150.119:64624 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.40.233:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 12:18:23,623] [0c9c6507-1, L:/192.168.150.119:64627 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.40.233:443] An exception has been observed post termination, use DEBUG level to see the full stack: io.netty.handler.timeout.ReadTimeoutException
[2024-11-20 12:18:23,623] [01623a99-1, L:/192.168.150.119:64626 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.40.233:443] An exception has been observed post termination, use DEBUG level to see the full stack: io.netty.handler.timeout.ReadTimeoutException
[2024-11-20 12:18:23,623] [0cf33d6e-1, L:/192.168.150.119:64625 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.40.233:443] An exception has been observed post termination, use DEBUG level to see the full stack: io.netty.handler.timeout.ReadTimeoutException
[2024-11-20 12:18:23,623] [3daaff0f-1, L:/192.168.150.119:64628 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.40.233:443] An exception has been observed post termination, use DEBUG level to see the full stack: io.netty.handler.timeout.ReadTimeoutException
[2024-11-20 12:18:23,623] Error while uploading file to s3
io.netty.handler.timeout.ReadTimeoutException: null
Wrapped by: org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Request to PUT https://micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/artifact/%7Bcf74d650-59...REMOVED...97fcfc38d%7D/%7B6aaf5109...REMOVED...ca7458ffec%7D/%7B4afec5e1-...REMOVED...96acb1bc8%7D/artifact_%7B64...REMOVED...c663%7D.tar.gz?partNumber=5&uploadId=HUt1y...REMOVED...Dg21...REMOVED...vAQH4czg--&X-Amz-Security-Token=IQoJb3...REMOVED...3D%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20241120T111731Z&X-Amz-SignedHeaders=host&X-Amz-Credential=ASIATNIC...REMOVED...%2F20241120%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Expires=900&X-Amz-Signature=20e...REMOVED...2d8 [DefaultWebClient]
Original Stack Trace:
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
at reactor.core.publisher.MonoErrorSupplied.subscribe(MonoErrorSupplied.java:55)
at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onError(MonoFlatMapMany.java:204)
at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.whenError(FluxRetryWhen.java:225)
at reactor.core.publisher.FluxRetryWhen$RetryWhenOtherSubscriber.onError(FluxRetryWhen.java:274)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onError(FluxContextWrite.java:121)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:415)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:251)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:537)
at reactor.core.publisher.EmitterProcessor.tryEmitNext(EmitterProcessor.java:343)
at reactor.core.publisher.SinkManySerialized.tryEmitNext(SinkManySerialized.java:100)
at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onError(FluxRetryWhen.java:190)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:201)
at reactor.netty.http.client.HttpClientConnect$HttpObserver.onUncaughtException(HttpClientConnect.java:403)
at reactor.netty.ReactorNetty$CompositeConnectionObserver.onUncaughtException(ReactorNetty.java:700)
at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onUncaughtException(DefaultPooledConnectionProvider.java:211)
at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnection.onUncaughtException(DefaultPooledConnectionProvider.java:464)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:247)
at reactor.netty.channel.FluxReceive.onInboundError(FluxReceive.java:468)
at reactor.netty.channel.ChannelOperations.onInboundError(ChannelOperations.java:508)
at reactor.netty.channel.ChannelOperationsHandler.exceptionCaught(ChannelOperationsHandler.java:145)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317)
at io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:98)
at io.netty.handler.timeout.ReadTimeoutHandler.channelIdle(ReadTimeoutHandler.java:90)
at io.netty.handler.timeout.IdleStateHandler$ReaderIdleTimeoutTask.run(IdleStateHandler.java:525)
at io.netty.handler.timeout.IdleStateHandler$AbstractIdleTask.run(IdleStateHandler.java:497)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:153)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:1575)
Wrapped by: com.atlassian.pipelines.runner.core.exception.S3UploadException: Failed to upload chunk, part number 5
at com.atlassian.pipelines.runner.core.util.file.upload.S3MultiPartUploaderImpl.lambda$uploadChunk$16(S3MultiPartUploaderImpl.java:167)
at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onError(SingleResumeNext.java:73)
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onError(FlowableSingleSingle.java:97)
at io.reactivex.subscribers.SerializedSubscriber.onError(SerializedSubscriber.java:142)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onError(FlowableRepeatWhen.java:112)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.checkTerminate(FlowableFlatMap.java:572)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:379)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:371)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.innerError(FlowableFlatMap.java:611)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onError(FlowableFlatMap.java:677)
at io.reactivex.internal.subscriptions.EmptySubscription.error(EmptySubscription.java:55)
at io.reactivex.internal.operators.flowable.FlowableError.subscribeActual(FlowableError.java:40)
at io.reactivex.Flowable.subscribe(Flowable.java:14935)
at io.reactivex.Flowable.subscribe(Flowable.java:14882)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:163)
at io.reactivex.internal.operators.flowable.FlowableDoOnEach$DoOnEachSubscriber.onNext(FlowableDoOnEach.java:92)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.tryEmitScalar(FlowableFlatMap.java:234)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:152)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipCoordinator.drain(FlowableZip.java:249)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipSubscriber.onNext(FlowableZip.java:381)
at io.reactivex.processors.UnicastProcessor.drainFused(UnicastProcessor.java:362)
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:395)
at io.reactivex.processors.UnicastProcessor.onNext(UnicastProcessor.java:457)
at io.reactivex.processors.SerializedProcessor.onNext(SerializedProcessor.java:103)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenSourceSubscriber.again(FlowableRepeatWhen.java:171)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onError(FlowableRetryWhen.java:76)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onError(SingleToFlowable.java:67)
at io.reactivex.internal.operators.single.SingleUsing$UsingSingleObserver.onError(SingleUsing.java:175)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleObserveOn$ObserveOnSingleObserver.run(SingleObserveOn.java:79)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:608)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1575)
[2024-11-20 12:18:23,633] Updating step progress to PARSING_TEST_RESULTS.
[2024-11-20 12:18:23,900] Test report processing complete.
[2024-11-20 12:18:23,900] Updating step progress to COMPLETING_LOGS.
[2024-11-20 12:18:24,055] Appending log line to main log.
[2024-11-20 12:18:24,180] Shutting down log uploader.
[2024-11-20 12:18:24,388] Tearing down directories.
[2024-11-20 12:18:24,752] Cancelling timeout
[2024-11-20 12:18:24,753] Completing step with result Result{status=ERROR, error=Some(Error{key='runner.artifact.upload-error', message='Error occurred whilst processing an artifact', arguments={}})}.
[2024-11-20 12:18:25,015] Setting runner state to not executing step.
[2024-11-20 12:18:25,015] Waiting for next step.
[2024-11-20 12:18:25,016] Finished executing step. StepId{accountUuid={c8a2a3b3-90b0-4278-882e-9aef13704321}, repositoryUuid={cf74d650-59f4-48ce-ac41-67897fcfc38d}, pipelineUuid={6aaf5109-f4d0-4852-92ea-79ca7458ffec}, stepUuid={4afec5e1-57ce-4017-9686-26196acb1bc8}}
[2024-11-20 12:18:36,458] Updating runner state to "ONLINE".
What I've tried:
- Creating new runners -> same issue.
- Updating Java -> same issue.
- Installing runners on another machine -> same issue.
It is easy to replicate with this pipeline:
definitions:
  steps:
    - step: &Build
        name: build
        runs-on:
          - 'self.hosted'
          - 'windows'
        artifacts:
          - "build/**"
        script:
          - $size = 250MB
          - New-Item -Path "build" -ItemType Directory
          - $filePath = "build\artifact.dat"
          - $random = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
          - $data = New-Object byte[] $size
          - $random.GetBytes($data)
          - '[System.IO.File]::WriteAllBytes($filePath, $data)'
pipelines:
  default:
    - step: *Build
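A roughly equivalent step for a self-hosted Linux Docker runner (a sketch I have not run myself, assuming dd and /dev/urandom are available in the build image) would be:
definitions:
  steps:
    - step: &BuildLinux
        name: build-linux
        runs-on:
          - 'self.hosted'
          - 'linux'
        artifacts:
          - "build/**"
        script:
          # Generate a 250 MB file of random bytes so the artifact is well above the ~50 MiB size where uploads start failing
          - mkdir -p build
          - dd if=/dev/urandom of=build/artifact.dat bs=1M count=250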
Hi everyone,
We recently adjusted the runner timeout to 10 seconds, which we believe was the main reason some artifact uploads failed on slower connections. We have released a new runner version, 3.7.0, to increase the timeout. Please upgrade to this version and let us know how it works.
Regards,
Syahrul
Hi,
I tested on a few runners and the issue persists.
My connection isn't slow, but the artifacts are greater than 50 MB. This happens when uploading the cache, too. For example, the Maven cache repository fails to upload or takes too long.
Small artifacts work without any problem. Runners before version 3 handle artifact and cache uploads very well.
I downgraded all of my runners to version 3.1.0 and for now it works fine. I'll run more pipelines to check whether it's stable, but I think so.
My runner logs with v3.7.0:
[2024-11-22 11:22:51,601] [d8d72309-1, L:/172.17.0.8:41398 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-22 11:22:51,605] [d49c5aa7-1, L:/172.17.0.8:41372 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-22 11:22:51,606] [a590d42e-1, L:/172.17.0.8:41362 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
Hi @Tiago Jesus,
I cannot reproduce the issue with version 3.7.0 of the runner. I ran a few builds with a 500 MB artifact, and both the artifact upload and download were successful.
Since your workspace is on a paid billing plan, I suggest creating a ticket with the support team for further investigation. You can create a ticket via https://support.atlassian.com/contact/#/: in "What can we help you with?" select "Technical issues and bugs" and then Bitbucket Cloud as the product. When you are asked to provide the workspace URL, please make sure you enter the URL of the workspace that is on a paid billing plan to proceed with ticket creation.
Please provide in the ticket the URL of a failed build with version 3.7.0 of the runner and the runner log entries from the moment the artifact upload starts until you get the error. A support ticket can be accessed only by you and Atlassian staff, so anything you post there won't be publicly visible.
Kind regards,
Theodora
I updated the runners to version 3.7.0 again and made a few changes to the firewall with our network team; for now, it's more stable.
We'll run the runners on version 3.7.0 this week and monitor whether it works well; if not, I'll submit a ticket with more information.
Thanks for your help.
Thank you. Unfortunately, I cannot execute the runner on Windows because the startup script is not signed anymore.
I downloaded it again and now it's working.
Anyone else still having this issue? I'm already running version 3.7.0 but still getting the same log entries as in the original post. We've already allowed the IPs and the S3 URL through our firewall.
Got this issue on Windows as well; what I did was just increase the S3 read timeout, and it worked for me. Still monitoring on our end, but so far it's looking good.
Note: you can only adjust the read timeout on version 3.7.0.
I have also tried increasing the S3 read timeout to 60 seconds by adding the parameter to the start script, but it hasn't resolved the issue. I'm thinking it could be a firewall problem, even after allowlisting as stated in the documentation.
How long did you increase the S3 read timeout to in your implementation?
@Syahrul
Please share which parameter is responsible for increasing the timeout and where to change it. Thank you!
@Andrew Skorina You can do so by modifying the runner start script with this parameter:
--s3ReadTimeoutSeconds <value>
If you're using a Docker-based runner, you can add this to the docker run command:
-e S3_READ_TIMEOUT_SECONDS=<value in seconds>
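For example, this is roughly what it looks like in practice. This is an illustration only: keep every parameter from the start command that was generated for your runner and just append the timeout setting; the script name and the 60-second value below are placeholders, not your exact command.
# Shell/PowerShell start script: append the flag to the start command generated for your runner
.\start.ps1 <your existing generated parameters> -s3ReadTimeoutSeconds 60
# Docker-based runner: add the environment variable to your existing generated docker run command
docker container run <your existing generated options and volume mounts> -e S3_READ_TIMEOUT_SECONDS=60 <runner image from your generated command>
Pick a value that gives your connection enough time to push each chunk.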
Adding the parameter -s3ReadTimeoutSeconds 70 to v3.7.0 of the runner fixed the issue of uploading an artifact of ~260 MB.
Word from support is to roll back to 3.1.0 runners for the time being until they address the issue. It sounds like a change was made to how they handle timeouts for file uploads.
Ran some tests here with 3.1.0 and things are looking good on our end... YMMV though.
Same here! I downgraded to version 3.1.0 and it's stable for now.
Same here for me: Docker Linux runner (self-hosted) with an artifact of 16.7 MB. Oddly, we have another artifact in a different step that is ~10 MB, and that one is fine.
I checked the logs on my runner and am getting the same type of messages as the OP.
We were experiencing the same issue as described in this thread and worked with Atlassian support on a resolution. They noted there was an issue with v3.6.0 and provided a fix in v3.7.0, which resolved the issue for our team.
Version 3.7.0 was released, which fixes the timeout issue. After upgrading, our issue went away.
Rolled back to the 2.6.0 runner and everything works fine. Going to ignore the annoying warning indicator on the pipeline steps for using old runners.
Same here! It's annoying... I think it's because the new version uses S3 buckets, as they said in the upgrade-required post: https://www.atlassian.com/blog/bitbucket/bitbucket-pipelines-runner-upgrade-required
Same here on Docker Linux runners with artifacts greater than 50 MB.