Since 3 November 2024 I have been seeing "System error" on most of my pipelines. On the pipeline's page, it shows "Error occurred whilst processing an artifact" without further information. This happens at the end of any otherwise successful step that tries to upload artifacts.
I'm using self-hosted runners, running via Docker on Linux VMs. Nothing in my bitbucket-pipelines.yml or in my infrastructure changed recently. This is happening across multiple repositories and multiple runners on multiple VMs from different providers, so I don't think it's a network issue, or an issue with a specific repository.
I'm using the latest runner version, which is 3.1.0 at the time of writing.
The Bitbucket status page shows no outages at the time of writing: https://status.bitbucket.org/
Searching for this error message on https://jira.atlassian.com/issues yields three closed issues which are not of much help:
Any idea what this message means, and/or how to address it?
Hi Rudolf,
I have seen this error reported in cases where some of the IPs needed for the runner to work were not allowlisted, so that is one possible reason.
Could you also check the log file of an affected runner and share the relevant part? For a Docker-based runner it is located at a path like:
/tmp/b609961f-891e-c872-c36b-f3f2c315d186/runner.log
If the VMs are behind a firewall, the IPs that you need to allowlist are:
Kind regards,
Theodora
I posted an answer, edited it, and when I clicked save, my answer was gone. So there is an issue with these forums, too. Maybe it's a caching issue? I'll re-post it later if it doesn't come back by itself...
Thanks for the response!
1. There is no firewall for outgoing connections on any of the runners.
2. More info below.
3. I get the same error on all runners.
Some of the uploads succeed, but the web interface keeps waiting on this screen until some kind of timeout is exceeded (which seems to be much longer than the max-time of 10 minutes that I set for the specific step):
Here is the relevant part from the appropriate runner.log (slightly redacted): https://pastebin.com/ddYznXg3
To help you correlate the logs with my screenshot: the web interface output was stuck at line 16 ("Uploading artifact of 64 MiB") from around 2024-11-07 12:24 to 2024-11-07 12:42. It finally showed the failure around 12:42.
Hi Rudolf,
Thank you for the info. I see your answer that the builds started working again on 7th November, thank you for that update as well.
Looking at the runner log you shared, I see that it's 10 minutes from the line "Updating step progress to UPLOADING_ARTIFACTS" until the first exception occurs. We have a 10-minute timeout for artifact uploads, and if the upload doesn't finish within that time window (e.g. due to a slow connection), the build will fail.
I wasn't able to reproduce such an error before my first answer to your question, but it could still have been an issue on our side affecting only certain builds. If you encounter any issues again, please feel free to reach out.
Kind regards,
Theodora
Hi Theodora,
I am experiencing the same issue with uploading artifacts of 62 MiB.
While you mentioned a 10-minute timeout for artifact uploads, the pipeline is failing in under 5 minutes.
However, when I tried uploading smaller artifacts, around 1 KB, it worked successfully.
Is there any way to address this issue?
Regards,
Gagan
Hi Gagan,
The timeout with the latest version of the self-hosted runners is 30 seconds. If you use the latest version of a self-hosted runner, you can configure this timeout by adjusting the preconfigured command you use to start the runner.
If you use Docker-based runners, please add this to the command that starts the runner:
-e S3_READ_TIMEOUT_SECONDS=<secondsvalue>
For Linux-Shell and MacOS runners, please add this to the command that starts the runner:
--s3ReadTimeoutSeconds <secondsvalue>
For Windows runners, please add this to the command that starts the runner:
-s3ReadTimeoutSeconds "<secondsvalue>"
Replace <secondsvalue> with a value in seconds based on how long you estimate that the artifact upload will take. E.g., you can start with 600 (equivalent of 10 minutes) and adjust it to a lower or higher value if needed.
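As an illustration only, a Docker-based runner start command with the timeout added might look like the following. The UUIDs, OAuth values, mounts and image tag are placeholders for whatever your own preconfigured command from the runner setup page contains; the only addition is the S3_READ_TIMEOUT_SECONDS variable.

  # sketch of a Docker runner start command with a 10-minute S3 read timeout
  docker container run -it \
    -v /tmp:/tmp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e ACCOUNT_UUID="{account-uuid}" \
    -e RUNNER_UUID="{runner-uuid}" \
    -e OAUTH_CLIENT_ID="<oauth-client-id>" \
    -e OAUTH_CLIENT_SECRET="<oauth-client-secret>" \
    -e WORKING_DIRECTORY=/tmp \
    -e S3_READ_TIMEOUT_SECONDS=600 \
    --name runner \
    docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1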
If you still experience issues, please create a new question in community via https://community.atlassian.com/t5/forums/postpage/board-id/bitbucket-questions and we will look into it.
Kind regards,
Theodora
We are still having the issue with uploading artifacts in the macOS runner 3.15.0 since your migration to S3.
Every pipeline with an artifact over ~30MB runs into the error "Error occurred whilst processing an artifact".
According to https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/, artifact sizes of up to 1 GB should be supported.
Please fix the issue.
Runner log:
[2025-01-17 12:57:33,473] Updating step progress to BUILDING.
[2025-01-17 12:57:33,744] Generating build script.
[2025-01-17 12:57:33,752] Adding log file: /Users/<redacted>/atlassian-bitbucket-pipelines-runner/bin/../temp/<redacted>/tmp/build<redacted>.log
[2025-01-17 12:57:33,752] Executing build script in native script.
[2025-01-17 12:57:33,921] Script exited with exit code: 0
[2025-01-17 12:57:33,969] Not uploading caches. (numberOfCaches: 0, resultOrError: PASSED)
[2025-01-17 12:57:33,969] Updating step progress to UPLOADING_ARTIFACTS.
[2025-01-17 12:57:33,987] Appending log line to main log.
[2025-01-17 12:57:35,011] Appending log line to main log.
[2025-01-17 12:57:35,096] Initiating artifact upload.
[2025-01-17 12:57:35,412] Successfully got total chunks FileChunksInfo{dataSize=41956183B, totalChunks=1}.
[2025-01-17 12:57:35,413] Uploading 1 chunks to s3
[2025-01-17 12:57:35,414] Getting s3 upload urls for artifact.
[2025-01-17 12:57:35,977] Appending log line to main log.
[2025-01-17 12:57:56,684] Updating runner state to "ONLINE".
[2025-01-17 12:58:06,434] [4c11126e-1, L:/192.168.178.23:64994 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.24.118:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2025-01-17 12:58:26,560] Updating runner state to "ONLINE".
[2025-01-17 12:58:38,815] [4bd4ab60-1, L:/192.168.178.23:65017 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/54.231.130.33:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2025-01-17 12:58:56,691] Updating runner state to "ONLINE".
[2025-01-17 12:59:13,125] [387e1f51-1, L:/192.168.178.23:65040 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.216.250.100:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2025-01-17 12:59:26,687] Updating runner state to "ONLINE".
[2025-01-17 12:59:51,473] [9f7ce297-1, L:/192.168.178.23:65067 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.228:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2025-01-17 12:59:51,475] Error while uploading file to s3
io.netty.handler.timeout.ReadTimeoutException: null
Wrapped by: org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Request to PUT https://micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/artifact/%7B<redacted>%7D/%7B<redacted>%7D/%7B<redacted>%7D/artifact_%7B04f13172-236d-5d66-9e3b-eb225e7af0a2%7D.tar.gz?partNumber=1&uploadId=<redacted>&X-Amz-Security-Token=<redacted>&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20250117T115735Z&X-Amz-SignedHeaders=host&X-Amz-Credential=<redacted>&X-Amz-Signature=<redacted> [DefaultWebClient]
Original Stack Trace:
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
at reactor.core.publisher.MonoErrorSupplied.subscribe(MonoErrorSupplied.java:55)
at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onError(MonoFlatMapMany.java:204)
at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.whenError(FluxRetryWhen.java:225)
at reactor.core.publisher.FluxRetryWhen$RetryWhenOtherSubscriber.onError(FluxRetryWhen.java:274)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onError(FluxContextWrite.java:121)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:415)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:251)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:537)
at reactor.core.publisher.EmitterProcessor.tryEmitNext(EmitterProcessor.java:343)
at reactor.core.publisher.SinkManySerialized.tryEmitNext(SinkManySerialized.java:100)
at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onError(FluxRetryWhen.java:190)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:201)
at reactor.netty.http.client.HttpClientConnect$HttpObserver.onUncaughtException(HttpClientConnect.java:403)
at reactor.netty.ReactorNetty$CompositeConnectionObserver.onUncaughtException(ReactorNetty.java:700)
at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onUncaughtException(DefaultPooledConnectionProvider.java:211)
at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnection.onUncaughtException(DefaultPooledConnectionProvider.java:464)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:247)
at reactor.netty.channel.FluxReceive.onInboundError(FluxReceive.java:468)
at reactor.netty.channel.ChannelOperations.onInboundError(ChannelOperations.java:508)
at reactor.netty.channel.ChannelOperationsHandler.exceptionCaught(ChannelOperationsHandler.java:153)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317)
at io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:98)
at io.netty.handler.timeout.ReadTimeoutHandler.channelIdle(ReadTimeoutHandler.java:90)
at io.netty.handler.timeout.IdleStateHandler$ReaderIdleTimeoutTask.run(IdleStateHandler.java:525)
at io.netty.handler.timeout.IdleStateHandler$AbstractIdleTask.run(IdleStateHandler.java:497)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:156)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:840)
Wrapped by: com.atlassian.pipelines.runner.core.exception.S3UploadException: Failed to upload chunk, part number 1
at com.atlassian.pipelines.runner.core.util.file.upload.S3MultiPartUploaderImpl.lambda$uploadChunk$16(S3MultiPartUploaderImpl.java:167)
at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onError(SingleResumeNext.java:73)
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onError(FlowableSingleSingle.java:97)
at io.reactivex.subscribers.SerializedSubscriber.onError(SerializedSubscriber.java:142)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onError(FlowableRepeatWhen.java:112)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.checkTerminate(FlowableFlatMap.java:572)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:379)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:371)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.innerError(FlowableFlatMap.java:611)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onError(FlowableFlatMap.java:677)
at io.reactivex.internal.subscriptions.EmptySubscription.error(EmptySubscription.java:55)
at io.reactivex.internal.operators.flowable.FlowableError.subscribeActual(FlowableError.java:40)
at io.reactivex.Flowable.subscribe(Flowable.java:14935)
at io.reactivex.Flowable.subscribe(Flowable.java:14882)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:163)
at io.reactivex.internal.operators.flowable.FlowableDoOnEach$DoOnEachSubscriber.onNext(FlowableDoOnEach.java:92)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.tryEmitScalar(FlowableFlatMap.java:234)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:152)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipCoordinator.drain(FlowableZip.java:249)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipSubscriber.onNext(FlowableZip.java:381)
at io.reactivex.processors.UnicastProcessor.drainFused(UnicastProcessor.java:362)
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:395)
at io.reactivex.processors.UnicastProcessor.onNext(UnicastProcessor.java:457)
at io.reactivex.processors.SerializedProcessor.onNext(SerializedProcessor.java:103)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenSourceSubscriber.again(FlowableRepeatWhen.java:171)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onError(FlowableRetryWhen.java:76)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onError(SingleToFlowable.java:67)
at io.reactivex.internal.operators.single.SingleUsing$UsingSingleObserver.onError(SingleUsing.java:175)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleObserveOn$ObserveOnSingleObserver.run(SingleObserveOn.java:79)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:608)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
[2025-01-17 12:59:51,477] Updating step progress to PARSING_TEST_RESULTS.
[2025-01-17 12:59:51,740] Test report processing complete.
[2025-01-17 12:59:51,741] Updating step progress to COMPLETING_LOGS.
[2025-01-17 12:59:51,985] Shutting down log uploader.
[2025-01-17 12:59:51,986] Appending log line to main log.
[2025-01-17 12:59:52,298] Tearing down directories.
[2025-01-17 12:59:52,308] Cancelling timeout
[2025-01-17 12:59:52,309] Completing step with result Result{status=ERROR, error=Some(Error{key='runner.artifact.upload-error', message='Error occurred whilst processing an artifact', arguments={}})}.
[2025-01-17 12:59:52,612] Setting runner state to not executing step.
[2025-01-17 12:59:52,612] Waiting for next step.
[2025-01-17 12:59:52,612] Finished executing step. StepId{accountUuid={<redacted>}, repositoryUuid={<redacted>}, pipelineUuid={<redacted>}, stepUuid={<redacted>}}
[2025-01-17 12:59:56,694] Updating runner state to "ONLINE".
I'm experiencing this issue also. It fails during build teardown while uploading artifacts.
I'm using macOS runner 3.16.0.
Any update? What should I do?
We have the same problem on our Windows and macOS runners (3.6.0).
{code}
[2024-11-24 14:49:16,691] Updating step progress to UPLOADING_ARTIFACTS.
[2024-11-24 14:49:17,544] Appending log line to main log.
[2024-11-24 14:49:22,882] Initiating artifact upload.
[2024-11-24 14:49:23,327] Successfully got total chunks FileChunksInfo{dataSize=45640725B, totalChunks=1}.
[2024-11-24 14:49:23,328] Uploading 1 chunks to s3
[2024-11-24 14:49:23,329] Getting s3 upload urls for artifact.
[2024-11-24 14:49:23,544] Appending log line to main log.
[2024-11-24 14:49:34,600] [2a05714a-1, L:/192.168.31.254:54094 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.217.198.230:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-24 14:49:46,078] Updating runner state to "ONLINE".
[2024-11-24 14:49:46,899] [8aae7406-1, L:/192.168.31.254:54095 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.217.198.230:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-24 14:50:01,213] [6a6c8312-1, L:/192.168.31.254:54096 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/52.217.198.230:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-24 14:50:16,078] Updating runner state to "ONLINE".
[2024-11-24 14:50:19,530] [012690b4-1, L:/192.168.31.254:54098 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/54.231.132.217:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-24 14:50:19,533] Error while uploading file to s3
io.netty.handler.timeout.ReadTimeoutException: null
Wrapped by: org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint _ Request to PUT https://micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/artifact/[XXXXX]
{code}
Hi Nikita,
The timeout with the latest version of the self-hosted runners is 30 seconds. If you use the latest version of a self-hosted runner, you can configure this timeout by adjusting the preconfigured command you use to start the runner.
If you use Docker-based runners, please add this to the command that starts the runner:
-e S3_READ_TIMEOUT_SECONDS=<secondsvalue>
For Linux-Shell and MacOS runners, please add this to the command that starts the runner:
--s3ReadTimeoutSeconds <secondsvalue>
For Windows runners, please add this to the command that starts the runner:
-s3ReadTimeoutSeconds "<secondsvalue>"
Replace <secondsvalue> with a value in seconds based on how long you estimate that the artifact upload will take. E.g., you can start with 600 (equivalent of 10 minutes) and adjust it to a lower or higher value if needed.
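For example, on a Linux-Shell or macOS runner the flag is simply appended to the preconfigured start command. This is only a sketch; the other flags are placeholders for whatever your own command from the runner setup page already contains.

  # sketch of a shell runner start command with a 10-minute S3 read timeout appended
  ./start.sh \
    --accountUuid "{account-uuid}" \
    --runnerUuid "{runner-uuid}" \
    --OAuthClientId "<oauth-client-id>" \
    --OAuthClientSecret "<oauth-client-secret>" \
    --workingDirectory "../temp" \
    --s3ReadTimeoutSeconds 600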
If you still experience issues, please create a new question in community via https://community.atlassian.com/t5/forums/postpage/board-id/bitbucket-questions and we will look into it.
Kind regards,
Theodora
It's failing for us randomly and quite often on Windows runners 3.6.0.
The same thing is happening to me too on Docker Linux runners. Something throws read timeout exceptions when uploading artifacts to S3.
I hope the Bitbucket team is looking into it, because several people have posted the same errors and related issues here.
Keeps happening randomly for us on Windows runners running version 3.6.0.
We have two different runners and it fails on both.
Runner logs (redacted)
[2024-11-20 14:50:16,058] Runner version: 3.6.0
[2024-11-20 14:50:16,074] Runner runtime: windows-powershell
[2024-11-20 14:50:16,230] Starting websocket listening to RUNNER_UPDATED events.
[2024-11-20 14:50:16,480] Updating runner status to "ONLINE" and checking for new steps assigned to the runner after 0 seconds and then every 30 seconds.
(..snip!..)
[2024-11-20 14:56:56,139] Initiating artifact upload.
[2024-11-20 14:56:56,483] Successfully got total chunks FileChunksInfo{dataSize=210526316B, totalChunks=5}.
[2024-11-20 14:56:56,483] Uploading 5 chunks to s3
[2024-11-20 14:56:56,483] Getting s3 upload urls for artifact.
[2024-11-20 14:56:56,561] Appending log line to main log.
[2024-11-20 14:57:07,706] [8f7d819c-1, L:/192.168.181.120:61679 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:07,706] [c6db9d61-1, L:/192.168.181.120:61677 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:07,722] [585ebade-1, L:/192.168.181.120:61678 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:07,722] [f91a5be3-1, L:/192.168.181.120:61680 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:16,511] Updating runner state to "ONLINE".
[2024-11-20 14:57:19,921] [53301cf6-1, L:/192.168.181.120:61685 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:19,940] [7a306d5c-1, L:/192.168.181.120:61683 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:20,018] [a7f37945-1, L:/192.168.181.120:61684 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:33,920] [e639d656-2, L:/192.168.181.120:61682 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:34,154] [2b1a625c-1, L:/192.168.181.120:61686 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:34,201] [c43a056d-1, L:/192.168.181.120:61687 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.29.165:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:46,500] Updating runner state to "ONLINE".
[2024-11-20 14:57:52,167] [5ad9146b-1, L:/192.168.181.120:61688 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/16.182.106.49:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-20 14:57:52,167] Error while uploading file to s3
io.netty.handler.timeout.ReadTimeoutException: null
Wrapped by: org.springframework.web.reactive.function.client.WebClientRequestException: nested exception is io.netty.handler.timeout.ReadTimeoutException
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Request to PUT https://micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/artifact/%7B525438cd-fc42-41c3-a7d4-4d690b62348e%7D/%7B1f5598c8-b7dc-4de6-9730-554a2a0caef2%7D/%7Bf1259f2a-2a67-45d8-9874-6e57e1db8be1%7D/artifact_%7B8b035099-2417-56c3-b50a-4adbd7616d16%7D.tar.gz?partNumber=4&uploadId=f_J7nAmsVgrnWwO_ao05Qpd4OBdehz_Xi_n9oAV.gvzhuFXQluP6m_eiY3jo84LuJos1kgF96ttdb3b3Eskisrd.1igMZvkHSHXMW77DVS3LxATjE6ei2fNgNpweu9jCo.cqOPzmq1zRW1_jcF.bYw--&X-Amz-Security-Token=XXX [DefaultWebClient]
Original Stack Trace:
at org.springframework.web.reactive.function.client.ExchangeFunctions$DefaultExchangeFunction.lambda$wrapException$9(ExchangeFunctions.java:141)
at reactor.core.publisher.MonoErrorSupplied.subscribe(MonoErrorSupplied.java:55)
at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.MonoNext$NextSubscriber.onError(MonoNext.java:93)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onError(MonoFlatMapMany.java:204)
at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.whenError(FluxRetryWhen.java:225)
at reactor.core.publisher.FluxRetryWhen$RetryWhenOtherSubscriber.onError(FluxRetryWhen.java:274)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onError(FluxContextWrite.java:121)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.drain(FluxConcatMap.java:415)
at reactor.core.publisher.FluxConcatMap$ConcatMapImmediate.onNext(FluxConcatMap.java:251)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onNext(ScopePassingSpanSubscriber.java:89)
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:537)
at reactor.core.publisher.EmitterProcessor.tryEmitNext(EmitterProcessor.java:343)
at reactor.core.publisher.SinkManySerialized.tryEmitNext(SinkManySerialized.java:100)
at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27)
at reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onError(FluxRetryWhen.java:190)
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.error(MonoCreate.java:201)
at reactor.netty.http.client.HttpClientConnect$HttpObserver.onUncaughtException(HttpClientConnect.java:403)
at reactor.netty.ReactorNetty$CompositeConnectionObserver.onUncaughtException(ReactorNetty.java:700)
at reactor.netty.resources.DefaultPooledConnectionProvider$DisposableAcquire.onUncaughtException(DefaultPooledConnectionProvider.java:211)
at reactor.netty.resources.DefaultPooledConnectionProvider$PooledConnection.onUncaughtException(DefaultPooledConnectionProvider.java:464)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:247)
at reactor.netty.channel.FluxReceive.onInboundError(FluxReceive.java:468)
at reactor.netty.channel.ChannelOperations.onInboundError(ChannelOperations.java:508)
at reactor.netty.channel.ChannelOperationsHandler.exceptionCaught(ChannelOperationsHandler.java:145)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:346)
at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:325)
at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:317)
at io.netty.handler.timeout.ReadTimeoutHandler.readTimedOut(ReadTimeoutHandler.java:98)
at io.netty.handler.timeout.ReadTimeoutHandler.channelIdle(ReadTimeoutHandler.java:90)
at io.netty.handler.timeout.IdleStateHandler$ReaderIdleTimeoutTask.run(IdleStateHandler.java:525)
at io.netty.handler.timeout.IdleStateHandler$AbstractIdleTask.run(IdleStateHandler.java:497)
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:153)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:829)
Wrapped by: com.atlassian.pipelines.runner.core.exception.S3UploadException: Failed to upload chunk, part number 4
at com.atlassian.pipelines.runner.core.util.file.upload.S3MultiPartUploaderImpl.lambda$uploadChunk$16(S3MultiPartUploaderImpl.java:167)
at io.reactivex.internal.operators.single.SingleResumeNext$ResumeMainSingleObserver.onError(SingleResumeNext.java:73)
at io.reactivex.internal.operators.flowable.FlowableSingleSingle$SingleElementSubscriber.onError(FlowableSingleSingle.java:97)
at io.reactivex.subscribers.SerializedSubscriber.onError(SerializedSubscriber.java:142)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenReceiver.onError(FlowableRepeatWhen.java:112)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.checkTerminate(FlowableFlatMap.java:572)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drainLoop(FlowableFlatMap.java:379)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.drain(FlowableFlatMap.java:371)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.innerError(FlowableFlatMap.java:611)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$InnerSubscriber.onError(FlowableFlatMap.java:677)
at io.reactivex.internal.subscriptions.EmptySubscription.error(EmptySubscription.java:55)
at io.reactivex.internal.operators.flowable.FlowableError.subscribeActual(FlowableError.java:40)
at io.reactivex.Flowable.subscribe(Flowable.java:14935)
at io.reactivex.Flowable.subscribe(Flowable.java:14882)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:163)
at io.reactivex.internal.operators.flowable.FlowableDoOnEach$DoOnEachSubscriber.onNext(FlowableDoOnEach.java:92)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.tryEmitScalar(FlowableFlatMap.java:234)
at io.reactivex.internal.operators.flowable.FlowableFlatMap$MergeSubscriber.onNext(FlowableFlatMap.java:152)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipCoordinator.drain(FlowableZip.java:249)
at io.reactivex.internal.operators.flowable.FlowableZip$ZipSubscriber.onNext(FlowableZip.java:381)
at io.reactivex.processors.UnicastProcessor.drainFused(UnicastProcessor.java:362)
at io.reactivex.processors.UnicastProcessor.drain(UnicastProcessor.java:395)
at io.reactivex.processors.UnicastProcessor.onNext(UnicastProcessor.java:457)
at io.reactivex.processors.SerializedProcessor.onNext(SerializedProcessor.java:103)
at io.reactivex.internal.operators.flowable.FlowableRepeatWhen$WhenSourceSubscriber.again(FlowableRepeatWhen.java:171)
at io.reactivex.internal.operators.flowable.FlowableRetryWhen$RetryWhenSubscriber.onError(FlowableRetryWhen.java:76)
at io.reactivex.internal.operators.single.SingleToFlowable$SingleToFlowableObserver.onError(SingleToFlowable.java:67)
at io.reactivex.internal.operators.single.SingleUsing$UsingSingleObserver.onError(SingleUsing.java:175)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleMap$MapSingleObserver.onError(SingleMap.java:69)
at io.reactivex.internal.operators.single.SingleObserveOn$ObserveOnSingleObserver.run(SingleObserveOn.java:79)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.Scheduler$DisposeTask.run(Scheduler.java:608)
at brave.propagation.CurrentTraceContext$1CurrentTraceContextRunnable.run(CurrentTraceContext.java:264)
at com.atlassian.pipelines.common.trace.rxjava.CopyMdcSchedulerHandler$CopyMdcRunnableAdapter.run(CopyMdcSchedulerHandler.java:74)
at io.reactivex.internal.schedulers.ScheduledRunnable.run(ScheduledRunnable.java:66)
at io.reactivex.internal.schedulers.ScheduledRunnable.call(ScheduledRunnable.java:57)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2024-11-20 14:57:52,167] Updating step progress to PARSING_TEST_RESULTS.
[2024-11-20 14:57:52,565] Appending log line to main log.
[2024-11-20 14:57:52,862] Test report processing complete.
[2024-11-20 14:57:52,862] Updating step progress to COMPLETING_LOGS.
[2024-11-20 14:57:53,141] Shutting down log uploader.
[2024-11-20 14:57:53,141] Appending log line to main log.
[2024-11-20 14:57:53,569] Tearing down directories.
[2024-11-20 14:58:04,088] Cancelling timeout
[2024-11-20 14:58:04,103] Completing step with result Result{status=ERROR, error=Some(Error{key='runner.artifact.upload-error', message='Error occurred whilst processing an artifact', arguments={}})}.
[2024-11-20 14:58:04,431] Setting runner state to not executing step.
[2024-11-20 14:58:04,431] Waiting for next step.
[2024-11-20 14:58:04,431] Finished executing step. StepId{accountUuid={59f952ed-2097-49f3-b72b-325229b41a9f}, repositoryUuid={525438cd-fc42-41c3-a7d4-4d690b62348e}, pipelineUuid={1f5598c8-b7dc-4de6-9730-554a2a0caef2}, stepUuid={f1259f2a-2a67-45d8-9874-6e57e1db8be1}}
Same here! Is it possible to check on Bitbucket's side?
Yesterday, all of my runners ran without problems, but today I have this situation.
It seems that today is the day.... The same thing is happening to me too!
Late to the party here, but I ran into this on Windows runners I hadn't updated to 3.x. Updating the runners resolved the issue.
For us these errors only occurred after updating to 3.x (Windows Server 2019).
About 90% of the builds fail with the same error mentioned above.
I haven't checked 'debug' log-level verbosity yet.
We have pretty much ruled out the network stack and will test a fresh runner on a new VM next.
Workaround:
Remove the artifact upload from the pipeline and make the build available in a different repo; a rough sketch of one way to do this follows.
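As a rough illustration of what we mean (hypothetical names; BB_AUTH_STRING would be a repository variable holding user:app-password credentials, and the target repo is up to you), the step drops the artifacts declaration and pushes the build output itself, e.g. to the Downloads section of another repository via the Bitbucket REST API:

  - step:
      name: Build and publish output
      script:
        - ./build.sh   # produces build/output.zip
        # upload to another repo's Downloads instead of declaring a Pipelines artifact
        - curl --fail -X POST --user "$BB_AUTH_STRING" --form files=@build/output.zip "https://api.bitbucket.org/2.0/repositories/<workspace>/<other-repo>/downloads"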
@Pier van der Graaf Atlassian advises that you may need to whitelist some AWS addresses after upgrading to 3.x; that might help your pipeline? https://www.atlassian.com/blog/bitbucket/bitbucket-pipelines-runner-upgrade-required
Thanks for the suggestion, we added those earlier.
We also (temporarily) excluded the hosts from UTM on the Fortigate, disabled Windows Firewall, checked connectivity with telnet, and checked for duplicate IPs, the correct gateway, name resolution and the other usual suspects.
Thoroughly debugged the network stack, all in all; an example of the kind of connectivity check we mean is below.
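For anyone running the same checks, this is the sort of quick reachability test we used; the S3 endpoint name is taken from the runner logs above, and the cmdlet is standard PowerShell:

  # from the runner VM: can we reach the S3 endpoint the runner uploads artifacts to?
  Test-NetConnection micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com -Port 443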
I am using a Windows self-hosted runner, and keep getting `Error occurred whilst processing an artifact`.
My pipeline uploads several artifacts successfully. The last artifact is large (590 MB); the log shows no errors, but this artifact is missing from the artifacts page. The step fails in about 1 minute, so it is not hitting a 10-minute timeout. This had been working for months but recently started failing.
Bitbucket, please give us more useful errors.
Compressed files matching artifact pattern <redacted> to 597.2 MiB in 18 seconds
Uploading artifact of 597.2 MiB
Searching for test report files in directories named [failsafe-reports, TestResults, surefire-reports, test-reports, test-results] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
Hi everyone,
The timeout with the latest version of the self-hosted runners is 30 seconds. If you use the latest version of a self-hosted runner, you can configure this timeout by adjusting the preconfigured command you use to start the runner.
If you use Docker-based runners, please add this to the command that starts the runner:
-e S3_READ_TIMEOUT_SECONDS=<secondsvalue>
For Linux-Shell and MacOS runners, please add this to the command that starts the runner:
--s3ReadTimeoutSeconds <secondsvalue>
For Windows runners, please add this to the command that starts the runner:
-s3ReadTimeoutSeconds "<secondsvalue>"
Replace <secondsvalue> with a value in seconds based on how long you estimate that the artifact upload will take. E.g., you can start with 600 (equivalent of 10 minutes) and adjust it to a lower or higher value if needed.
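For example, on a Windows runner the parameter is appended to the preconfigured PowerShell start command. This is only a sketch; the other parameters are placeholders for whatever your own command from the runner setup page already contains, and 900 seconds (15 minutes) is just an estimate sized for a roughly 600 MB artifact.

  # sketch of a Windows runner start command with the S3 read timeout raised
  .\start.ps1 `
    -accountUuid "{account-uuid}" `
    -runnerUuid "{runner-uuid}" `
    -OAuthClientId "<oauth-client-id>" `
    -OAuthClientSecret "<oauth-client-secret>" `
    -workingDirectory "C:\runner\temp" `
    -s3ReadTimeoutSeconds "900"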
If you still experience issues, please create a new question in community via https://community.atlassian.com/t5/forums/postpage/board-id/bitbucket-questions and we will look into it.
Kind regards,
Theodora
Hey, if I have a pipeline that builds and pushes a Docker image, where should I place this config?
Pipeline yaml:
Hi @kairiruutel,
My previous answer applies only to Pipelines builds that run with a self-hosted runner.
Looking at your yml file, your builds run on Atlassian's infrastructure and don't use a self-hosted runner, so the solution doesn't apply to you.
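For reference, a step only runs on a self-hosted runner when it declares runs-on labels in bitbucket-pipelines.yml; without them the step runs on Atlassian's infrastructure. A hypothetical example (the labels will vary with how the runner was registered):

  pipelines:
    default:
      - step:
          name: Build on a self-hosted runner
          runs-on:
            - self.hosted
            - linux
          script:
            - ./build.sh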
If you're having issues with your build you can either create a new community question via https://community.atlassian.com/t5/forums/postpage/board-id/bitbucket-questions or, if you have a Bitbucket workspace on a paid billing plan, you can create a support ticket via https://support.atlassian.com/contact/#/.
Kind regards,
Theodora
And it magically started working again on 7 November. It must have been an issue on Bitbucket's side.