REPEAT: What does "Error occurred whilst processing an artifact" mean? AGAIN!

Felipe Rodriguez
Contributor
November 19, 2024

Hello team,

 

Today I'm getting the same message as the colleague in this thread:

https://community.atlassian.com/t5/Bitbucket-questions/What-does-quot-Error-occurred-whilst-processing-an-artifact-quot/qaq-p/2860345?utm_source=dm&utm_medium=unpaid-social&utm_campaign=P:online*O:community*I:social_share*  

Another user commented the same thing there today, so I don't know if it's a coincidence.

I have looked at several S3 limitation guides and I am within the GB limits. No changes have been made to the firewall rules, and everything worked until today.

The logs don't tell me anything conclusive either. If I could pass them to someone on the Atlassian team, that would be great.

 

Thanks, 

5 answers

1 vote
Syahrul
Atlassian Team
November 21, 2024

Hi everyone,

We recently adjusted the runner timeout to 10 seconds, which we believe was the main reason some artifact uploads failed on slower connections. We have released a new runner version, 3.7.0, to increase the timeout. Please upgrade to this version and let us know how it works.
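
If you run Linux Docker runners yourself, here is a minimal sketch of the upgrade. The container name "runner" and the registration values are placeholders; reuse the exact docker run command from your runner's setup page in Bitbucket (which may include additional mounts and variables), changing only the image tag:

# Pull the new runner image; the tag is the runner version.
docker pull docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:3.7.0

# Stop and remove the existing runner container.
docker stop runner && docker rm runner

# Re-run your original setup command with the tag changed to :3.7.0, e.g.:
docker run -d --name runner \
  -e ACCOUNT_UUID="{your-workspace-uuid}" \
  -e RUNNER_UUID="{your-runner-uuid}" \
  -e OAUTH_CLIENT_ID="your-oauth-client-id" \
  -e OAUTH_CLIENT_SECRET="your-oauth-client-secret" \
  docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:3.7.0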

Regards,
Syahrul

Tiago Jesus
Contributor
November 22, 2024 edited

Hi, 

I tested on a few runners and the issue persists.

My connection isn't slow, but my artifacts are larger than 50 MB. The same thing happens when uploading caches; for example, the Maven repository cache either fails to upload or takes too long.

 

Small artifacts work without any problem, and runners before version 3 handle artifact and cache uploads very well.

 

I downgraded all of my runners to version 3.1.0 and, for now, it works fine. I'll run more pipelines to check that it's stable, but it looks that way.

 

My runner logs on v3.7.0:

[2024-11-22 11:22:51,601] [d8d72309-1, L:/172.17.0.8:41398 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-22 11:22:51,605] [d49c5aa7-1, L:/172.17.0.8:41372 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
[2024-11-22 11:22:51,606] [a590d42e-1, L:/172.17.0.8:41362 - R:micros--prod-east--bitbucketci-file-service--files.s3.amazonaws.com/3.5.12.192:443] The connection observed an error
io.netty.handler.timeout.ReadTimeoutException: null
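
In case it helps anyone compare, here is a quick sketch for checking which image a runner is actually using and how many of these timeouts it has hit, assuming the runner is a Docker container (the name "runner" is a placeholder):

# Show the image tag the runner container is running.
docker ps --filter name=runner --format '{{.Image}}'

# Count S3 read timeouts in the runner's logs.
docker logs runner 2>&1 | grep -c ReadTimeoutException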

0 votes
Tiago Jesus
Contributor
November 20, 2024 edited

I needed to downgrade all my runners to version 2 to get them working again! Is there any update on this?

0 votes
Tiago Jesus
Contributor
November 19, 2024

I reverted some runners to version 2 (image: docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:2), and they work well.

So my point is that the new version may have some issues.
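
If anyone else wants to apply the same workaround, here is a sketch of the downgrade. The container name is a placeholder; re-run your original runner setup command with only the image tag changed:

# Pull the version 2 runner image referenced above.
docker pull docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:2

# Stop and remove the current runner container, then re-run your original
# setup command with the tag changed to :2.
docker stop runner && docker rm runner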

0 votes
Tiago Jesus
Contributor
November 19, 2024

No changes were made to our firewalls or rules. Yesterday I updated all the runners and everything worked fine, but today has been a struggle.

0 votes
Tiago Jesus
Contributor
November 19, 2024 edited

We need an update on this! All of our runners and deployments are failing because of this!

DEPLOYMENT TYPE: Cloud
PRODUCT PLAN: Standard
PERMISSIONS LEVEL: Product Admin, Site Admin