
AWS S3 Deploy always fails


Hi. I have a project that runs some automated tests with Gradle. I then want to upload the generated Serenity reports to an S3 bucket. When I limit the run to 3 tests, the pipeline finishes successfully. When I run the pipeline with all the tests (12), the step that uploads to S3 always fails, even though the entire report ends up in the bucket, which is curious. In this case the folder is about 55 MB and contains about 1,300 files.

This is my pipeline script:

image:
  name: markhobson/maven-chrome:jdk-11

pipelines:
  default:
    - step:
        name: Run acceptance tests
        caches:
          - gradle
        script:
          - LC_ALL=es_AR.UTF-8
          - LANG=es_AR.UTF-8
          - LANGUAGE=es_AR.UTF-8
          - ./gradlew clean test --no-watch-fs || true
        artifacts:
          - target/site/serenity/**
    - step:
        name: Upload report to S3
        script:
          - export S3_FOLDER=$BITBUCKET_BRANCH"/"$(date +%Y%m%d/%H%M%S)
          - export S3_FULL_PATH=$AWS_S3_BUCKET/$S3_FOLDER
          - pipe: atlassian/aws-s3-deploy:1.1.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'eu-west-1'
              S3_BUCKET: $S3_FULL_PATH
              LOCAL_PATH: 'target/site/serenity'
              ACL: 'bucket-owner-full-control'
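For illustration, this is how the two exported variables expand at run time; the branch, bucket, and timestamp below are copied from the failing run's log and will differ on other runs:

# Example expansion of the destination path (values taken from the log below).
# BITBUCKET_BRANCH is provided by Bitbucket and AWS_S3_BUCKET is a repository
# variable; both are set here only for the sake of the example.
export BITBUCKET_BRANCH=develop
export AWS_S3_BUCKET=qa.automation.halltels.com
export S3_FOLDER=$BITBUCKET_BRANCH"/"$(date +%Y%m%d/%H%M%S)
# S3_FOLDER    -> develop/20220208/133832
export S3_FULL_PATH=$AWS_S3_BUCKET/$S3_FOLDER
# S3_FULL_PATH -> qa.automation.halltels.com/develop/20220208/133832
# so the pipe uploads to s3://qa.automation.halltels.com/develop/20220208/133832/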

The user I'm using to perform the S3 upload has the AmazonS3FullAccess policy attached:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": "*"
        }
    ]
}


Here is the log that I can see in Bitbucket when the step fails:


Completed 55.0 MiB/55.2 MiB (899.7 KiB/s) with 2 file(s) remaining

upload: target/site/serenity/summary.txt to s3://qa.automation.halltels.com/develop/20220208/133832/summary.txt
Completed 55.0 MiB/55.2 MiB (899.7 KiB/s) with 1 file(s) remaining
Completed 55.2 MiB/55.2 MiB (902.4 KiB/s) with 1 file(s) remaining
upload: target/site/serenity/scripts/jquery.js to s3://qa.automation.halltels.com/develop/20220208/133832/scripts/jquery.js
x Deployment failed.

And here is an extract of the logs when running with the debug option:

Completed 53.2 MiB/53.5 MiB (784.1 KiB/s) with 2 file(s) remaining
Completed 53.3 MiB/53.5 MiB (785.1 KiB/s) with 2 file(s) remaining
upload: target/site/serenity/scripts/jquery-1.11.1.min.js to s3://qa.automation.halltels.com/feature/HTL-447/20220208/114328/scripts/jquery-1.11.1.min.js
2022-02-08 11:44:45,857 - ThreadPoolExecutor-0_9 - urllib3.connectionpool - DEBUG - https://s3.eu-west-1.amazonaws.com:443 "PUT /qa.automation.halltels.com/feature/HTL-447/20220208/114328/scripts/jquery.js HTTP/1.1" 200 0
2022-02-08 11:44:45,857 - ThreadPoolExecutor-0_9 - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': 'PHuzRX4VE4I0sz0fSECIgdGRZ89Rb/Vqm2ydHU+f+vZEIh7ni5OA4lj4/cDjoBnvMBriWM0cF3g=', 'x-amz-request-id': 'NE0K9ZE90W7XWYVA', 'Date': 'Tue, 08 Feb 2022 11:44:46 GMT', 'ETag': '"6c79d34c1f96ea4b6f870bc65d1239be"', 'Server': 'AmazonS3', 'Content-Length': '0'}
2022-02-08 11:44:45,857 - ThreadPoolExecutor-0_9 - botocore.parsers - DEBUG - Response body:
b''
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x7f5004decca0>>
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - botocore.retries.standard - DEBUG - Not retrying request.
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - botocore.hooks - DEBUG - Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f5004decd00>>
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - botocore.hooks - DEBUG - Event after-call.s3.PutObject: calling handler <function enhance_error_msg at 0x7f5005b54af0>
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - botocore.hooks - DEBUG - Event after-call.s3.PutObject: calling handler <bound method RetryQuotaChecker.release_retry_quota of <botocore.retries.standard.RetryQuotaChecker object at 0x7f5004dec7f0>>
2022-02-08 11:44:45,858 - ThreadPoolExecutor-0_9 - s3transfer.utils - DEBUG - Releasing acquire 3674/None
Completed 53.3 MiB/53.5 MiB (785.1 KiB/s) with 1 file(s) remaining
Completed 53.5 MiB/53.5 MiB (787.6 KiB/s) with 1 file(s) remaining
upload: target/site/serenity/scripts/jquery.js to s3://qa.automation.halltels.com/feature/HTL-447/20220208/114328/scripts/jquery.js
2022-02-08 11:44:45,861 - Thread-1 - awscli.customizations.s3.results - DEBUG - Shutdown request received in result processing thread, shutting down result thread.
+ status=2
+ set -e
+ [[ 2 -eq 0 ]]
+ fail 'Deployment failed.'
+ echo -e '\e[31m✖ Deployment failed.\e[0m'
✖ Deployment failed.
+ exit 1

Any help will be appreciated.

1 answer

Answer accepted

Hi team.

According to the AWS docs, exit status 2 means that one or more files marked for transfer were skipped during the transfer process, e.g. because a file does not exist or is not readable.
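
If you run into the same thing, one quick way to see which files the CLI would skip is to scan the report folder for anything the build user cannot read. A sketch using GNU find (available in most Debian-based pipeline images):

# List files that are not readable, plus directories that cannot be traversed;
# both trigger the "File/Directory not readable" warning in the aws CLI output.
find target/site/serenity ! -readable -o -type d ! -executable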

Examining my logs further, I found at least eight files with the warning "File/Directory not readable". The solution was to perform a chmod 777 on the source folder, and the pipeline then finished successfully!
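
In pipeline terms, the fix is one extra script line in the upload step, placed before the aws-s3-deploy pipe. A slightly narrower alternative to chmod 777 that should also work (sketch):

# Run in the "Upload report to S3" step, before the pipe.
# a+rX grants read on all files and execute (traversal) on directories only,
# which is all the aws CLI needs to copy the report.
chmod -R a+rX target/site/serenity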

Cheers.
