
aws-s3-deploy 1.0 breaks upload (not happening in 0.5.0)

Leonardo Medici June 29, 2021

We have a pipeline that sends our website build to an S3 bucket so the files can be served through CloudFront.

Today we tried to update our pipeline script to aws-s3-deploy 1.0, but this breaks the pipeline without a meaningful error.

Reverting to 0.5.0 fixes the issue.

Here is the relevant part of our pipeline script:

 develop:
      - step:
          name: Build
          caches:
            - node
          script:
            - npm install
            - npm run build:devel
          artifacts:
            - home/**
      - step:
          name: Deploy to S3
          deployment: Development
          script:
            - pipe: atlassian/aws-s3-deploy:1.0.0
              variables:
                S3_BUCKET: ${S3_BUCKET}/home
                LOCAL_PATH: 'home'
                ACL: 'public-read'
                EXTRA_ARGS: '--exclude=*.html'
            - pipe: atlassian/aws-s3-deploy:1.0.0
              variables:
                S3_BUCKET: ${S3_BUCKET}/home
                LOCAL_PATH: 'home'
                ACL: 'public-read'
                CACHE_CONTROL: 'max-age=600'
                EXTRA_ARGS: '--exclude=* --include=*.html'
            - pipe: atlassian/aws-s3-deploy:1.0.0
              variables:
                S3_BUCKET: ${S3_BUCKET}
                LOCAL_PATH: 'home'
                ACL: 'public-read'
                CACHE_CONTROL: 'max-age=600'
                EXTRA_ARGS: '--exclude=* --include=index.html'

And here is the IAM policy attached to the user on AWS:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::${S3_BUCKET}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectTagging",
                "s3:HeadObject",
                "s3:PutObject",
                "s3:PutObjectTagging",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::${S3_BUCKET}/*"
            ]
        }
    ]
}

Here (on Dropbox, due to the post character limit) is the full log of the failing upload step, run with the DEBUG: 'true' option set:

https://www.dropbox.com/s/aogfbqw86vy3su2/pipelineLog-3.txt?dl=0

2 answers

1 vote
Oleksandr Kyrdan
Atlassian Team
June 30, 2021

Hi @Leonardo Medici,

Thank you for your question!

With version 1.0.0 we migrated the pipe to AWS CLI v2; the pipe is now based on the Docker image amazon/aws-cli:2.2.13. This version is not officially released yet, but thank you for sharing your case.

Good explanation. If it's OK with you, we'll ask you for some more details about this case soon.

We're investigating this issue and will notify you.

 

Best regards,
Oleksandr Kyrdan

Leonardo Medici July 1, 2021

Of course, feel free to contact me if you need additional information.

Oleksandr Kyrdan
Atlassian Team
July 1, 2021

It looks like this issue relates to the changes in behavior between AWS CLI version 1 and AWS CLI version 2.

One of the reasons could be the locale.

AWS CLI version 2 now uses an environment variable to set the text file encoding:

By default, text files use the same encoding as the installed locale. To set the encoding for text files to be different from the locale, use the AWS_CLI_FILE_ENCODING environment variable. The below example sets the CLI to open text files using UTF-8 on Windows.

AWS_CLI_FILE_ENCODING=UTF-8

For more information, see Environment variables to configure the AWS CLI.

 

or others related to S3, listed in the Breaking changes – Migrating from AWS CLI version 1 to version 2 docs.

Also, it's good to check the object key naming:


When you create an object, you also specify the key name, which uniquely identifies the object in the bucket. The object key (or key name) uniquely identifies the object in an Amazon S3 bucket. For more information, see Creating object key names.

 

In the logs provided, we found multiple errors:

upload failed: <redacted file> 'utf-8' codec can't encode characters in position 92-94: surrogates not allowed
upload failed: <redacted file> 'utf-8' codec can't encode characters in position 12-13: surrogates not allowed

...

 

It would be nice if you could:
- check and share the <redacted file> names so we can find the root cause (the sketch after the example below can help locate them)
- try providing AWS_CLI_FILE_ENCODING=UTF-8 as a pipe variable:

            - pipe: atlassian/aws-s3-deploy:1.0.0
              variables:
                S3_BUCKET: ${S3_BUCKET}/home
                LOCAL_PATH: 'home'
                ACL: 'public-read'
                EXTRA_ARGS: '--exclude=*.html'
                AWS_CLI_FILE_ENCODING: 'UTF-8'
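
To locate the offending names, here is a minimal sketch (illustrative only, not part of the pipe; the script name is a placeholder) that walks a local directory and prints every entry whose name cannot be encoded as UTF-8. On Linux, Python maps undecodable filename bytes to surrogate code points (PEP 383), which is exactly what produces the "surrogates not allowed" errors above:

# find_bad_names.py - print paths whose names are not valid UTF-8
import os
import sys

def find_non_utf8_names(root):
    # os.walk yields str names; undecodable bytes show up as surrogates
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                name.encode("utf-8")  # raises on surrogate escapes
            except UnicodeEncodeError:
                print(repr(os.path.join(dirpath, name)))  # repr shows the bad chars

if __name__ == "__main__":
    find_non_utf8_names(sys.argv[1] if len(sys.argv) > 1 else ".")

You could run it as an extra script line before the pipe step, e.g. python3 find_bad_names.py home.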

 

Thank you for your contribution!

 

Best regards,
Oleksandr Kyrdan

Leonardo Medici July 1, 2021

@Oleksandr Kyrdan I tried to run the pipeline with the AWS_CLI_FILE_ENCODING variable, without success.

I don't know if the object keys are the issue; I tried to manually run the S3 sync command with

aws-cli/2.2.15 Python/3.9.5 Darwin/20.5.0 source/x86_64 prompt/off

and at least on my platform this was not a problem.

I can try to remove any strange characters from the file names, but this will take some time.

0 votes
Oleksandr Kyrdan
Atlassian Team
July 8, 2021

Hi @Leonardo Medici,

A summary after investigating the issue:


Starting from version 1.0.0, the aws-s3-deploy pipe uses AWS CLI version 2.

AWS CLI version 2 is stricter about filenames/key names: only UTF-8 is supported.

So, the official AWS S3 Creating object key names guide should be updated by AWS as follows:

Object key naming guidelines

You must (not just can) use UTF-8 characters in an object key name. However, using certain characters in key names can cause problems with some applications and protocols. The following guidelines help you maximize compliance with DNS, web-safe characters, XML parsers, and other APIs.

To solve your issue, avoid non-UTF-8 characters in object key names (filenames).

Following your question, we updated the pipe's README and released a new version:

- pipe: atlassian/aws-s3-deploy:1.0.1

 

Thank you for your contribution!

 

Cheers,
Oleksandr Kyrdan

Leonardo Medici July 8, 2021

@Oleksandr Kyrdan I was looking at your repository a few minutes ago and found the note in the latest commits.

I also found out that some of our PDF files have non-UTF-8 characters in their names; the root cause was probably a copy-paste from MS Word, which uses non-standard characters for ' " - (at least in my native language).
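
For anyone hitting the same thing, here is a sketch of the kind of cleanup involved (we fixed our files by hand; this is illustrative only, assuming the offending bytes are Windows-1252 punctuation and the rest of each name is plain ASCII):

# fix_names.py - illustrative only: rename files whose names contain
# undecodable bytes, re-interpreting those bytes as Windows-1252
import os

def fix_name(name):
    raw = name.encode("utf-8", "surrogateescape")  # recover the original bytes
    # decode as cp1252 so curly quotes/dashes become valid UTF-8;
    # this raises for the few bytes cp1252 leaves undefined
    return raw.decode("cp1252")

for dirpath, _, filenames in os.walk("home"):  # "home" is our build dir
    for name in filenames:
        try:
            name.encode("utf-8")
        except UnicodeEncodeError:
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, fix_name(name)))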

 

I changed the characters and now it's working like a charm.

Thank you for your help

