Weird issue when running pipeline on tags

So I was running a pipeline targeting a specific tag pattern (v*), and it was working fine until yesterday, when I started getting this error:

The 'script' section in your bitbucket-pipelines.yml file must be a list of strings. Please refer to our documentation to correctly format the 'script' section.


Now here's the funny thing - master builds just fine, and when I check out the tag and create a branch from it, that branch builds fine too. It's only when the pipeline runs against a tag that it says the yml is invalid.
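To be precise, the checkout-a-tag-then-branch test I mean is roughly this (the tag and branch names here are just examples):

# hypothetical tag name, just to illustrate the check
git checkout v1.2.3
git checkout -b test-from-tag
git push -u origin test-from-tag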

I've attached the yml below:

# build for $APPNAME main - downloads from s3 the artifacts then build all together
# also tries to save resources to s3 - while build
image: copyright/docker-node-php-npm-aws

pipelines:
  custom:
    deploy-to-uat:
      - step:
          script:
            - export SET APPLICATION_NAME=$APPNAME

            # uat requires master
            - path="s3://$S3_BUCKET/$APPNAME-$LIB/master.tar.gz"
            - aws s3 cp $path ./vendor.tar.gz
            - tar -xzf ./vendor.tar.gz ./
            - rm -f ./vendor.tar.gz
            - mv ./vendor/composer.json ./composer.json
            - composer dump-autoload

            # Run the unit tests with bootstrap
            - DATE=`date '+%Y-%m-%d %H:%M:%S'`
            - TODAY=`date '+%Y-%m-%d'`
            - mkdir -p ./coverage/$TODAY
            - mv ./config/.test.env ./.env
            - mv ./config/.test.manifest.json ./config/manifest.json
            - phpunit ./tests/ --bootstrap ./app/bootstrap-unit-test.php --testdox-html "./coverage/$BITBUCKET_BRANCH/$TODAY/$DATE.html" --debug -vvv
            - rm -f ./.env
            - rm -f ./config/manifest.json

            # upload results to build artifacts
            - aws s3 sync ./coverage "s3://$S3_BUCKET/_build/$APPLICATION_NAME/coverage"

            # Write the commit hash and branch names for reference
            - echo "$BITBUCKET_BRANCH" >> .branch
            - echo "$BITBUCKET_COMMIT" >> .revision

            # install npm and upload to s3
            - node --version
            - npm install
            # then build
            - npm run build
            # show me da money!
            - less ./config/config.json
            # ls out the files
            - tree ./build/
            # aws send over to s3
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./build/ s3://cdn.$CLSITE/$APPLICATION_NAME/
            # also other stuff under web...
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./web/ s3://cdn.$CLSITE/$APPLICATION_NAME/ --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"
            # fonts
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./resources/fonts s3://cdn.$CLSITE/$APPLICATION_NAME/fonts --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"

            # delete evidence before zipping
            - rm -rf ./build
            - rm -rf ./coverage
            - rm -rf ./node_modules

            # roll the tar ball ready for deployment
            - rm -rf .git
            # rebuild UAT artifacts with UAT provision/env here - on master only
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF-dev/$APPNAME.codedeploy/appspec.yml" appspec.yml
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF-dev/$APPNAME.codedeploy/pre-deploy.sh" pre-deploy.sh
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF-dev/$APPNAME.codedeploy/post-deploy.sh" post-deploy.sh

            # I HATE WINDOWS, you hear?
            - dos2unix appspec.yml
            - dos2unix pre-deploy.sh
            - dos2unix post-deploy.sh

            - tar -czvf /tmp/UAT.tar.gz ./
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp /tmp/UAT.tar.gz "s3://$UAT_S3_BUCKET/$APPLICATION_NAME/UAT.tar.gz"
            # Post to slack that the branch has been pushed to UAT
            - PAYLOAD="payload={\"text\":\"$APPLICATION_NAME:Branch <https://bitbucket.org/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/branch/$BITBUCKET_BRANCH|$BITBUCKET_BRANCH> is now on UAT\", \"icon_url\":\"https://a.slack-edge.com/12b5a/plugins/bitbucket/assets/service_36.png\"}"
            - echo $PAYLOAD
            - curl -X POST --data-urlencode "${PAYLOAD}" $SLACK_WEBHOOK_URL

  tags:
    v*:
      - step:
          script:
            - export SET APPLICATION_NAME=$APPNAME

            # path requires master $LIB
            - path="s3://$S3_BUCKET/$APPNAME-$LIB/master.tar.gz"
            - aws s3 cp $path ./vendor.tar.gz
            - tar -xzf ./vendor.tar.gz ./
            - rm -f ./vendor.tar.gz
            - mv ./vendor/composer.json ./composer.json
            - composer dump-autoload

            # Run the unit tests with bootstrap
            - DATE=`date '+%Y-%m-%d %H:%M:%S'`
            - TODAY=`date '+%Y-%m-%d'`
            - mkdir -p ./coverage/$TODAY
            - mv ./config/.test.env ./.env
            - mv ./config/.test.manifest.json ./config/manifest.json
            # - phpunit ./tests/ --bootstrap ./app/bootstrap-unit-test.php --testdox-html "./coverage/$BITBUCKET_TAG/$TODAY/$DATE.html" --debug -vvv
            - rm -f ./.env
            - rm -f ./config/manifest.json

            # upload results to build artifacts
            - aws s3 sync ./coverage "s3://$S3_BUCKET/_build/$APPLICATION_NAME/coverage"

            # Write the commit hash and branch names for reference
            # - echo "$BITBUCKET_TAG" >> .branch
            - echo "$BITBUCKET_COMMIT" >> .revision

            # install npm and upload to s3
            - node --version
            - npm install
            # then build
            - npm run build
            # show me da money!
            - less ./config/config.json
            # ls out the files
            - tree ./build/
            # aws send over to s3
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./build/ s3://cdn.$CLSITE/$APPLICATION_NAME/
            # also other stuff under web...
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./web/ s3://cdn.$CLSITE/$APPLICATION_NAME/ --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"
            # fonts
            - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./resources/fonts s3://cdn.$CLSITE/$APPLICATION_NAME/fonts --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"

            # delete evidence before zipping
            - rm -rf ./build
            - rm -rf ./coverage
            - rm -rf ./node_modules

            # roll the tar ball ready for deployment
            - rm -rf .git
            # rebuild UAT artifacts with UAT provision/env here - on master only
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF/$APPNAME.codedeploy/appspec.yml" appspec.yml
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF/$APPNAME.codedeploy/pre-deploy.sh" pre-deploy.sh
            - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp "s3://$SERVERCONF/$APPNAME.codedeploy/post-deploy.sh" post-deploy.sh

            # I HATE WINDOWS, you hear?
            - dos2unix appspec.yml
            - dos2unix pre-deploy.sh
            - dos2unix post-deploy.sh

            - tar -czvf /tmp/PRODUCTION.tar.gz ./
            - aws s3 cp /tmp/PRODUCTION.tar.gz "s3://$S3_BUCKET/$APPLICATION_NAME/PRODUCTION.tar.gz"

            # - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 cp /tmp/PRODUCTION.tar.gz "s3://$UAT_S3_BUCKET/$APPLICATION_NAME/$BITBUCKET_TAG.tar.gz"
            # Post to slack that the branch has been pushed to UAT
            # - PAYLOAD="payload={\"text\":\"$APPLICATION_NAME:Branch <https://bitbucket.org/$BITBUCKET_REPO_OWNER/$BITBUCKET_REPO_SLUG/branch/$BITBUCKET_TAG|$BITBUCKET_TAG> is now on production & UAT\", \"icon_url\":\"https://a.slack-edge.com/12b5a/plugins/bitbucket/assets/service_36.png\"}"
            - echo $PAYLOAD
            - curl -X POST --data-urlencode "${PAYLOAD}" $SLACK_WEBHOOK_URL

            # Post to sentry so they know we have a new release...
            - curl https://sentry.io/api/hooks/release/builtin/244059/6f30ca184f16032f0515b70836268b06cb25b6b9bc9a188e321e2129ea1a13ad/ -X POST -H 'Content-Type: application/json' -d '{"version":"$BITBUCKET_TAG"}'

  default:
    - step:
        script:
          - export SET APPLICATION_NAME=$APPNAME

          # download composer dependency by using s3 (yay!)
          # check if current branch has vendor - if not, use master
          - path="s3://$S3_BUCKET/$APPNAME-$LIB/$BITBUCKET_BRANCH.dev.tar.gz"
          - count=`s3cmd ls $path | wc -l`
          - 'if [[ $count -eq 0 ]]; then path="s3://$S3_BUCKET/$APPNAME-$LIB/master.tar.gz"; fi'
          - echo "downloading from $path"
          - aws s3 cp $path ./vendor.tar.gz
          - tar -xzf ./vendor.tar.gz ./
          - rm -f ./vendor.tar.gz
          - tree ./vendor -L 1
          - mv ./vendor/composer.json ./composer.json
          - composer dump-autoload

          # Run the unit tests with bootstrap
          - DATE=`date '+%Y-%m-%d %H:%M:%S'`
          - TODAY=`date '+%Y-%m-%d'`
          - mkdir -p ./coverage/$BITBUCKET_BRANCH/$TODAY
          - mv ./config/.test.env ./.env
          - mv ./config/.test.manifest.json ./config/manifest.json
          - phpunit ./tests/ --bootstrap ./app/bootstrap-unit-test.php --testdox-html "./coverage/$BITBUCKET_BRANCH/$TODAY/$DATE.html" --debug -vvv
          - rm -f ./.env
          - rm -f ./config/manifest.json

          # Write the commit hash and branch names for reference
          - echo "$BITBUCKET_BRANCH" >> .branch
          - echo "$BITBUCKET_COMMIT" >> .revision

          # install npm and upload to s3
          - mkdir -p ./build/js
          - mkdir -p ./build/css
          - mkdir -p ./build/manifest
          - node --version
          - npm install
          # then build
          - npm run build
          # ls out the files
          - tree ./build/
          - tree ./web/js
          - tree ./web/css
          # aws send over to s3
          - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./build/ s3://cdn.$CLSITE/$APPLICATION_NAME/
          # other files too
          - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./web/ s3://cdn.$CLSITE/$APPLICATION_NAME/ --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"
          # fonts
          - AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY aws s3 sync ./resources/fonts s3://cdn.$CLSITE/$APPLICATION_NAME/fonts --exclude="*.svn/*" --exclude="css/*" --exclude="js/*" --exclude=".htaccess" --exclude="index.php"

          # upload results to build artifacts
          - AWS_ACCESS_KEY_ID=$AWS_UAT_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_UAT_SECRET_ACCESS_KEY aws s3 sync ./coverage "s3://$ARTIFACT-dev/_build/$APPLICATION_NAME/coverage"

          # delete evidence before zipping
          - rm -rf ./build
          - rm -rf ./coverage
          - rm -rf ./node_modules

          # roll the tar ball ready for deployment
          - rm -rf .git
          - tar -czvf /tmp/artifact.tar.gz ./
          - aws s3 cp /tmp/artifact.tar.gz "s3://$S3_BUCKET/$APPLICATION_NAME/$BITBUCKET_BRANCH.tar.gz"

          # done
          - echo "$BITBUCKET_BRANCH is now on s3"

 

1 answer

1 accepted


Anyway, I couldn't make it work, so instead I created three .sh files and changed my yml to:

 

pipelines:
  custom:
    deploy-to-uat:
      - step:
          script:
            - export SET APPLICATION_NAME=pedro
            - cp ./bitbucket-builds/uat.sh ./build.sh
            - dos2unix ./build.sh
            - chmod +x ./build.sh
            - ./build.sh

  tags:
    v*:
      - step:
          script:
            - export SET APPLICATION_NAME=pedro
            # download from s3 and run
            - cp ./bitbucket-builds/tag.sh ./build.sh
            - dos2unix ./build.sh
            - chmod +x ./build.sh
            - ./build.sh

  default:
    - step:
        script:
          - export SET APPLICATION_NAME=pedro
          - cp ./bitbucket-builds/default.sh ./build.sh
          - dos2unix ./build.sh
          - chmod +x ./build.sh
          - ./build.sh
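The scripts are basically the inline commands moved into files; roughly, bitbucket-builds/tag.sh ends up looking something like this (trimmed, illustrative only, reusing the same variables as the original yml):

#!/usr/bin/env bash
# hypothetical trimmed version of bitbucket-builds/tag.sh - not the actual file
# APPLICATION_NAME and the AWS/S3 variables are exported by the pipeline step/environment
set -e

# pull the prebuilt vendor dir from s3 and restore composer autoloading
aws s3 cp "s3://$S3_BUCKET/$APPNAME-$LIB/master.tar.gz" ./vendor.tar.gz
tar -xzf ./vendor.tar.gz
rm -f ./vendor.tar.gz
mv ./vendor/composer.json ./composer.json
composer dump-autoload

# front-end build and CDN sync
npm install
npm run build
AWS_ACCESS_KEY_ID=$AWS_CDN_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_CDN_SECRET_ACCESS_KEY \
  aws s3 sync ./build/ "s3://cdn.$CLSITE/$APPLICATION_NAME/"

# package and upload the production artifact
rm -rf ./build ./coverage ./node_modules .git
tar -czvf /tmp/PRODUCTION.tar.gz ./
aws s3 cp /tmp/PRODUCTION.tar.gz "s3://$S3_BUCKET/$APPLICATION_NAME/PRODUCTION.tar.gz"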
