
When the aws-lambda-deploy pipe is used twice in a pipeline, the second run produces an error


I have a pipeline with 4 steps, 2 of them using the aws-lambda-deploy pipe with the update command.

The first instance of the pipe works as expected.

The second instance of the pipe also works (the function is updated correctly in AWS), but the step then fails with this error:

INFO: Update command succeeded.
INFO: Writing results to file /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/aws-lambda-deploy-env
/usr/bin/update-lambda.sh: line 22: /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/aws-lambda-deploy-env: Permission denied

 

I assume this is related to the artefact created in the previous step and to how permissions are handled on artefacts.
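If it helps to confirm that, a throwaway debug line in the script of the failing step, just before the pipe, should show what the generated file looks like when the step starts (the path is the one from the error above; this is only a check, not a fix):

    # debug only: inspect the generated pipes directory before the second pipe runs
    - ls -la /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/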

 

Here is my bitbucket-pipelines.yml

 

 

image: node:8

pipelines:
  default:
    - step:
        name: Run Tests
        caches:
          - node
        script:
          - cd ./lambda
          - npm install
          - npm test
    - step:
        name: Build lambda artefacts
        caches:
          - node
        script:
          - apt update
          - apt install zip -y
          - echo "Building lambda"
          - ./ops/build.sh lambda
        artifacts:
          - "lambda.zip"
    - step:
        name: Deploy Lambda stage
        trigger: automatic
        deployment: staging
        script:
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_STAGE
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_STAGE
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              FUNCTION_NAME: 'stage-lambda'
              COMMAND: 'update'
              ZIP_FILE: 'lambda.zip'
    - step:
        name: Deploy Lambda Production
        trigger: manual
        deployment: production
        script:
          - cat $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env
          - rm $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_PRODUCTION
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_PRODUCTION
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              FUNCTION_NAME: 'production-lambda'
              COMMAND: 'update'
              ZIP_FILE: 'lambda.zip'

3 answers

1 accepted

2 votes
Answer accepted
Alexander Zhukov
Atlassian Team
May 14, 2019

Hi Yavor. There is an issue with state sharing between pipes whose cause is still unknown. You should roll back to version 0.2.3 of the aws-lambda-deploy pipe for the time being.
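For reference, rolling back is just a matter of changing the version tag on the pipe in both deploy steps; for example, the staging step would become (same variables as in your yml, nothing else changes):

    - pipe: atlassian/aws-lambda-deploy:0.2.3
      variables:
        AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_STAGE
        AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_STAGE
        AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
        FUNCTION_NAME: 'stage-lambda'
        COMMAND: 'update'
        ZIP_FILE: 'lambda.zip'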

I have a workaround (deleting the state file between steps); I just wanted to report the issue.


Just for the sake of options, there is actually another solution: run those steps in parallel to each other, like below.

 

- parallel:
    - step:
        # lambda deploy
    - step:
        # lambda deploy

 

This way the pipes do not share the same resources, and the problem does not seem to happen.

 

In your example...

- parallel:  # add this and put all the next steps inside it
    - step:
        name: Deploy Lambda stage
        trigger: automatic
        deployment: staging
        script:
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_STAGE
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_STAGE
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              FUNCTION_NAME: 'stage-lambda'
              COMMAND: 'update'
              ZIP_FILE: 'lambda.zip'
    - step:
        name: Deploy Lambda Production
        trigger: manual
        deployment: production
        script:
          - cat $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env
          - rm $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_PRODUCTION
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY_PRODUCTION
              AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
              FUNCTION_NAME: 'production-lambda'
              COMMAND: 'update'
              ZIP_FILE: 'lambda.zip'

I am also facing the same issue with atlassian/aws-sam-deploy:0.2.3.

@Yavor Shahpasov could you please share the details of how you "delete state file between steps"? That would be helpful.

See the full pipeline below; some parts are removed for readability.

Note the rm command at the end of the Deploy Lambda Stage step. The file aws-lambda-deploy-env was the one causing the problem in my case.

image: node:8

pipelines:
  default:
    - step:
        name: Run Tests
        script:
          ...
    - step:
        name: Build lambda artefacts
        caches:
          - node
        script:
          - apt update
          - apt install zip -y
        artifacts:
          - "file.zip"
    - step:
        name: Deploy Lambda Stage
        trigger: automatic
        deployment: staging
        script:
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_STAGE
              ...
          # The artefact causes the second run to fail.
          - rm /opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes/aws-lambda-deploy-env
    - step:
        name: Deploy Lambda Production
        trigger: manual
        deployment: production
        script:
          - pipe: atlassian/aws-lambda-deploy:0.3.0
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID_PRODUCTION
              ...


It worked! Thanks a lot.

It worked, thank you
