Hi there, I recently followed the examples in https://bitbucket.org/atlassian/trigger-pipeline/src/master/ to trigger one pipeline from another.
The trigger successfully calls the second pipeline (and the first pipeline finishes), but then the second pipeline instantly fails with a 'Configuration Error':
Response Summary: HttpResponseSummary{httpStatusCode=500, httpStatusMessage=Internal Server Error, bodyAsString={"code":500,"message":"There was an error processing your request. It has been logged (ID f8fab38297b09cfc)."}} (command VARIABLE_SERVICE_CREATE_PIPELINE_VARIABLE, error key='unexpected.response.body')
The first bitbucket-pipelines.yml:
options:
  docker: true
pipelines:
  default:
    - step:
        name: Build and Push Docker Image
        script:
          - if [[ -z "$DOCKER_USER" ]]; then echo "DOCKER_USER is not set"; exit 1; fi
          - if [[ -z "$DOCKER_PASSWORD" ]]; then echo "DOCKER_PASSWORD is not set"; exit 1; fi
          - export IMG_NAME=$DOCKER_USER/pipeline-img-test:$BITBUCKET_COMMIT
          - docker build -t $IMG_NAME .
          - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
          - docker push $IMG_NAME
    - step:
        name: Trigger deployment pipeline
        script:
          - if [[ -z "$BITBUCKET_USERNAME" ]]; then echo "BITBUCKET_USERNAME is not set"; exit 1; fi
          - if [[ -z "$BITBUCKET_APP_PASSWORD" ]]; then echo "BITBUCKET_APP_PASSWORD is not set"; exit 1; fi
          - pipe: atlassian/trigger-pipeline:4.3.1
            variables:
              BITBUCKET_USERNAME: "$BITBUCKET_USERNAME"
              BITBUCKET_APP_PASSWORD: "$BITBUCKET_APP_PASSWORD"
              REPOSITORY: "pipeline-deploy-test"
              CUSTOM_PIPELINE_NAME: "deploy"
              PIPELINE_VARIABLES: >
                [{
                  "key": "IMG_NAME",
                  "value": "$IMG_NAME",
                  "secured": true
                },
                {
                  "key": "COMMIT_SHA",
                  "value": "$BITBUCKET_COMMIT",
                  "secured": true
                }]
              WAIT: "false"
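(A quick way to sanity-check a pipe's inputs is to echo the expanded payload in the step's script just before the pipe runs. Worth remembering that an unset shell variable expands to an empty string rather than failing; a minimal sketch with illustrative values:)

```shell
# Unset variables expand to "" instead of raising an error, so a
# payload can silently carry empty values into the pipe.
unset IMG_NAME
PAYLOAD="[{\"key\": \"IMG_NAME\", \"value\": \"$IMG_NAME\", \"secured\": true}]"
echo "$PAYLOAD"   # -> [{"key": "IMG_NAME", "value": "", "secured": true}]
```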
The second bitbucket-pipelines.yml:
pipelines:
  custom:
    deploy:
      # REQUIRED ENVIRONMENT VARIABLES:
      # PIPELINES_AWS_ACCESS_KEY_ID
      # PIPELINES_AWS_SECRET_ACCESS_KEY
      # PRODUCTION_CLUSTER_1_NAME
      # PRODUCTION_CLUSTER_1_REGION
      # COMMIT_SHA
      # IMG_NAME
      - step:
          name: Deploy to production cluster
          script:
            - if [[ -z "$PIPELINES_AWS_ACCESS_KEY_ID" ]]; then echo "PIPELINES_AWS_ACCESS_KEY_ID is not set"; exit 1; fi
            - if [[ -z "$PIPELINES_AWS_SECRET_ACCESS_KEY" ]]; then echo "PIPELINES_AWS_SECRET_ACCESS_KEY is not set"; exit 1; fi
            - if [[ -z "$PRODUCTION_CLUSTER_1_NAME" ]]; then echo "PRODUCTION_CLUSTER_1_NAME is not set"; exit 1; fi
            - if [[ -z "$PRODUCTION_CLUSTER_1_REGION" ]]; then echo "PRODUCTION_CLUSTER_1_REGION is not set"; exit 1; fi
            - if [[ -z "$COMMIT_SHA" ]]; then echo "COMMIT_SHA is not set"; exit 1; fi
            - if [[ -z "$IMG_NAME" ]]; then echo "IMG_NAME is not set"; exit 1; fi
            # setup AWS credentials
            - export AWS_ACCESS_KEY_ID=${PIPELINES_AWS_ACCESS_KEY_ID}
            - export AWS_SECRET_ACCESS_KEY=${PIPELINES_AWS_SECRET_ACCESS_KEY}
            # setup kubeconfig. FUTURE: apply to all production clusters
            - export PRODUCTION_CLUSTER_NAME=${PRODUCTION_CLUSTER_1_NAME}
            - export PRODUCTION_CLUSTER_REGION=${PRODUCTION_CLUSTER_1_REGION}
            - aws eks update-kubeconfig --name ${PRODUCTION_CLUSTER_NAME} --region ${PRODUCTION_CLUSTER_REGION} --kubeconfig ./kubeconfig
            # substitute the config's fields & apply the config
            - cat deployment-template.yaml | envsubst > deployment.yaml
            - kubectl apply -f deployment.yaml --kubeconfig ./kubeconfig
Can anyone see any issues with my config, or is this an internal Atlassian issue?
Thank you for any help,
Tom
The problem was with my setup, compounded by an unhelpful error message. I didn't realise that environment variables are not persistent across steps, so I needed to add the line
- export IMG_NAME=$DOCKER_USER/pipeline-img-test:$BITBUCKET_COMMIT
to the step titled 'Trigger deployment pipeline'. This now works.
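For anyone hitting the same thing: each step runs in a fresh container, so the trigger step has to re-derive IMG_NAME itself. A sketch of the corrected step (pipe variables unchanged from the original config):

```yaml
- step:
    name: Trigger deployment pipeline
    script:
      - if [[ -z "$BITBUCKET_USERNAME" ]]; then echo "BITBUCKET_USERNAME is not set"; exit 1; fi
      - if [[ -z "$BITBUCKET_APP_PASSWORD" ]]; then echo "BITBUCKET_APP_PASSWORD is not set"; exit 1; fi
      # Re-derive IMG_NAME here: the export from the build step does not
      # survive into this step's container.
      - export IMG_NAME=$DOCKER_USER/pipeline-img-test:$BITBUCKET_COMMIT
      - pipe: atlassian/trigger-pipeline:4.3.1
        variables:
          BITBUCKET_USERNAME: "$BITBUCKET_USERNAME"
          BITBUCKET_APP_PASSWORD: "$BITBUCKET_APP_PASSWORD"
          REPOSITORY: "pipeline-deploy-test"
          CUSTOM_PIPELINE_NAME: "deploy"
          PIPELINE_VARIABLES: >
            [{"key": "IMG_NAME", "value": "$IMG_NAME", "secured": true},
             {"key": "COMMIT_SHA", "value": "$BITBUCKET_COMMIT", "secured": true}]
          WAIT: "false"
```

An alternative is to write the value to a file and share it between steps as an artifact, but re-exporting is the simplest fix here.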