According to Bitbucket, my bitbucket-pipelines.yml file is invalid because the deployment environment 'test' is invalid. Screenshot below:
I checked the validator and it said that my code is valid, so I'm confused as to what I am doing wrong.
My .yml file is as follows:
- step: &step-test
    deployment: test
    script:
      - export GOPATH="$HOME/go"
      - export PATH="$PATH:$GOPATH/bin"
      - go get -u golang.org/x/lint/golint
      - go mod vendor
      - golint -set_exit_status $(go list ./... | grep -v /vendor/)
      # Uncomment these test steps when there are test files available
      # - go test -short $(go list ./... | grep -v /vendor/)
      # - go test -race -short $(go list ./... | grep -v /vendor/)
- step: &step-build
    deployment: test
    script:
      - go build
- step: &step-deploy
    # set GCLOUD_PROJECT environment variable to your project ID
    # set GCLOUD_API_KEYFILE environment variable to keyfile as described here (but we aren't base64-encoding the file): https://confluence.atlassian.com/x/dm2xNQ
    name: Deploy to GCloud
    script:
      # Set up credentials
      - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
      - gcloud auth activate-service-account --key-file gcloud-api-key.json
      # Deploy app (and cron if it exists).
      - gcloud config set project $GCLOUD_PROJECT
      - gcloud --quiet --project $GCLOUD_PROJECT app deploy app.yaml
      - if [ -f cron.yaml ]; then gcloud --quiet --project $GCLOUD_PROJECT app deploy cron.yaml; fi

- step: *step-test
- step: *step-build

- step: *step-test
- step: *step-build
- step: *step-deploy
Any help would be greatly appreciated, thank you!
EDIT: I found out that it is my "step-build" that is failing.
EDIT 2: I was able to get it to run by combining my step-test and step-build stages into one step. Are we not allowed to have multiple steps with the same deployment value?
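For reference, the combined step looks roughly like this (a sketch reusing the commands from the definitions above; the anchor name is illustrative):

```
- step: &step-test-build
    deployment: test
    script:
      - export GOPATH="$HOME/go"
      - export PATH="$PATH:$GOPATH/bin"
      - go get -u golang.org/x/lint/golint
      - go mod vendor
      - golint -set_exit_status $(go list ./... | grep -v /vendor/)
      - go build
```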
According to the documentation:
Currently Bitbucket Deployments supports deploying to test, staging, and production type environments, and whichever ones you use, they must be listed in this order in each pipeline.
This isn't very well phrased, but what it means is that only one step can deploy to any given environment, and that the steps must refer to those environments in that order.
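For example, a pipeline like the following is accepted, because each environment appears at most once and in the test → staging → production order (the step names, scripts, and branch name here are placeholders):

```
pipelines:
  branches:
    master:
      - step:
          name: Run tests
          deployment: test
          script:
            - ./run-tests.sh
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - ./deploy.sh staging
      - step:
          name: Deploy to production
          deployment: production
          trigger: manual
          script:
            - ./deploy.sh production
```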
Running into the same issue. We're currently using staging to upload projects to S3. We then wanted a development deployment (I was trying to just re-use staging) and a production environment.
Obviously a development deployment type doesn't exist, and re-using the staging deployment isn't allowed either, so I had to rewrite the pipelines to use staging and production in that order. I considered using test too, but my workflow would have ended up being staging, test, production, which also broke the pipeline.
Bottom line: you can only use one of each, and they have to be in that order. I totally disagree with this. We should be able to add our own deployments and control the order in which they can be placed, rather than being forced to use test, staging and production.
The one Bitbucket deployment variable I could find that should work in various consecutive steps is `BITBUCKET_DEPLOYMENT_ENVIRONMENT` (and `BITBUCKET_DEPLOYMENT_ENVIRONMENT_UUID`).
No idea why these deployment variables shouldn't be usable in consecutive deployment steps.
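For example, a deployment step can read it like this (a minimal sketch; deploy.sh stands in for whatever your actual deploy command is):

```
- step:
    name: Deploy
    deployment: staging
    script:
      # Set automatically for steps that declare a deployment
      - echo "Deploying to $BITBUCKET_DEPLOYMENT_ENVIRONMENT"
      - ./deploy.sh "$BITBUCKET_DEPLOYMENT_ENVIRONMENT"
```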
Please share the names of the variables that are missing.
@T. Klingenberg I'm not talking about the Bitbucket-provided variables that are injected on every step; we're talking about the variables associated with "Deployments" that are defined by users.
Right now you can create variables for specific deployments, which helps reduce code duplication since you can redefine what the variable is per deployment. However, this breaks down completely and all the efficiencies are lost if you need to run another step after a deployment, since those deployment-defined variables are then gone.
It would be nice to have the ability to do something similar to parallel steps, where we could define multiple steps within a deployment phase, preserving all the user-defined variables for that deployment, or, as mentioned above, allow tagging multiple steps with a deployment type so we can reuse those variables in all associated steps.
Yeah, I thought about saving to .env files, but we usually have protected vars as well, so I'd rather keep that exposure very limited, especially considering they would then be stored in a downloadable file.
Hoping that the Bitbucket devs can workout a flow for this, it would reduce a lot of redundancies on my end :)
@Michael Russell You can always encrypt that with an encryption key shared across all pipelines if you need this on the same level as "secure" variables. There should also be existing utilities for that; unfortunately I don't have a link at hand.
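A rough sketch of that approach, assuming openssl is available in the build image and a repository variable such as ENV_PASSPHRASE holds the key (the deployment variables, scripts, and file names here are all illustrative):

```
- step:
    name: Deploy to staging
    deployment: staging
    script:
      # Persist the deployment-scoped variables, encrypted, for later steps
      - printf 'API_KEY=%s\nREGION=%s\n' "$API_KEY" "$REGION" > vars.env
      - openssl enc -aes-256-cbc -pbkdf2 -pass env:ENV_PASSPHRASE -in vars.env -out vars.env.enc
      - rm vars.env
      - ./deploy.sh
    artifacts:
      - vars.env.enc
- step:
    name: Post-deploy tasks
    script:
      # Recover the variables saved by the deployment step
      - openssl enc -d -aes-256-cbc -pbkdf2 -pass env:ENV_PASSPHRASE -in vars.env.enc -out vars.env
      - set -a; . ./vars.env; set +a
      - ./post-deploy.sh
```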
IMHO it kind of makes sense to only have one step for a deployment. Not suggesting this is it but there is also some value in my eyes to not have the deployment triggering too complicated. Perhaps use a deployment image of your own that ships with all needed utilities for your deployment steps.
I completely agree with:
there is also some value in my eyes to not have the deployment triggering too complicated
Which is why I'd like to see an option to either allow multiple steps to be associated with the same deployment (sharing the env vars and the locked-down user triggers, which we use in our pro plan) or a way to nest steps under a deployment umbrella that has the same effect.
All the solutions available at present over-complicate this. Instead of having to resort to more complicated methods, such as setting up a key vault to encrypt env vars/values into artifacts and then decrypt them in later steps, or building out custom images for everything and embedding all your API keys/tokens/passwords, I see value in multi-step deployments alongside single-step deployments that are very simple.
I'd rather not make one super complicated step that does it all; I prefer multiple, repeatable, testable steps that can build upon each other and be moved around, along with keeping the yml very human-readable and predictable.
Perhaps use a deployment image of your own that ships with all needed utilities for your deployment steps.
We've built out lots of custom images that we use for deployments, and we've built Pipes with tests that run on every update we make to them, with proper release cycles. However, even with all this, we still can't lock down env variables to specific deployments, since many of these deployments happen across multiple steps in most of our projects. The key to keeping Pipes/custom images flexible, secure, and testable is to not embed API keys/tokens/etc. within those images, but to have them rely on those values being passed in at execution time, which brings me back to how it'd be nice to have a way to associate multiple steps with a deployment, alongside the simple one-step deployment :)
I have not yet finished the pipes feature in my local pipelines runner, otherwise it should be possible to run a single step that runs other pipelines. As the environment is shared across all of these (IIRC), this should then work. Which brings me to the idea of creating a pipe for running pipelines.
I find it a bit weird that I cannot repeat the `deployment` variable in my step.
I had the following definitions:
- step: &build
    caches:
      - node
    script:
      - npm ci
      - npm run aws:config
      - npm run build
      - npm run export
- step: &deploy
    caches:
      - node
    script:
      - npm run deploy
and then in my pipeline I had:
- step:
    <<: *build
    name: Build for development
    deployment: development
- step:
    <<: *deploy
    name: Deploy to development
    deployment: development
This would trigger the following error:
The deployment environment 'development' in your bitbucket-pipelines.yml file occurs multiple times in the pipeline. Please refer to our documentation for valid environments and their ordering.
But I need the environment in my Build step to access the AWS services for the aws:config script, and I need the deployment in my deploy step so that the AWS CDK knows where to deploy to (the secret and access keys are set in Bitbucket deployment environment variables).
So I find this a bit weird; I now have to create a build and a build-deploy definition, which kind of defeats the purpose of having separate definitions.
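A combined definition along these lines (reusing the npm scripts from the definitions above; the anchor name is illustrative) is what the restriction forces you into:

```
- step: &build-deploy
    caches:
      - node
    script:
      - npm ci
      - npm run aws:config
      - npm run build
      - npm run export
      - npm run deploy

# ...and then referenced once per environment:
- step:
    <<: *build-deploy
    name: Build and deploy to development
    deployment: development
```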
@Igor Ludgero Miura Sure, I was not aware of these specific deployment variables. They should be available; at the very least it is very unexpected to have them *not* available in further steps if the deployment pipeline allows multiple steps, so to say. I hope Atlassian support is able to give this some traction. This is perhaps an easy, quick and safe improvement.
I'm in the same boat here. I need to be able to use 2 different image types across 3 steps for each of test, staging, and production. I can either make my own image, or always install the AWS cli on every deployment, neither of which is ideal.
This seems so arbitrary.
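For reference, the "install it in the single allowed deployment step" option ends up looking something like this (the image, install command, build commands, and bucket variable are all assumptions):

```
- step:
    name: Deploy to staging
    deployment: staging
    image: node:18
    script:
      # Forced to install the AWS CLI here, because the deployment
      # cannot be split across a second step with a different image
      - apt-get update && apt-get install -y awscli
      - npm ci && npm run build
      - aws s3 sync ./dist "s3://$S3_BUCKET"
```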
I'm facing the same issue. My requirement is as follows.
For production environment deployment, I've split the pipeline into two steps: the first step checks and shows the files to be deployed, and the second step actually deploys those files. It is separated into two because I've set the second step to manual, so after confirming the files to be deployed, I run the second step manually.
But this pipeline limitation is not allowing me to do so, as the deployment keyword can't be repeated, per their error.
So, is there any update regarding this Bitbucket Pipelines limitation? Or can anyone help me do the same in some other way?
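One way to stay within the limitation, if only the actual deploy needs the deployment environment, is to keep the dry-run as a plain step and mark only the second, manual step as the production deployment (the commands and artifact name here are placeholders):

```
- step:
    name: Show files to be deployed (dry run)
    script:
      - ./deploy.sh --dry-run
    artifacts:
      - deploy-manifest.txt
- step:
    name: Deploy to production
    deployment: production
    trigger: manual
    script:
      - ./deploy.sh
```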