I have a deployment script that tries to push to AWS EKS. All the variables are provisioned, and the build and push to the ECR container registry work fine.
I use the following step in the pipeline:

```yaml
name: Deploy to staging area (AWS Production cluster)
trigger: manual # Uncomment to make this a manual deployment.
script:
  - pipe: atlassian/aws-eks-kubectl-run:1.3.1
    variables:
      WITH_DEFAULT_LABELS: "False" # Optional
```
The script fails with the errors below.
```
✔ Successfully updated the kube config.
Traceback (most recent call last):
  File "/pipe.py", line 42, in <module>
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 112, in run
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 77, in handle_apply
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 31, in update_labels_in_metadata
```
@Brad Vrabete check out our new version, aws-eks-kubectl-run:1.4.0, passing:

```yaml
- pipe: atlassian/aws-eks-kubectl-run:1.4.0
```

Looking forward to your feedback.
Thanks for helping us improve!
It works, with just one caveat that you might want to take a look at.
By default, labels are applied to the objects created by the kubectl command. However, if the branch name ends in '-' (and probably any other non-alphanumeric character), these labels are rejected by Kubernetes and the command fails. I had to disable the default labels (using WITH_DEFAULT_LABELS: "False") to get past that. Not a show-stopper.
(The branch was automatically generated from Jira, in case you are wondering why I would use such a name: feature/ALAPP-15-deploy-an-instance-in-)
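The rejection can be reproduced with the label-value rules from the Kubernetes documentation: a value must be at most 63 characters and, if non-empty, must begin and end with an alphanumeric character. A minimal sketch (the function name is mine, not from the pipe):

```python
import re

# Kubernetes label-value rules: at most 63 characters; if non-empty, the
# value must start and end with [A-Za-z0-9], with '-', '_' and '.' allowed
# only in between. '/' is not allowed anywhere in a label value.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def is_valid_label_value(value: str) -> bool:
    """Return True if `value` is acceptable as a Kubernetes label value."""
    return len(value) <= 63 and bool(LABEL_VALUE_RE.match(value))

# The Jira-generated branch name fails twice over: it contains '/'
# and it ends in '-'.
print(is_valid_label_value("feature/ALAPP-15-deploy-an-instance-in-"))  # False
print(is_valid_label_value("ALAPP-15-deploy"))                          # True
```

So any pipe that copies a branch name into a label verbatim will fail on such branches unless it sanitizes the value first.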
@Brad Vrabete hello!
Can you tell us whether you use our pipe as a way to apply a kustomization?

```
kubectl apply -k <kustomization dir>
```

If yes, note that this file can have a different format, and we may support this in the future as well.
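For context, `apply -k` expects a kustomization.yaml in the target directory that lists the manifests to build, rather than applying plain manifests directly. A minimal sketch (path and file names are illustrative):

```yaml
# k8s/deploy/staging/kustomization.yaml (illustrative path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - service.yml
```

This is why a pipe that only validates plain `*.yml` manifests can trip over a kustomization directory.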
@Brad Vrabete yes, the docs say the `-k` flag is for the kustomization feature, which as I understand it is quite different from what `-f` does.
Here I just want to understand your use case, what exactly you want to do with the `apply` command, because as I see from the docs it can be used for different purposes.
I think we may support kustomization in a future release, perhaps in the same pipe. I will notify you once the changes are rolled out.
Is there a list with all of these?
>> We support the apply command separately, and any other command works in this pipe as well. For that you just pass, for example, KUBECTL_COMMAND: 'autoscale' together with the proper KUBECTL_ARGS (check out our docs, section Variables: https://bitbucket.org/atlassian/aws-eks-kubectl-run/src/master/README.md).
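A sketch of such a step, using the variable names from the pipe's README; the region, cluster name, and autoscale arguments are placeholders:

```yaml
- pipe: atlassian/aws-eks-kubectl-run:1.4.0
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_DEFAULT_REGION: "us-east-1"          # placeholder
    CLUSTER_NAME: "my-eks-cluster"           # placeholder
    KUBECTL_COMMAND: "autoscale"
    KUBECTL_ARGS: "deployment my-app --min=2 --max=5"  # placeholder
```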
@Brad Vrabete looking through kubectl object configuration files, I see a metadata key in all of them. Ensure that you have a valid configuration file, see https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/
If it turns out that metadata is not actually required here, we will update the pipe.
Alternatively, if there are other YAML files that are not Kubernetes configuration files, you can move them to another path as a temporary workaround, if you don't need them in the k8s/staging dir specifically.
There can be deployments that include project-specific YAMLs, but we need to investigate further why project-specific YAMLs would be in the k8s deploy path.
In any case, I think this may be an edge case, and you can help us fix it in a future release.
Somehow I feel the problem is somewhere else.
I have replaced it with:

```yaml
KUBECTL_COMMAND: "apply -k k8s/deploy/staging"
```

and the deployment worked without having to define anything else. Somehow the labels get affected when using KUBECTL_ARGS.
I'm also not sure the kubectl apply command arguments work as they should for -k (instead of the usual -f). Would RESOURCE_PATH be the folder parameter in this case?
@Brad Vrabete ah, so you define -k, but not -f.
The thing is that we don't support such mutually exclusive flags (-k and -f cannot be used together), and that may be where the error comes from.
This feature is gathering interest right now; thanks, you actually helped us discover that it would be really nice to support such mutually exclusive flags.
I will notify you once we fix this.
Answering here >>
(There are .env files inside that folder; could that be the issue?)
Only *.yml files are considered in your RESOURCE_PATH.
So your YAMLs can look like the following:
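For illustration, a minimal manifest with the metadata key the pipe looks for (per the traceback's `update_labels_in_metadata`, the default labels are injected there); all names and the image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest  # placeholder
```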
And such a file only goes through validation if you set KUBECTL_COMMAND to 'apply'.
That is why your last execution works: if KUBECTL_COMMAND is not equal to 'apply' but to 'apply -k <path>', it is treated as a separate command in our pipe and your YAML file is not validated, so kubectl deploys exactly what you defined in your YAML. So it would be useful to run the apply command as plain 'apply' to get that k8s config validation.
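The dispatch described above can be sketched as follows; this is a hypothetical simplification for illustration, not the pipe's actual code:

```python
def run_pipe(kubectl_command: str, resource_path: str) -> str:
    """Sketch of how the pipe appears to dispatch on KUBECTL_COMMAND."""
    if kubectl_command == "apply":
        # Only the plain 'apply' command triggers manifest validation,
        # where default labels are injected into each file's metadata.
        return f"validate+label {resource_path}, then: kubectl apply -f {resource_path}"
    # Anything else ('autoscale', 'apply -k <path>', ...) is passed
    # through to kubectl as-is, skipping validation and labeling.
    return f"kubectl {kubectl_command}"
```

Under this model, `KUBECTL_COMMAND: "apply -k k8s/deploy/staging"` bypasses the label-injection step entirely, which would explain why it succeeded where the validated path failed.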
Also, we will look into that issue with the wrong file parsing and the missing metadata. For that it would be helpful to have, not your private files, but just the structure of the yml (yaml) files you have in your resource path.
It will help us find the root cause; I think there can be an edge case with certain YAML structures.
Perhaps your file contains some extra spaces or something like that, in which case this is a bug in the pipe that we should fix.