I have a deployment script that tries to deploy to AWS EKS. All the variables are provisioned, and the build and push to the container registry (ECR) works fine.
I use the following step in the pipeline:
```yaml
#[...]
- step:
    name: Deploy to staging area (AWS Production cluster)
    deployment: staging
    trigger: manual # Uncomment to make this a manual deployment.
    script:
      - pipe: atlassian/aws-eks-kubectl-run:1.3.1
        variables:
          AWS_ACCESS_KEY_ID: "$AWS_PROD_ACCESS_KEY_ID"
          AWS_SECRET_ACCESS_KEY: "$AWS_PROD_SECRET_ACCESS_KEY"
          AWS_DEFAULT_REGION: "$AWS_PROD_DEFAULT_REGION"
          CLUSTER_NAME: "prod"
          KUBECTL_COMMAND: "apply"
          KUBECTL_ARGS:
            - "-k"
          RESOURCE_PATH: "k8s/deploy/staging"
          WITH_DEFAULT_LABELS: "False" # Optional
```
The script fails with the errors below.
```
✔ Successfully updated the kube config.
Traceback (most recent call last):
  File "/pipe.py", line 42, in <module>
    pipe.run()
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 112, in run
    self.handle_apply()
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 77, in handle_apply
    self.update_labels_in_metadata(template_file, labels)
  File "/usr/local/lib/python3.7/site-packages/kubectl_run/pipe.py", line 31, in update_labels_in_metadata
    yaml_doc['metadata'].setdefault('labels', {}).update(labels)
KeyError: 'metadata'
```
@Brad Vrabete check out our new version, aws-eks-kubectl-run:1.4.0, passing:
```yaml
- pipe: atlassian/aws-eks-kubectl-run:1.4.0
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
    CLUSTER_NAME: 'your-kube-cluster'
    KUBECTL_COMMAND: 'apply'
    RESOURCE_PATH: 'k8s/deploy/staging'
    KUBECTL_APPLY_ARGS: '-k'
```
Looking forward to your feedback. Thanks for helping us improve!
It works, with just one caveat that you might want to take a look at.
By default, labels are applied to the objects created by the kubectl command. However, if the branch name ends in '-' (and probably other non-alphanumeric characters), those labels are rejected by Kubernetes and the command fails. I had to disable the default labels (using WITH_DEFAULT_LABELS: "False") to get past that, as illustrated below. Not a show-stopper.
(The branch was automatically generated from Jira, in case you are wondering why I would use such a name: feature/ALAPP-15-deploy-an-instance-in-)
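For context, Kubernetes requires label values to begin and end with an alphanumeric character (max 63 characters), so a value derived from a branch name with a trailing '-' fails validation. A sketch of the rejected shape; the label key shown is a placeholder, not necessarily the one the pipe generates:
```yaml
# Label values must match (([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?
metadata:
  labels:
    deployed-branch: ALAPP-15-deploy-an-instance-in-  # rejected: trailing '-'
```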
@Brad Vrabete thanks for pointing out this edge case. We'll give it a try and see what we can do.
@Brad Vrabete hello!
Can you tell us if you are using our pipe as a way to apply a kustomization? Like `kubectl apply -k <kustomization dir>`?
If yes, this file can have a different format, and we may also support this in the future.
See more at https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
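(For illustration, a minimal kustomization.yaml; note that it has no top-level metadata key, which would be consistent with the KeyError above. The file names are placeholders.)
```yaml
# k8s/deploy/staging/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```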
Hi @Halyna Berezovska Yes, I'm using your pipe `atlassian/aws-eks-kubectl-run:1.3.1` with `KUBECTL_COMMAND: "apply -k <directory>"`. This works fine.
Is there a list of all of these? Is there a pipe for the kustomize command (separate from kubectl apply -k)?
@Brad Vrabete yes, the docs say the `-k` flag is for the kustomization feature, and as I understand it, this is quite different from what `-f` does.
Here I just want to understand your use case, what exactly you want to do with the `apply` command, because as I see from the docs, it can be used for different purposes.
P.S.
I think we may support kustomization in a future release, and perhaps it will be in the same pipe. I will send an update once the changes are rolled out.
Also,
Is there a list with all of these?
>> We support the apply command separately, and any other kubectl command as well, in this pipe. For that you just pass, for example, KUBECTL_COMMAND: 'autoscale' and set the proper KUBECTL_ARGS
(check out our docs, section Variables: https://bitbucket.org/atlassian/aws-eks-kubectl-run/src/master/README.md).
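For example, a sketch (the resource name and autoscale bounds are placeholders, not from the docs):
```yaml
- pipe: atlassian/aws-eks-kubectl-run:1.3.1
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
    CLUSTER_NAME: 'your-kube-cluster'
    KUBECTL_COMMAND: 'autoscale'
    KUBECTL_ARGS:
      - 'deployment/my-app'  # placeholder resource
      - '--min=2'
      - '--max=5'
      - '--cpu-percent=80'
```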
@Brad Vrabete looking through kubectl object configuration files, I see a metadata key in all of them. Ensure that you have a valid configuration file; see https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/
If it turns out that metadata is not actually required here, we will update the pipe.
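(For reference, a minimal sketch of the shape the pipe's label update expects; every document needs a top-level metadata key. The names and image are placeholders.)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest  # placeholder image
```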
Alternatively, if there are other YAML files that are not Kubernetes configuration files, you can move them to another path as a temporary workaround, if you don't need them exactly in the k8s/staging dir.
It can happen that a deployment includes some project-specific YAMLs, but we should investigate further why project-specific YAMLs would be in the k8s deploy path.
Anyway, I think this may be an edge case, and you can help us fix it in a future release.
Cheers, Galyna
Hi Galyna,
Somehow I feel the problem is somewhere else.
I have replaced:
```yaml
KUBECTL_COMMAND: "apply"
KUBECTL_ARGS:
  - "-k"
RESOURCE_PATH: "k8s/deploy/staging"
```
with
```yaml
KUBECTL_COMMAND: "apply -k k8s/deploy/staging"
```
And the deployment worked without having to define anything else. Somehow the labels get affected when using KUBECTL_ARGS.
I'm also not sure the kubectl apply command arguments work as they should for -k (instead of the usual -f). Would RESOURCE_PATH be the folder parameter in this case?
@Brad Vrabete ah, so you define -k, but not -f.
The thing is that we don't support such mutually exclusive flags (-k and -f cannot be used together), and that may be where the error comes from.
This feature is gathering interest right now; thanks, you actually helped us discover that it would be really nice to support such mutually exclusive flags.
I will send an update once we fix this.
@Brad Vrabete we will also look into how -k affects the labels, along with what I mentioned above.
Cheers, Galyna
Thanks. Let me know if I can help (I'm fine with sharing the files privately).
(There are .env files inside that folder; could that be the issue?)
@Halyna Berezovska Indeed, -k and -f are mutually exclusive. I am using just -k in this case, but I have been playing with KUBECTL_ARGS and RESOURCE_PATH. Good to know it will be fixed.
Thanks for your help!
Answering here >>
(There are .env files inside that folder; could that be the issue?)
Only *.yml files are considered in your RESOURCE_PATH.
So your YAMLs can look like the following:
```yaml
---
metadata:
  name: redis
....
---
metadata:
  name: mongo
....
```
And this file goes through validation only if you set KUBECTL_COMMAND to 'apply'.
That is why your last run works: if KUBECTL_COMMAND is not exactly 'apply' but 'apply -k <path>', it is treated as a separate command in our pipe and your YAML file is not validated, so kubectl deploys just what you defined in your YAML. So I think it would be useful to run the apply command as exactly 'apply' to get that k8s config validation.
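To make the two invocation styles concrete (the paths are the ones from this thread):
```yaml
# Validated by the pipe: each YAML document must carry a metadata key,
# and the default labels are injected before deployment.
KUBECTL_COMMAND: "apply"
KUBECTL_ARGS:
  - "-k"
RESOURCE_PATH: "k8s/deploy/staging"

# Treated as an opaque command: no validation, no label injection.
KUBECTL_COMMAND: "apply -k k8s/deploy/staging"
```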
We will also investigate that issue with the file parsing failing to find metadata. For that, it would be nice to have, not your private files, but only the structure of the yml (yaml) files that you have in the resource path, like below.
```yaml
---
metadata:
  name: app
....
---
metadata:
  name: mongo
  field: value...
```
It will help us pin down the root cause; I think there may be an edge case in files with a particular YAML structure.
Perhaps your file contains some extra spaces or something like that, and this is a bug in the pipe that we should fix.
Regards, Galyna