Hi,
I’m running into a conflict between drone-runner-kube and the vault-secret-webhook MutatingWebhook: either I have to change the webhook’s failurePolicy, or I need drone-runner-kube to start its runner pods in a different, custom namespace. Since I need the vault webhook, my preferred solution is to configure drone-runner-kube, but it’s ignoring all my attempts to set it to use a different namespace…
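For context, the first workaround would be relaxing the webhook’s failurePolicy so pod creation isn’t blocked when the webhook can’t respond. A sketch of the relevant field (resource and webhook names are illustrative, most fields omitted):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: vault-secret-webhook          # illustrative name
webhooks:
  - name: pods.vault-secret-webhook   # illustrative name
    failurePolicy: Ignore             # 'Fail' (which I want to keep) rejects pods on webhook errors
    # clientConfig, rules, namespaceSelector, etc. omitted
```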
I have a 1.19 EKS cluster with drone-server, drone-runner-kube and drone-vault-plugin deployed in the ‘drone’ namespace. I have also created an additional ‘drone-runners’ namespace that has a label set to disable the vault-secret-webhook for that namespace.
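The extra namespace looks roughly like this (the label name is a placeholder; it has to match whatever namespaceSelector the webhook is configured with):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: drone-runners
  labels:
    vault-webhook: disabled   # placeholder; must match the webhook's namespaceSelector
```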
All 3 deployments have been deployed with the latest helm charts/images.
In the values.yaml of the drone-runner-kube chart I set both buildNamespaces and the DRONE_NAMESPACE_DEFAULT environment variable to ‘drone-runners’, expecting it to start using the drone-runners namespace, but it still tries to start the runner pods in the drone namespace.
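Roughly, this is what I have (key names as I understand the chart’s layout; treat it as a sketch):

```yaml
rbac:
  buildNamespaces:
    - drone-runners   # grants the runner RBAC in this namespace

env:
  DRONE_NAMESPACE_DEFAULT: drone-runners   # namespace the runner should use for build pods
```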
Does anybody have any pointers on how I can get it to use the custom namespace instead?
An alternative solution for me would be to get drone-runner-kube to start runner pods with a custom label, but the chart doesn’t seem to have any option for this. It lets me set labels on the drone-runner-kube pods themselves, but not on the runner pods created when a pipeline starts.
I am able to use DRONE_NAMESPACE_DEFAULT to set the default namespace, which is the recommended approach. The default namespace is used when no namespace is defined in the Drone yaml. Note that I am not using Helm and have no experience with it.
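To illustrate, this is a minimal sketch of setting it on a plain Deployment rather than through the Helm chart (host and secret values are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-kube
  namespace: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner-kube
  template:
    metadata:
      labels:
        app: drone-runner-kube
    spec:
      containers:
        - name: runner
          image: drone/drone-runner-kube
          env:
            - name: DRONE_RPC_HOST
              value: drone.drone.svc.cluster.local   # placeholder server address
            - name: DRONE_RPC_PROTO
              value: http
            - name: DRONE_RPC_SECRET
              value: your-shared-secret               # placeholder
            - name: DRONE_NAMESPACE_DEFAULT
              value: drone-runners                    # used when the pipeline yaml sets no namespace
```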
Here is the relevant code in case you want to trace through and get a better understanding of how everything is working under the hood:
> In the values.yaml of the drone-runner-kube chart I set both buildNamespaces and the DRONE_NAMESPACE_DEFAULT environment variable to ‘drone-runners’, expecting it to start using the drone-runners namespace, but it still tries to start the runner pods in the drone namespace.
The kubernetes runner uses “default” as the default namespace when creating resources. I also took a look at the helm chart, despite not knowing much about Helm, and I can see that the chart also uses “default” as the default namespace value:
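(Paraphrased from the chart’s default values as I read them; the exact layout may differ.)

```yaml
rbac:
  buildNamespaces:
    - default
```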
If the kubernetes runner is using “drone” as the default namespace when creating resources, that tells me your override of the default namespace is not being ignored (otherwise it would fall back to “default”). I would therefore recommend double-checking your configuration, or digging into the helm chart a bit further.
I think you’re spot on that the namespace is set in the .drone.yaml pipeline definition (I had completely missed that).
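For anyone else hitting this, the pipeline-level setting looks something like this (a sketch; the step contents are illustrative):

```yaml
kind: pipeline
type: kubernetes
name: default

metadata:
  namespace: drone   # overrides DRONE_NAMESPACE_DEFAULT for this pipeline

steps:
  - name: build
    image: alpine
    commands:
      - echo hello
```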
If you know a way to inject labels into the runner pods I may try that, but otherwise I’ll just rename the namespaces; I think that will also resolve my problem without having to change the pipeline definitions.