Using kubectl inside a build

I’m attempting to build out a simple CI/CD setup with Drone 1.0 inside Kubernetes. Generally this is a lovely setup, but I have been battling for a while to get a call to kubectl to work, so that I can actually deploy the build to our dev environment.

Can anyone recommend an approach to using kubectl inside a Drone build, running in k8s/GKE?

Thanks!

(my current issue is that if I run my kubectl container with kubectl run, it works as expected, but when run inside Drone it says The connection to the server localhost:8080 was refused - did you specify the right host or port? Do I really have to pass every connection envvar into kubectl? Why isn’t it picking them up when running in Drone?)

Please provide more information about how you installed Drone. Specifically are you using agents (https://docs.drone.io/installation/github/multi-machine/) or the native Kubernetes runtime (https://docs.drone.io/installation/github/kubernetes/)?

I am using the native Kubernetes runtime (it is an excellent approach).

I’m wondering if there is a simple image/config pair that I can use to just run kubectl in a build step, rather than the approach I’ve been taking.
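
For reference, the kind of step I have in mind is roughly this (the image and the manifest path are just placeholders for my setup, not something I have working yet):

    kind: pipeline
    type: kubernetes
    name: deploy

    steps:
      - name: deploy-dev
        # placeholder: any image that bundles kubectl and a shell
        image: bitnami/kubectl:latest
        commands:
          # placeholder path to the dev manifests
          - kubectl apply -f k8s/dev/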

Assuming not, let me describe what I’ve been doing.

Firstly, here are my envvars:

      DRONE_SERVER_HOST:                 build.example.com
      DRONE_SERVER_PROTO:                http
      DRONE_RPC_HOST:                    drone.build.svc.cluster.local
      DRONE_RPC_PROTO:                   http
      DRONE_KUBERNETES_ENABLED:          true
      DRONE_KUBERNETES_NAMESPACE:        build-jobs
      DRONE_KUBERNETES_SERVICE_ACCOUNT:  build-pipeline
      DRONE_LOGS_DEBUG:                  true
      DRONE_USER_CREATE:                 username:<my-username>,admin:true
      DRONE_GITLAB_CLIENT_ID:            XXXX
      DRONE_GITLAB_CLIENT_SECRET:        XXXX
      DRONE_RPC_SECRET:                  XXXX

The image I am using is really nothing more than a wrapper around kubectl to allow me to run it inside a build.

When I run a pod manually (through kubectl run), I see a /run/secrets/kubernetes.io/serviceaccount/token file. However, when a job runs in Drone, that file is not present. I presume it will be needed for a job to be able to connect to the Kubernetes API.
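
For context, when that token is mounted I can point kubectl at the in-cluster API explicitly, along these lines (standard pod paths and flags; this obviously fails in Drone while the token file is missing):

    # only works when the service account token is mounted into the pod
    SA=/run/secrets/kubernetes.io/serviceaccount
    kubectl --server=https://kubernetes.default.svc \
            --certificate-authority=$SA/ca.crt \
            --token="$(cat $SA/token)" \
            get pods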

Also, having set DRONE_KUBERNETES_SERVICE_ACCOUNT, I can see it has been applied to the ‘job controller’ pods. However, the builds themselves, which fire up in randomised namespaces such as e5vmbwh2nezkfk6dt2vsclbagupaf0wy, have their service account set to default.

I’m not sure if either of these is a problem.

Let me know if there are any other details I can provide.

We had the same problem and decided to write a plugin for it.
Try out our Drone plugin: https://hub.docker.com/r/sinlead/drone-kubectl

Thanks, that is very helpful. There is something weird going on with Drone truncating the command (by one character), though. I had to fork your plugin to be able to track it down :frowning: I will post about that on a separate thread.

@malcolmholmes Thank you for this post. I didn’t find any docs about those KUBERNETES env vars, and I’m stuck on secrets is forbidden: User "system:serviceaccount:palight-dev:default" cannot create resource "secrets" in API group "" in the namespace "default", although neither the pods nor the service account and its role bindings are in the default namespace.
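
If I understand the error correctly, granting that access would need a role and binding in the "default" namespace itself, roughly like this (a sketch based on the names in the error message, not something I have applied):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: drone-secrets
      namespace: default          # the namespace named in the error
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["create", "get", "list", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: drone-secrets
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: palight-dev    # the service account named in the error
    roleRef:
      kind: Role
      name: drone-secrets
      apiGroup: rbac.authorization.k8s.io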

There is no reference in the docs, but at least there is this thread.

I’m wondering why, for instance, the namespace value is not set from /var/run/secrets/kubernetes.io/serviceaccount/namespace.

Also, setting those envvars did not solve my issue. Could this rather be related to drone-runners/drone-runner-kube?

As far as my investigation has gone, I cannot see how I could use the Kubernetes runner outside the “default” namespace, as I haven’t yet figured out how I could have a role in “my-namespace” and a role binding to the default service account, for instance, while I also need to create/delete secrets etc. outside of “my-namespace”.
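
As far as I can tell, cross-namespace access like that would need a ClusterRole and ClusterRoleBinding for the default service account, roughly along these lines (a sketch of what I think would be required, not something I have working):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: drone-secrets-any-namespace
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["create", "get", "list", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: drone-secrets-any-namespace
    subjects:
      - kind: ServiceAccount
        name: default
        namespace: my-namespace    # placeholder for the runner's namespace
    roleRef:
      kind: ClusterRole
      name: drone-secrets-any-namespace
      apiGroup: rbac.authorization.k8s.io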

Again, setting DRONE_KUBERNETES_… environment variables does not change the behavior of the runner.

you can set the namespace in the yaml
https://docs.drone.io/pipeline/kubernetes/syntax/metadata/

or you can set the default namespace globally:
https://docs.drone.io/runner/kubernetes/configuration/reference/drone-namespace-default/
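
For example (my-namespace is just a placeholder):

    # per-pipeline, in the .drone.yml
    kind: pipeline
    type: kubernetes
    name: default

    metadata:
      namespace: my-namespace

    # or globally, as an environment variable on the runner
    DRONE_NAMESPACE_DEFAULT=my-namespace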


Well, thank you for those references @bradrydzewski! I feel I need to dig into the docs harder.

The second solution fits my needs, so thank you again! This is a relief.