Contributing to Drone for Kubernetes

This guide is a work in progress. I will continue to update based on comments and questions from the community. Thanks for reading!

The Drone Kubernetes Runner takes a .drone.yml configuration and translates it into Kubernetes native pods, secrets, and services. So what does this mean if you are already running Drone on Kubernetes today? It means no more build agents. No more mounting the host machine Docker socket or running Docker in Docker. If you are into buzzwords, it means Drone is now Kubernetes Native.

The goal of this thread is to help you get up and running with the Drone development tools, so that you can test and contribute to the Drone Kubernetes Runner. If you want to contribute, you should use the command-line tools described below for testing. Compiling the full Drone server for testing purposes is not necessary, and quite frankly, will only make your life more difficult.

Source Code

The Drone Kubernetes Runner can be found at https://github.com/drone-runners/drone-runner-kube. You can clone the repository to your gopath and build using the instructions in the repository. The Kubernetes implementation lives in a dedicated package within that repository; if you want to contribute to the Kubernetes Runner, this package (including any subdirectories) is the only code you should need to edit.

Creating a Pipeline

The Kubernetes Runner takes a yaml configuration file as input. The configuration file should define one or more pipelines of type kubernetes.

Here is an example yaml:

kind: pipeline
type: kubernetes
name: default

clone:
  disable: true

steps:
- name: greeting
  image: alpine
  commands:
  - echo hello
  - echo world

Executing the Pipeline

In the previous section we created a simple Kubernetes pipeline configuration file. Now we can use the drone-runner-kube tool to execute the Pipeline. Install the drone-runner-kube binary to get started.

$ git clone https://github.com/drone-runners/drone-runner-kube
$ cd drone-runner-kube
$ go install

Once the binary is installed, execute the Pipeline with the command below. Note that this command should be executed in the same directory where the .drone.yml file is stored. For initial testing, we recommend using the sample yaml file defined in the previous section.

$ drone-runner-kube exec --kubeconfig=$HOME/.kube/config

Testing Volumes

Every Drone pipeline has a shared volume, called the workspace, where your code is cloned. This ensures all steps have access to your source code, as well as any files or artifacts that are created. You can test this capability with the following yaml (save it as .drone.yml and run the exec command).

kind: pipeline
type: kubernetes
name: default

clone:
  disable: true

steps:
- name: foo
  image: alpine
  commands:
  - touch hello.txt

- name: bar
  image: alpine
  commands:
  - ls hello.txt

Testing Services

If you define a services section in your yaml configuration, each service is launched in the same pod and is accessible at localhost. You can test this capability with the below yaml.

kind: pipeline
type: kubernetes
name: default

steps:
- name: test
  image: redis
  commands:
  - sleep 5
  - redis-cli -h localhost ping

services:
- name: redis
  image: redis

Testing Secrets

Secrets are mapped to Kubernetes secrets. You can test this capability with the below yaml configuration file. Note that you need to pass your secrets as command-line flags when executing the pipeline:

drone-runner-kube exec \
  --kubeconfig=$HOME/.kube/config \
  --secrets=username:janecitize,password:correct-horse-battery-staple

kind: pipeline
type: kubernetes
name: default

clone:
  disable: true

steps:
- name: test
  image: alpine
  environment:
    PASSWORD:
      from_secret: password
    USERNAME:
      from_secret: username    
  commands:
  - env

Debugging

You can use kubectl and the Kubernetes dashboard to debug a running pipeline. We also provide a simple utility that converts the configuration file to an approximate Kubernetes manifest. This can be useful when you want a better understanding of how Drone creates Kubernetes resources. Please note that it is NOT a usable manifest and is for debugging purposes only.

drone-runner-kube compile --spec

Issues and Bugs

You will notice the repository issue tracker is disabled. You should create topics for support, issues, ideas and general development in this discourse forum.

Please refrain from creating issues for this project in the core drone/drone repository. Issues created in the core Drone repository that are not directly related to the code in that repository will be closed.

How Can I Help?

I am glad you asked. This is just an initial implementation and there is plenty of room for improvement. If you have any questions you can post in this thread, create a new topic, or join our chatroom.

https://docs.drone.io/reference/server/ needs updating with the vars to enable the K8s integration
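
For context, the deprecated built-in integration was toggled through server environment variables. The sketch below shows how they might appear in the env block of a server Deployment; the variable names are recalled from the Drone 1.0 server reference rather than taken from this thread, so verify them against the docs before relying on them.

```yaml
# Assumed env block for a Drone server Deployment. The variable names are
# recalled from the Drone 1.0 server reference and should be double-checked.
env:
- name: DRONE_KUBERNETES_ENABLED
  value: "true"
- name: DRONE_KUBERNETES_NAMESPACE
  value: default
```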

I am also curious how you think we should handle the situation where K8s is running an older version of Docker that doesn’t support features like multi-stage Docker builds?

Drone does not directly handle docker build. This would be handled by a plugin. There are a few different plugins available for building Docker images, including plugins/docker and kubeciio/img (based on genuinetools/img). These plugins are docker-in-docker so the host machine Docker version does not matter.
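
To make that concrete, a publish step using the plugins/docker plugin might look like the sketch below. The image repository and secret names (docker_username, docker_password) are placeholders rather than values from this thread; consult the plugin’s README for the full list of supported settings.

```yaml
kind: pipeline
type: kubernetes
name: default

steps:
- name: publish
  image: plugins/docker
  settings:
    # placeholder image repository; replace with your own
    repo: registry.example.com/myorg/myimage
    tags: latest
    # placeholder secret names; pass these via --secrets when testing locally
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
```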

@bradrydzewski is there a way to bundle up my changes and deploy on a cluster?

Don’t get me wrong, drone-runtime allows me to develop things and this document is super nice of you to make. But nothing seems to be enough for me :smiley: No, seriously, it would help our adoption. Thanks.

You can also use the banzaicloud/drone-kaniko plugin, which can build multi-stage Dockerfiles but doesn’t use Docker at all.
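
For illustration, a kaniko-based publish step might look roughly like this. The settings below (repo, tags, and the secret names) are assumptions modeled on the plugins/docker convention, not taken from this thread, so check the banzaicloud/drone-kaniko README for the actual setting names.

```yaml
kind: pipeline
type: kubernetes
name: default

steps:
- name: publish
  image: banzaicloud/drone-kaniko
  settings:
    # assumed settings, modeled on plugins/docker; verify against the plugin README
    repo: registry.example.com/myorg/myimage
    tags: latest
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
```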

I can’t figure out how to limit Kubernetes job concurrency to 1. Currently Drone build jobs are running at the same time and it is crashing the server. How can I do this?

I believe the implementation keeps spinning up jobs without any throttling. At least I didn’t find (or look for) any knobs that would do that.

In my mind resource limits should be defined for each job (either explicitly or by default), as that is the mechanism Kubernetes provides to keep the cluster healthy.

In this PR resource limits were implemented and I was able to use it successfully: add limits and requests to kubernetes by zetaab · Pull Request #29 · drone/drone-runtime · GitHub

In this issue I ask for a default limit, so the behavior you see should not happen even if you don’t define the limits: kubernetes: always set resource requests and limits · Issue #44 · drone/drone-runtime · GitHub

In this issue I describe that when I use limits the jobs don’t trigger autoscaling and often deadlock : kubernetes: persistent volumes · Issue #19 · drone/drone-runtime · GitHub

My conclusion is that without relying just a little bit more on Kubernetes scheduling (thus requiring volume support), the implementation is not ready yet.
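
For anyone following along, per-step resource requests and limits can be expressed in the pipeline yaml along these lines. This is a sketch based on the PR linked above; the exact field names and accepted units may differ between versions, so verify against the runner you are using.

```yaml
kind: pipeline
type: kubernetes
name: default

steps:
- name: build
  image: golang
  commands:
  - go build ./...
  resources:
    requests:
      cpu: 1000       # assumed unit: millicores
      memory: 500MiB
    limits:
      cpu: 1000
      memory: 500MiB
```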

bradrydzewski, regarding services, I found an undocumented issue.
If your service name contains _, then in its hostname _ will be replaced by - when you run it in k8s.
For example, if your service name is redis_server, then to connect to it from your steps, you need to use redis-server as the hostname.
This, together with the “ports” stuff, should really be documented. How can I contribute to the docs for Drone 1.0? It looks like the docs repo has been archived?
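
To make the renaming concrete, here is a minimal sketch of the scenario described above: the service is declared as redis_server, but the step connects to redis-server because Kubernetes hostnames cannot contain underscores.

```yaml
kind: pipeline
type: kubernetes
name: default

steps:
- name: test
  image: redis
  commands:
  - sleep 5
  # note the hyphen: the underscore in the service name is replaced in the hostname
  - redis-cli -h redis-server ping

services:
- name: redis_server
  image: redis
```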

Does the latest version still support Kubernetes? I can’t find documentation about Kubernetes.

the experimental kubernetes runtime was deprecated in April, but the functionality still exists to remain backward compatible.

we are working on a second iteration of a kubernetes runtime that builds on what we learned from the first attempt, but that won’t be available for a few weeks. Just started working on it a few days ago.

Was this released, or is the version in the docs the deprecated one?

the documentation is for the new runner.

Thank you, is the helm stable chart updated to use the new version?

We are not really involved with Helm, but it appears the chart is multiple versions behind and defaults to the deprecated Kubernetes runtime. A community member recently posted a link to their fork: HighwayofLife/helm-charts-drone (Helm Chart for Drone CI).

Hi, will Drone support running the steps in more than one pod? This would be very handy when tests require multiple services that might need resources exceeding one physical machine.

We made the explicit design decision to run everything in the same pod, similar to the approach taken by Tekton. We actually tried running pipeline steps in different pods in an early prototype, but this had a number of drawbacks and we ultimately decided against the approach after months of testing. Having given it months of consideration, we do not have any plans to revert to a multi-pod design. That being said, anyone can create their own custom runner if an existing runner does not meet their needs, and share it with the broader community.