Valuable k8s feedback: two major problems and one success with Drone k8s on DigitalOcean

Hi @bradrydzewski, thank you for the drone project!

Today I experimented with setting up Drone on DigitalOcean Kubernetes. I was able to get the Drone server running on my subdomain, but unfortunately I had little success with running any CI jobs or with the TLS setup. I spent quite a few hours debugging everything, so here are the details and my feedback:

I started by following this guide: https://docs.drone.io/installation/github/kubernetes/

This page was super helpful for setting up the server, but I think we can still improve it significantly.

1 - There should be a next button in the docs that takes the reader on to the Drone CLI configuration.

I realized the local Drone CLI setup is more or less necessary to get everything working correctly, and only after googling everything did I discover that DRONE_USER_CREATE is the way to set it up. Some of the examples I found online didn’t include the token, so at first the server was also generating tokens for me and there was a mismatch:

            - name: DRONE_USER_CREATE
              value: username:izelnakri,admin:true,token:xxxxxx

2 - DRONE_USER_CREATE is necessary to set up a predefined admin user with a predefined token. This should be stressed in the docs.
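For reference, a minimal sketch of the matching local CLI setup (the token value is a placeholder; it has to be the same token passed to DRONE_USER_CREATE above):

export DRONE_SERVER=https://drone.izelnakri.com
export DRONE_TOKEN=xxxxxx   # must match the token in DRONE_USER_CREATE
drone info                  # should print the izelnakri account if the tokens match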

3 - Then I couldn’t get TLS/SSL set up correctly. I wanted to use Drone’s automatic Let’s Encrypt feature. I’m going to paste my k8s config below; perhaps I’m not referencing the certs correctly in the Drone deployment ENVs, or I might need to add/edit my DigitalOcean ingress? I’d appreciate it if you could pinpoint the issue for me, and then we could add it to the docs. (My config uses a k8s Ingress + the nginx controller that DO suggests in their tutorial, since they don’t support native k8s ingress directly out of the box.)

4 - I was able to visit the Drone HTTP server from my browser and sync my GitHub repos! But then I couldn’t run any jobs in the activated repos, which is the biggest problem I’m having now, apart from the TLS issue. The logs I have below aren’t helping much. I also ran drone repo repair izelnakri/izelnakri-blog; it doesn’t generate any logs on the pod or through drone log view izelnakri/izelnakri-blog, it only reports success, but I see no builds happening.

 drone build last izelnakri/izelnakri-blog

-> this gives:

client error 404: {"message":"sql: no rows in result set"}

I guess there’s a problem there.

drone log view izelnakri/izelnakri-blog

->

strconv.Atoi: parsing "": invalid syntax

This error message isn’t helping; I guess we should add it to the docs and make the program generate a better error message.
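If I’m reading the CLI usage correctly, drone log view also expects build, stage, and step numbers, so the empty-string parse error probably comes from the missing arguments rather than from the server. A sketch (the trailing numbers are placeholders):

drone log view izelnakri/izelnakri-blog 1 1 1   # repo, build number, stage, step

With no builds in the database it would presumably still fail, but at least with a clearer error than the strconv one.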

My repo info is:

drone repo info izelnakri/izelnakri-blog

->

Owner: izelnakri
Repo: izelnakri-blog
Config: .drone.yml
Visibility: private
Private: false
Trusted: true
Protected: false
Remote: https://github.com/izelnakri/izelnakri-blog.git

kubectl logs $mypod ->

{"level":"info","msg":"main: kubernetes scheduler enabled","time":"2019-08-05T09:14:51Z"}
{"acme":true,"host":"drone.izelnakri.com","level":"info","msg":"starting the http server","port":":443","proto":"https","time":"2019-08-05T09:14:51Z","url":"https://drone.izelnakri.com"}
{"interval":"30m0s","level":"info","msg":"starting the cron scheduler","time":"2019-08-05T09:14:51Z"}

5 - We need a clear k8s example repository for 1.x: after checking other examples online I initially thought I might need to run drone:agent alongside the drone container, but then I saw that with 1.x it should instead create native k8s Jobs for each build. I currently can’t get it to work, even though I also created the ServiceAccount; I’m not sure whether that’s even necessary.
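A quick way to at least observe whether the 1.x runtime is creating anything per build (a sketch; the namespace names are whatever the scheduler generates):

kubectl get namespaces                   # the 1.x runtime creates a namespace per build
kubectl get pods --all-namespaces -w     # watch for build pods appearing after a push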

I’m pasting my k8s config below. Could you help me pinpoint what might be preventing the Drone builds/jobs from running, and what is wrong with the TLS setup? Then we could add the fixes to the documentation or the source code.

My k8s config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone
  namespace: default
  labels:
    app: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone
  template:
    metadata:
      labels:
        app: drone
    spec:
      containers:
        - name: drone
          image: "drone/drone:1.2.3"
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: DRONE_KUBERNETES_ENABLED
              value: "true"
            - name: DRONE_KUBERNETES_NAMESPACE
              value: default
            - name: DRONE_GITHUB_SERVER
              value: https://github.com
            - name: DRONE_GITHUB_CLIENT_ID
              value: xxxx
            - name: DRONE_GITHUB_CLIENT_SECRET
              value: xxx
            - name: DRONE_RPC_SECRET
              value: xxx # I guess this one is working since I can use drone cli but this could also be the problem
            - name: DRONE_RPC_SERVER
              value: http://drone.izelnakri.com # do I need to expose an RPC port for this??
            - name: DRONE_SERVER_HOST
              value: drone.izelnakri.com
            - name: DRONE_SERVER_PROTO
              value: https
            - name: DRONE_DATABASE_DRIVER
              value: sqlite3
            - name: DRONE_DATABASE_DATASOURCE
              value: /data/database.sqlite
            - name: DRONE_TLS_AUTOCERT
              value: "true"
            - name: DRONE_RPC_DEBUG
              value: "true"
            - name: DRONE_USER_CREATE
              value: username:izelnakri,admin:true,token:xxxxx
          volumeMounts:
            - mountPath: /data
              name: drone
            - name: docker-socket
              mountPath: /var/run/docker.sock
      volumes:
        - name: drone
          persistentVolumeClaim:
            claimName: drone
        - name: docker-socket
          # the node's docker socket, referenced by the volumeMount above
          hostPath:
            path: /var/run/docker.sock
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drone
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: drone
  namespace: default
spec:
  selector:
    app: drone
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: application-ingress
spec:
  rules:
  - host: drone.izelnakri.com
    http:
      paths:
      - backend:
          serviceName: drone
          servicePort: 443
  - host: izelnakri.com
    http:
      paths:
      - backend:
          serviceName: izelnakri-com-service
          servicePort: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: izelnakri-com-deployment
  namespace: default
  labels:
    app: izelnakri-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: izelnakri-com
  template:
    metadata:
      labels:
        app: izelnakri-com
    spec:
      containers:
        - name: izelnakri-com
          image: hashicorp/http-echo
          args:
            - "-text=setting up k8"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: izelnakri-com-service
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: izelnakri-com
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: drone-rbac
subjects:
  - kind: ServiceAccount
    # the ServiceAccount the Drone pod runs as (here the default one)
    name: default
    # the namespace that ServiceAccount lives in
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

My .drone.yml:

kind: pipeline
name: default

steps:
  - name: frontend
    image: node
    commands:
      - cat Dockerfile

I’m aware that k8s support is experimental; I wanted to post my feedback here so it could be valuable for you and others. Thanks again for this ambitious project and for your and the other contributors’ efforts!

Hi again, this time I was able to solve both problems. I implemented TLS at the k8s Ingress level, then tried to make the builds work with all sorts of tweaks. Finally I gave up and force-pushed my branch to GitHub, and voilà! Drone started running and picked up my branch.
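For anyone who finds this later, roughly what “TLS at the k8s Ingress level” looks like in my setup (the secret name is a placeholder for a TLS secret created from my own cert and key; DRONE_TLS_AUTOCERT is removed and the drone Service/container serve plain HTTP on port 80 behind the ingress):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: application-ingress
spec:
  tls:
    - hosts:
        - drone.izelnakri.com
      secretName: drone-izelnakri-com-tls # placeholder TLS secret
  rules:
    - host: drone.izelnakri.com
      http:
        paths:
          - backend:
              serviceName: drone
              servicePort: 80

DRONE_SERVER_PROTO stays https so Drone still generates https URLs, even though the pod itself now only listens on plain HTTP.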

It was quite a surprise to me. There definitely need to be more error messages in the UI when something is wrong in the server/config or in the .drone.yml; the UI should say when there is a linting error. I’m very excited to finally be able to use Drone :wink: Thanks again.

It was quite a surprise to me. There definitely need to be more error messages in the UI when something is wrong in the server/config or in the .drone.yml

Drone does show errors in the UI when something is wrong with the YAML (see Drone CI for an example).

Based on the information you provided above, it sounds like networking was misconfigured and Drone was not receiving HTTP webhooks from GitHub. The system cannot log anything if the HTTP request never reaches the Drone server.

If the server does receive the HTTP request but no error is displayed, you can check the server logs; the system provides detailed debug logs. We also have a detailed guide to help troubleshoot why a build does not trigger. See ENV: get source branch
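One quick sanity check, assuming default routing (Drone registers GitHub webhooks against the /hook endpoint):

curl -i https://drone.izelnakri.com/hook    # any HTTP response means the host is reachable; a timeout points to networking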

Thank you for the quick response @bradrydzewski! The link you’ve shared is very helpful; could we link it from the docs, or also write it up there: https://docs.drone.io?

Also, when I kubectl logs my-drone-pod I don’t see these logs, even though I have DRONE_LOGS_DEBUG=true. Is there somewhere else I should look for these messages?
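For completeness, this is how the logging flags sit in my deployment env now (DRONE_LOGS_TRACE is my assumption for an even more verbose flag; I have not confirmed it):

            - name: DRONE_LOGS_DEBUG
              value: "true"
            - name: DRONE_LOGS_TRACE # assumption: extra-verbose logging
              value: "true"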

This is what I get:

{"level":"info","msg":"main: kubernetes scheduler enabled","time":"2019-08-05T22:07:16Z"}
{"acme":true,"host":"drone.izelnakri.com","level":"info","msg":"starting the http server","port":":443","proto":"https","time":"2019-08-05T22:07:16Z","url":"https://drone.izelnakri.com"}
{"interval":"30m0s","level":"info","msg":"starting the cron scheduler","time":"2019-08-05T22:07:16Z"}

I think I’m experiencing the same problem you are, with the slight difference that I’m sure there is no linting error in my .drone.yml, because I also configured the repo to notify the drone.io service and it works there.

Please note that Kubernetes is experimental and is not something we are providing support for right now. I want to be 100% transparent that the current implementation will most likely be deprecated and replaced with something different. We received a lot of feedback and the current implementation has design flaws (per-pipeline namespace, etc) that will require us to rethink how such a runtime works. We may also end up adopting Tekton. Everyone is free to continue using the experimental Kubernetes runtime, however, you should set expectations accordingly.
