Kube Runner: Combination of ephemeral NS config and metadata.namespace in pipeline causes DB error

I currently have the Kube Runner configured to provision and use ephemeral namespaces for each build, achieved with the following config:

- name: DRONE_NAMESPACE_DEFAULT
  value: "drone-"
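
For context, that variable sits in the env block of the runner's container in its Kubernetes manifest. A minimal sketch of the relevant part of such a Deployment, assuming the usual runner RPC settings (names, image tag and values here are illustrative, not my actual config):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-kube              # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner-kube
  template:
    metadata:
      labels:
        app: drone-runner-kube
    spec:
      containers:
      - name: runner
        image: drone/drone-runner-kube:latest   # illustrative tag
        env:
        - name: DRONE_RPC_HOST           # standard runner settings, values are placeholders
          value: "drone.example.com"
        - name: DRONE_RPC_PROTO
          value: "https"
        - name: DRONE_RPC_SECRET
          valueFrom:
            secretKeyRef:
              name: drone-secrets        # illustrative secret name
              key: rpc-secret
        - name: DRONE_NAMESPACE_DEFAULT
          value: "drone-"                # the prefix that drives the per-build ephemeral namespaces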

I wanted to validate that this cannot be bypassed by specifying a namespace in the pipeline spec, for example:

kind: pipeline
type: kubernetes
name: default

metadata:
  namespace: default

steps:
- ...

Expected Behaviour (one of the following):

  • The pipeline fails immediately, because the registered Kube Runner has a namespace configured that cannot be overridden.
  • The pipeline runs, but the metadata.namespace key is ignored.

Actual Behaviour:

  • The pipeline gets stuck in a “running” state, even though no steps are being executed
  • drone-runner-kube has logged an error: level=debug msg="stage failed" build.id=45 build.number=45 error="Error 1406: Data too long for column 'stage_error' at row 1" repo.id=5 repo.name=<omitted> repo.namespace=<omitted> stage.id=61 stage.name=default stage.number=1 thread=20
  • drone server has logged an error:
{
  "build.id": 45,
  "build.number": 45,
  "error": "Error 1406: Data too long for column 'stage_error' at row 1",
  "level": "warning",
  "msg": "manager: cannot update the stage",
  "repo.id": 5,
  "stage.id": 61,
  "time": "2020-05-26T14:01:48Z"
}

The stage_error field has a 500 character limit, which I guess isn’t long enough for whatever error message it is trying to generate and store above. The error message itself isn’t logged anywhere, even with both DRONE_DEBUG and DRONE_TRACE enabled. Because of this, the build is left stuck in a “running” state and has to be cancelled manually.

The default namespace is only meant to be used when no namespace is specified; we would not expect it to be enforced or to override a custom namespace in the yaml. With that being said, I think enforcing namespaces would be a valuable new feature, and I believe this pull request might help with this particular use case.
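
For comparison, a pipeline that simply omits metadata.namespace picks up whatever the runner has configured as its default; a quick illustrative example (step contents are placeholders):

kind: pipeline
type: kubernetes
name: default

# no metadata.namespace here, so the runner's DRONE_NAMESPACE_DEFAULT applies

steps:
- name: build
  image: alpine
  commands:
  - echo "running in the runner's default namespace"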

This issue was patched a few days ago. It is not yet part of a tagged release; however, you can pull the latest Docker image to get the fix.

Thanks for the quick response, Ash!

It looks like that pull request is exactly what I need to move forward with this in production. I will just need to validate a couple of cases against the PR once it is complete:

  • Matching all organisations is possible (*)
  • Specifying metadata.namespace: drone- in the rules file still sets the spec.PodSpec.Namespace at the right point to enable ephemeral namespace management (see the rough sketch below)
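
Purely to illustrate what I mean by the second point, this is roughly the kind of rule I would hope to be able to write once the PR is complete. The field names below are my guesses, not the actual schema from the PR:

# hypothetical rules file entry: field names are guesses, not the PR's real schema
- match:
    organisation: "*"        # would match all organisations
  metadata:
    namespace: drone-        # keep the prefix so ephemeral per-build namespaces still work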

I’ll try out the latest drone server image with regard to the stuck “running” state bug. Update: the message trim in the latest tag worked; the error message is passed up and the build is terminated as expected.