Following the issue above, I am trying to implement drone-runner-kube.
After deploying drone-runner-kube using Helm charts and removing the DRONE_KUBERNETES_ENABLED env variable from the drone deployment, the following issues are observed: build pipelines with type kubernetes no longer function:
The clone build step looks OK.
Each build step prints only the following, because of the entrypoint [bash] in the step definition:
+ bash
The Slack plugin fails with this error:
2020/02/29 09:43:52 Post : unsupported protocol scheme ""
Build pipelines with type docker do not start, remaining in a pending state.
It looks like the environment: setting no longer works.
It also seems that volumes: is not functioning as expected; no volume is mounted.
Using the same pipeline as mentioned in the issue above
I had to revert to DRONE_KUBERNETES_ENABLED=true.
Please provide a copy of your yaml file so we can analyze it and advise further.
Build pipelines with type docker do not start, remaining in a pending state.
When using the kubernetes runner, you need type: kubernetes. If you install a docker runner, you can use type: docker. If you use a type with no corresponding runner available, the pipeline will sit in the queue in a pending state until one is available.
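For reference, a pipeline targeting the kubernetes runner declares the type at the top; a minimal sketch using the documented syntax:

kind: pipeline
type: kubernetes
name: default

steps:
- name: build
  image: golang
  commands:
  - go build ./...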
2020/02/29 09:43:52 Post : unsupported protocol scheme ""
This happens when the webhook url is not provided, or when it is sourced from a secret that is not set up properly. See http://discuss.harness.io/t/problems-with-secrets/3286. If you are still having issues, please provide the details requested in the Still Experiencing Issues section.
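For reference, the Slack webhook is normally supplied via the step settings, often from a secret; a minimal sketch where the secret name slack_webhook is an assumption:

- name: notify
  image: plugins/slack
  settings:
    webhook:
      from_secret: slack_webhook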
It looks like the environment: setting no longer works
It also seems that volumes: is not functioning as expected
I just tested both and they are both working as expected. We will evaluate the sample yaml you provide and advise further.
This is the pipeline I’m testing
Looking at the output of df and env, I do not see the volumes or the custom env we are setting.
When using DRONE_KUBERNETES_ENABLED=true everything works fine (no shared volumes, of course).
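For context, a minimal pipeline exercising the same features might look like this (a hypothetical sketch, not the exact yaml under discussion; all names are illustrative):

kind: pipeline
type: kubernetes
name: default

volumes:
- name: cache
  temp: {}

steps:
- name: test
  image: alpine
  environment:
    MY_VAR: hello
  volumes:
  - name: cache
    path: /cache
  commands:
  - env
  - df -h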
If the environment variables are not available, it would imply they are not being fetched by the runner from your secret extension. Did you register the secret extension with your Kubernetes runner? See https://docs.drone.io/runner/extensions/kube/
Once the secret extension has been properly registered with the runner, you can temporarily enable trace logging for the runner. The runner provides detailed logging that can be used to better understand if / how it is fetching secrets. For example, here are some sample log entries one might see:
logger.Trace("secret: external: no matching secret")
logger.WithError(err).Debug("secret: external: cannot get secret")
logger.Trace("secret: external: secret is empty")
logger.Trace("secret: external: found matching secret")
I'm trying to use temporary volumes - why do I have to write these redundant lines? If they are not present, it is safe to assume the volume is temporary.
volumes:
- name: cache
  temp: {}
The secrets are working well if I use the docker runner. I would suggest that failure to get a secret be at least a warning in the log, so one does not have to modify the log level to troubleshoot issues.
I have deployed drone-runner-kube using Helm charts - do I need to do anything more?
I'm trying to use temporary volumes - why do I have to write these redundant lines? If they are not present, it is safe to assume the volume is temporary.
This is inspired by the Kubernetes emptyDir syntax:
volumes:
- name: redis-storage
  emptyDir: {}
The reason we do this is that a volume may be mounted into containers at different paths. This syntax supports that use case.
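For example, the same named volume can be mounted at a different path in each step; a short sketch where the names and paths are illustrative:

volumes:
- name: cache
  temp: {}

steps:
- name: build
  image: golang
  volumes:
  - name: cache
    path: /go/pkg/mod
  commands:
  - go build ./...
- name: inspect
  image: alpine
  volumes:
  - name: cache
    path: /cache
  commands:
  - ls /cache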
The secrets are working well if I use the docker runner.
The code responsible for getting secrets is a shared library used by both the docker and kubernetes runner. If this is not working with the kubernetes runner you can use the trace logs to triage further.
I would suggest that failure to get a secret be at least a warning in the log, so one does not have to modify the log level to troubleshoot issues
Agreed. We will adjust the log level from debug to warn when the system fails to fetch a secret from an external source.
After giving it a second thought, I think that failure to retrieve a secret should fail the build step with a proper message stating what is wrong and how to fix it. Consider someone installing from Helm charts and bumping into this issue next, without a clue how to solve it.
Please also consider eliminating the need to write boilerplate such as the volumes: section for temp volumes.
After giving it a second thought, I think that failure to retrieve a secret should fail the build step with a proper message stating what is wrong and how to fix it.
I definitely understand where you are coming from; unfortunately, this would be a breaking change for some projects. There are projects and pipeline steps that expect a secret to be empty for various reasons (for pull requests, etc.), and they usually check for empty values in their build scripts, as in the sketch below. If we started automatically failing steps when a secret was not available, we would break these pipelines.
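For example, a step may deliberately tolerate a missing secret (a hypothetical sketch; deploy_token and the script name are illustrative):

- name: deploy
  image: alpine
  environment:
    TOKEN:
      from_secret: deploy_token
  commands:
  - if [ -z "$TOKEN" ]; then echo "no token, skipping deploy"; exit 0; fi
  - ./deploy.sh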
We cannot introduce breaking changes to stable features in 1.x; however, we can and will consider breaking changes for 2.x. I agree that the current behavior causes confusion, and hopefully we can address it in a future breaking release.
The proposal mentioned above does not seem to be very tempting.
Setting DRONE_SECRET_PLUGIN_ENDPOINT and DRONE_SECRET_PLUGIN_TOKEN fixed the secrets issues. I think the Helm chart for drone-runner-kube needs to be improved to handle these variables, or at least mention them.
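For anyone else hitting this, the two variables need to be wired into the runner deployment, along these lines (a sketch; the endpoint, secret name, and key are assumptions specific to your setup):

env:
- name: DRONE_SECRET_PLUGIN_ENDPOINT
  value: http://drone-kubernetes-secrets:3000
- name: DRONE_SECRET_PLUGIN_TOKEN
  valueFrom:
    secretKeyRef:
      name: drone-secret-plugin
      key: token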
Being able to configure drone-runner-kube brings me to the next task - migrating all the repositories' .drone.yaml files to add the type: kubernetes entry.
I have noticed that with the docker runner this entry was not required - is it possible to have a system-wide default for how to treat an unspecified type:?
Is it possible to have system-wide defaults for build step resource allocation?
Is there any way to share step definitions between repositories?