Hi! I'm using Drone on Kubernetes and I realized that while jobs are running under runner-kube, they don't submit step logs. All I see is "1" in the log section. What should I do in order to see the logs?
We have not been able to reproduce issues with logging, and this does not appear to be impacting the majority of our users, although one other person did indicate they had logging issues. When we cannot reproduce a problem, we ask individuals to dive into the code to help identify the issue and, if possible, submit a patch to resolve it. You can find some more details about how to test the runner in this thread:
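As a general first step when debugging, verbose logging can be enabled on the runner itself so it reports what it is doing. A minimal sketch, using the standard DRONE_DEBUG and DRONE_TRACE settings supported by drone-runner-kube, added to the runner container in the Deployment:

```yaml
# Extra environment variables on the runner container to
# surface verbose runner logs while debugging.
env:
- name: DRONE_DEBUG
  value: "true"
- name: DRONE_TRACE
  value: "true"
```

With trace logging enabled, the runner pod's own stdout should report each step being run and its output being uploaded, which can help narrow down where the pipeline logs are lost.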
I am experiencing the same issue. It seems the runner is either not collecting the logs or not posting them to the controller. After collecting the log files, I reverted to the previous configuration, and log collection worked again.
I would expect the API call to /rpc/v2/step/938/logs/upload to contain the log lines, but the collection is not working. The only information passed on to the controller is [{"pos":0,"out":"","time":6}], and this is the same for all steps.
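For comparison, a healthy upload would be expected to carry the actual step output in the out field of each line, roughly along these lines (the values below are illustrative, not taken from a real capture):

```json
[
  {"pos": 0, "out": "+ echo hello\n", "time": 0},
  {"pos": 1, "out": "hello\n", "time": 0}
]
```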
I've studied the debug article and inspected the runner source code, but I couldn't find the part of the code that retrieves the logs from the pod. I hope someone with knowledge of the runner can give us some guidelines on how to proceed, as I would really like to have the runner up and running.
Below I've posted:
the Drone pipeline
the output of the controller
some output of the runner
I’ve created a .drone.yaml file with a few steps to test:
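The pipeline itself isn't reproduced here, but judging from the [test:N] step output quoted later in the thread, it was along these lines. This is a hypothetical reconstruction: the step name test matches the log prefix and the commands match the quoted output, while the image is an assumption:

```yaml
# Hypothetical reconstruction of the test pipeline; only the
# commands and the step name are taken from the quoted output.
kind: pipeline
type: kubernetes
name: default

steps:
- name: test
  image: alpine:3
  commands:
  - echo hello
  - sleep 5
  - echo world
  - sleep 5
  - echo goodbye
  - sleep 5
  - echo world
```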
@Rick-Jongbloed the controller is used by the legacy Kubernetes integration that was deprecated some time ago. People sometimes end up installing the deprecated integration by using an unofficial installation method, such as a Helm chart, which is not kept up to date and defaults to deprecated features.
I recommend installing the Kubernetes runner using the official installation instructions, or using a forked Helm chart that is more actively maintained by some of our community members: https://github.com/HighwayofLife/helm-charts-drone
Thanks for your reply. I do not use Helm; I created the deployment based on the official documentation for setting up the runner.
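For reference, the documented setup boils down to a Deployment along the lines of the abbreviated sketch below; the RPC host and secret values are placeholders, and surrounding RBAC resources are omitted:

```yaml
# Abbreviated sketch of the documented runner Deployment;
# host and secret values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-kube
  namespace: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner-kube
  template:
    metadata:
      labels:
        app: drone-runner-kube
    spec:
      containers:
      - name: runner
        image: drone/drone-runner-kube:latest
        ports:
        - containerPort: 3000
        env:
        - name: DRONE_RPC_HOST
          value: drone.example.com   # placeholder
        - name: DRONE_RPC_PROTO
          value: https
        - name: DRONE_RPC_SECRET
          value: your-shared-secret  # placeholder
```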
Sorry, I was thrown off by the term controller, which has a special meaning: it refers to the legacy implementation and the legacy drone/controller image. In the context of this thread, however, it looks like controller is referring to the server.
[test:1] + echo hello
[test:2] hello
[test:3] + sleep 5
[test:4] + echo world
[test:5] world
[test:6] + sleep 5
[test:7] + echo goodbye
[test:8] goodbye
[test:9] + sleep 5
[test:10] + echo world
[test:11] world
[test:12]
Based on my testing I cannot reproduce any issues with logging. I wish I could provide more assistance, but I cannot fix what I cannot reproduce, so I will need to rely on those who can reproduce it to dig into the code and publish their findings.
I have the same problem as all the others. No errors are logged by the Drone server or the kube runner.
Currently I suspect that the ARM64 image is somehow broken, since @lnattrass showed metadata that points to ARM64 and I am running on ARM64 as well. @bradrydzewski, I guess you tried to verify using amd64.
Logging is neither visible in the kube runner UI nor in the drone server pipeline view. The actual build pods are logging to stdout.
Should the output be visible in stdout of the kube-runner pod as well?
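If the arm64 image really is the culprit, one way to test the theory (assuming a mixed-architecture cluster with at least one amd64 node) is to pin the runner pod to amd64 nodes via a nodeSelector on the Deployment's pod template:

```yaml
# Sketch: constrain the runner pod to amd64 nodes using the
# standard kubernetes.io/arch node label.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
```

If logging starts working on amd64 with an otherwise identical configuration, that would support the broken-arm64-image theory.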
I am running Kubernetes version 1.14.6 and am not seeing any issues with logging. Is it possible there is a regression in Kubernetes, or perhaps an issue with the specific distribution you are running?
@lnattrass thanks for taking the time to research this and send a patch. I applied the patch and can confirm that everything is working as expected (everything was previously working for me, but this confirms there was no regression). A new drone-runner-kube image is available, so hopefully folks can pull the latest image and give it a try.