I’m running Drone (Kubernetes-native) on AWS EKS, with an autoscaler running in the cluster. The hope is that when CPU utilization rises, the autoscaler will add new nodes and jobs will run on them. At first glance, adding CPU resource requests to the pods would get me what I need. However, some interrelated things seem to be thwarting me:
pipelines are created with node affinity. Drone ‘sticks’ the pipeline steps to the same node as their services, which I read as the same node as their ‘drone-job-*’ pod
job controllers are created without resource requests. The scheduler doesn’t see the CPU requirements up front, so it places these pods seemingly anywhere, even on CPU-starved nodes.
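For comparison, this is roughly the shape I’d want the drone-job-* pods to have (a hypothetical sketch; the pod name, image, and values are illustrative, not what the runner actually emits):

```yaml
# Hypothetical sketch: a resources block on a drone-job-* pod.
# Name, image, and request/limit values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: drone-job-example
spec:
  containers:
    - name: engine
      image: drone/controller:latest
      resources:
        requests:
          cpu: "500m"      # lets the scheduler and autoscaler plan capacity up front
          memory: "512Mi"
        limits:
          cpu: "2"
          memory: "2Gi"
```

With requests like these in place, the scheduler would account for the job’s CPU before placement instead of packing it onto an already starved node, and the cluster autoscaler would have a concrete number to scale against.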
So I have autoscaling set up to recognize that a step requires more CPU than available, but it can’t scale up because it’s stuck on the same node due to node affinity (I think):
Scale-up predicate failed: GeneralPredicates predicate mismatch, cannot put [...] on [...], reason: node(s) didn't match node selector
Any guidance here is welcome. I’m going to continue to experiment but I’m running out of ideas.
Essentially, right now it feels like I want to get CPU requests on these drone-job-* entries:
I can confirm that this is exactly what is happening. I have a massively parallel build (with depends_on statements) and all the steps are started on the same node, hammering its CPU. Please check the node column in the listing below:
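To illustrate the shape of the build (a minimal hypothetical .drone.yml; step names and images are made up): the fan-out steps run in parallel once the first step finishes, but because of the step/service affinity they all land on the node where the drone-job-* pod was scheduled.

```yaml
# Hypothetical minimal pipeline showing the parallel fan-out;
# step names, images, and commands are illustrative.
kind: pipeline
type: kubernetes
name: parallel-build

steps:
  - name: prepare
    image: golang:1.13
    commands:
      - go mod download

  - name: test-a
    image: golang:1.13
    commands:
      - go test ./a/...
    depends_on: [prepare]

  - name: test-b
    image: golang:1.13
    commands:
      - go test ./b/...
    depends_on: [prepare]
```

Here test-a and test-b start concurrently after prepare, so one node ends up running every parallel step at once.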
Just a reminder that native Kubernetes runtime is still experimental and is not recommended for production use. It may be deprecated and replaced by Tekton in the future, so just be careful if relying on this for a production deployment. With that being said, we will accept patches that fix bugs with the current implementation.
Fair enough, and there are labels in multiple spots specifying that it’s experimental. It could be just us confused early adopters. The only other ask I would have from this thread would be a general update since the “drone goes k8s” blog post 6 months ago:
The Kubernetes Runtime is still considered experimental, however, initial testing has been very positive. There are some known issues and areas of improvement, however, I expect rapid progress over the coming weeks.
Has testing remained very positive? Do we still expect rapid progress? Would an update somewhere help, to show a change in priorities? An update like this could save people from wasting their experimental time.
Has testing remained very positive? Do we still expect rapid progress?
I do not believe so. The documentation has been updated to recommend against production use while we re-assess. We are tracking various issues related to Kubernetes where we have summarized our concerns, although no final decisions have been made. Some further reading:
Our current focus is on enabling custom stage definitions [1] and providing a runner framework (conceptually similar to Kubernetes operator framework). This will enable creation of custom runners, and will decouple runners from Drone core. I expect this will lead to a community-driven Kubernetes runtime that supersedes what we have today. I also expect the current Kubernetes runtime to remain active as a community-driven runtime, assuming there is interest in maintaining it despite its faults.