This is likely something simple that I am just missing. I am running Drone in Kubernetes along with the Drone autoscaler. I can use the Drone CLI to create an agent server (I'm using DigitalOcean), which seems to get set up correctly. However, when I log into that agent server and look at the agent container's logs, I see a ton of:
INFO: 2018/04/17 04:29:34 transport: http2Client.notifyError got notified that the client transport was broken unexpected EOF.
With a few of these every now and then:
2018/04/17 04:29:31 grpc error: done(): code: Unavailable: rpc error: code = Unavailable desc = transport is closing
Here is my current setup, including everything I have checked… so hopefully something is just misconfigured and this is a simple fix.
The builder-drone pod exposes ports 8000 (HTTP) and 9000 (gRPC). This was set up using the Helm chart for Drone.
I have an ingress that maps drone.my.domain.com:443 (I have a cert set up) to the builder-drone pod on port 8000 for the web UI and basic API.
I have an ingress that maps agent.my.domain.com:80 to the builder-drone pod on port 9000.
The drone-autoscaler pod is set up to talk to the builder-drone pod on port 8000 using local Kubernetes DNS. It is also configured to hand the agent.my.domain.com:80 address to the DO agents that it starts up.
The Kubernetes ingresses/services that do this mapping are based on nginx.
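For reference, here is roughly what I think the gRPC ingress would need to look like if this is the ingress-nginx controller. This is only a sketch based on my reading of the ingress-nginx docs, and the resource/secret names in it are hypothetical — but my understanding is that the backend protocol has to be declared explicitly, and that ingress-nginx only speaks HTTP/2 (which gRPC requires) over TLS, so a plain port-80 ingress may not work at all:

```yaml
# Hypothetical sketch of the agent ingress, assuming the ingress-nginx controller.
# Without the backend-protocol annotation, nginx proxies plain HTTP/1.1 to the
# backend, which breaks gRPC (gRPC needs end-to-end HTTP/2).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: drone-agent-grpc          # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:                            # ingress-nginx only does HTTP/2 over TLS,
    - hosts:                      # so the gRPC host likely needs a cert too
        - agent.my.domain.com
      secretName: agent-tls       # hypothetical secret name
  rules:
    - host: agent.my.domain.com
      http:
        paths:
          - backend:
              serviceName: builder-drone
              servicePort: 9000
```

If the controller can't be made to proxy gRPC this way, an alternative I've seen suggested is to bypass the ingress entirely and expose port 9000 directly via a LoadBalancer service.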
Here are my thoughts/questions on what could be wrong:
- Could nginx be messing up the gRPC connection if it's treating it like a plain HTTP connection?
- Is the basic premise of this setup correct? That is, if everything above is configured correctly, should it generally work?
- Is this setup just doomed to fail?