How to handle build failures due to i/o timeout

I have at least one build that fails every night due to an i/o timeout. In Wireshark I see no DNS failures to Docker, and there are no storage bottlenecks. I can't explain why this happens; I have spent hours trying to track it down and have come up empty.

Trying to move to the next logical solution (at least in my mind) and automate kicking the build off again, since it always works when I re-run it manually. I am not seeing any retry logic that can be added to .drone.yml. My next thought was to configure Drone to send a webhook with the console logs, parse them, and kick the build off again if the log contains i/o timeout, but I am not seeing any way to send the console logs. After that I considered writing the console logs to disk and running a cron job to monitor them, but I am not seeing any way to do that either. The timeout does not show up in the container logs for the Drone server or the runner.
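
One idea I have been toying with, but have not tested, is a final pipeline step that only runs when the build has failed, pulls the log of the docker build step back from the Drone server's REST API, and triggers a brand new build for the same branch if that log contains i/o timeout. This is only a sketch: the drone_token secret name, the curl image, and the stage/step numbers (1 and 2 here) are assumptions that would need to match the real pipeline, and it relies on the server exposing the per-step log and build-create endpoints.

- name: retry-on-io-timeout
  image: curlimages/curl     # any small image with curl and a shell
  environment:
    DRONE_TOKEN:
      from_secret: drone_token   # a user API token stored as a secret (assumed name)
  commands:
    # pull the log of the failed docker build step; stage 1 / step 2 are assumptions
    - >-
      curl -sf -H "Authorization: Bearer $DRONE_TOKEN"
      "$DRONE_SYSTEM_PROTO://$DRONE_SYSTEM_HOST/api/repos/$DRONE_REPO/builds/$DRONE_BUILD_NUMBER/logs/1/2"
      | grep -q "i/o timeout" || exit 0
    # this build is still running, so I am assuming I cannot restart it from inside
    # itself; instead create a brand new build for the same branch
    - >-
      curl -sf -X POST -H "Authorization: Bearer $DRONE_TOKEN"
      "$DRONE_SYSTEM_PROTO://$DRONE_SYSTEM_HOST/api/repos/$DRONE_REPO/builds?branch=$DRONE_BRANCH"
  when:
    status:
      - failure

The obvious thing I would still need to guard against with something like this is an endless loop of new builds if the timeout turns out to be persistent.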

I do see that I can pull this from the logs table in the SQLite DB, but I am not sure I want to go down that rabbit hole. I can also see the failure via the drone CLI with drone log view <repo/name> <build> <stage> <step>. What I cannot work out is how to leverage that to detect an i/o timeout failure for the right build, stage, and step and automatically kick off a new build.
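
The closest thing I have to a plan right now, also untested, is to leave the pipeline alone and run a small script from cron on the Drone host using the drone CLI: grab the last build, grep the log of the build step for i/o timeout, and restart the build if that is what failed. The repo slug and the stage/step numbers below are placeholders for my setup, DRONE_SERVER and DRONE_TOKEN would have to be exported for the CLI, and I am assuming the --format templating works the way I think it does.

#!/bin/sh
# run from cron on the Drone host; DRONE_SERVER and DRONE_TOKEN must be exported
REPO="myorg/myrepo"   # placeholder repo slug
STAGE=1               # placeholder stage number
STEP=4                # placeholder step number of the docker build step

# number and status of the most recent build for the repo
BUILD=$(drone build last --format "{{ .Number }}" "$REPO")
STATUS=$(drone build last --format "{{ .Status }}" "$REPO")

# nothing to do unless the last build failed
[ "$STATUS" = "failure" ] || exit 0

# restart only when the failure was an i/o timeout
if drone log view "$REPO" "$BUILD" "$STAGE" "$STEP" | grep -q "i/o timeout"; then
  drone build restart "$REPO" "$BUILD"
fi

That still leaves the stage and step numbers hard-coded, which is the part I was hoping to avoid.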

My current workaround is to schedule three nightly builds on each repo so that everything still gets built every night.
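
For reference, those schedules are just duplicate cron entries on each repo, roughly this via the CLI (the repo and job names are placeholders):

drone cron add myorg/myrepo nightly-1 "@daily"
drone cron add myorg/myrepo nightly-2 "@daily"
drone cron add myorg/myrepo nightly-3 "@daily"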

Hoping someone has come up with a solution, or that there is something obvious I am just missing, for handling this in a better fashion.

Build Logs

Step 3/26 : FROM golang:${GO_VERSION}-${DEBIAN_VERSION}
Head "https://registry-1.docker.io/v2/library/golang/manifests/1.18.2-bullseye": Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fgolang%3Apull&service=registry.docker.io": dial tcp: lookup auth.docker.io on 127.0.0.11:53: read udp 127.0.0.1:43381->127.0.0.11:53: i/o timeout
exit status 1

Build step from .drone.yml

- name: build
  privileged: true
  image: plugins/docker
  settings:
    repo: {{ .input.localRepo }}
    dockerfile: {{ .input.Dockerfile }}
    registry:
      from_secret: registry
    insecure: true

Guessing either no one else is having this issue, or no one has found a workaround for it?