Docker-drone auth issues: Kubernetes, 1.0.0-rc4

Outline

Since upgrading our Drone installation from 0.8 to 1.0.0-rc4 we are experiencing authentication issues with the drone-docker plugin.

After setting up Docker credential secrets (DOCKER_USERNAME and DOCKER_PASSWORD) for a specific repo via the Drone interface, the plugin returns:

+ /usr/local/bin/dockerd -g /var/lib/docker
time="2019-01-16T01:04:01Z" level=fatal msg="Error authenticating: exit status 1"

Setup

Drone: v1.0.0-rc4
Kubernetes:

  • GKE node version v1.11.5-gke.5
  • docker v7.3.2
  • Vanilla kubernetes on GKE

See our configs here https://gist.github.com/andrewmclagan/cb276479ec4f0afced163106eaae8afa

Steps

See our pipeline steps here https://gist.github.com/andrewmclagan/cb276479ec4f0afced163106eaae8afa

  1. Login

In step one we attempt to log in from a vanilla docker container, using the same secret variables that would be injected into the drone-docker plugin. This works, with the output:

+ docker login -u $PLUGIN_USERNAME -p $PLUGIN_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
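For reference, the login test in step one amounts to a plain docker client step along these lines (a minimal sketch, not our exact gist configuration; the step name is an assumption, and how the docker daemon/socket is provided is elided here):

```yaml
# Hypothetical reproduction of step one: a vanilla docker client step that
# receives the same secrets the drone-docker plugin would receive.
- name: login-check              # step name is an assumption
  image: docker
  environment:
    PLUGIN_USERNAME:
      from_secret: DOCKER_USERNAME
    PLUGIN_PASSWORD:
      from_secret: DOCKER_PASSWORD
  commands:
    - docker login -u $PLUGIN_USERNAME -p $PLUGIN_PASSWORD
```

The point of the step is simply to prove the secrets themselves are valid before handing them to the plugin.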
  2. Publish

The second step attempts to build the image and publish it using the drone-docker plugin. It uses the exact same secrets that worked above, yet it fails with:

+ /usr/local/bin/dockerd -g /var/lib/docker
time="2019-01-16T01:04:01Z" level=fatal msg="Error authenticating: exit status 1"

This could be related to issue #31. Drone mounts a config map at /root to store the netrc file, which makes /root a read-only directory. Docker then tries to write its credentials to /root/.docker/config.json, which presumably fails and is the root cause of the login error.
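As a rough illustration of the suspected failure mode (a sketch, not the plugin's actual code): if the location where docker needs to create its .docker directory is not writable, the credential write fails before any network authentication even happens. Here the read-only mount is simulated by a pre-existing file blocking directory creation:

```shell
# Simulate /root being unwritable for docker's config: a file sits where the
# .docker directory would need to be created (a stand-in for the read-only
# config-map mount; paths here are illustrative).
HOME_DIR=$(mktemp -d)
touch "$HOME_DIR/.docker"
if mkdir "$HOME_DIR/.docker" 2>/dev/null; then
  RESULT="writable"
else
  RESULT="blocked"   # analogous to what happens under the config-map mount
fi
echo "$RESULT"
```

Under this simulation the directory creation is refused, which matches the symptom: the login error surfaces as a generic `exit status 1` rather than a registry-side auth rejection.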

I have a fix planned for issue #31 which I documented here. The fix is a little involved and requires a good amount of regression testing just to make sure that I don't break anything.


Agreed. I looked through that solution - it seems a lot better than mounting to /root.

As a temporary solution you can override the $HOME env variable to switch to another storage location for the .docker files.

I used the following pipeline config to achieve that:

- name: container-build-push
  image: plugins/docker
  privileged: true
  settings:
    repo: eu.gcr.io/test
    registry: eu.gcr.io
    tag: ${DRONE_BUILD_NUMBER}   
    password:
      from_secret: google_credentials
    username: _json_key
    debug: true
  environment:
    HOME: "/tmp"

Can confirm this works, although it breaks the drone exec CLI.

I wanted to provide an update:

  1. I have a fix for this locally and will publish early next week
  2. It will be included in the rc.5 release, planned for end of next week
  3. In the meantime please use the workaround described above

Very much appreciated!

This should be patched now. Instead of mounting the .netrc as a config map, we are now injecting it as environment variables [1]. I have a more permanent fix planned, but for now this should solve the problem.
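In step terms, the new behaviour amounts to something like the following sketch (the DRONE_NETRC_* variable names are assumptions for illustration based on the linked commit, not a guaranteed interface):

```shell
# Sketch: credentials arrive as environment variables and the step writes
# the .netrc file itself, instead of relying on a config map mounted at /root.
# All values below are illustrative placeholders.
DRONE_NETRC_MACHINE=github.com
DRONE_NETRC_USERNAME=octocat
DRONE_NETRC_PASSWORD=secret
NETRC_FILE=$(mktemp)
cat > "$NETRC_FILE" <<EOF
machine $DRONE_NETRC_MACHINE
login $DRONE_NETRC_USERNAME
password $DRONE_NETRC_PASSWORD
EOF
cat "$NETRC_FILE"
```

Because nothing is mounted read-only at /root anymore, docker is free to create /root/.docker/config.json during login.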

I will close this thread once I have confirmation that the fix is working for a few of you.

[1] https://github.com/drone/drone-yaml/commit/1e125e10a9c8c290de8ca6a05428f38ec8be97db


Ok, I will boot up the image, look through the commit, and report back here :slight_smile:

It works for us on GKE!

Excellent, thanks for testing it out and reporting back!

In what version is it working? I've tested it with :1 and :1.8 but had no success.
The docker running on the host machine (GCE) can connect to the registry fine.

My logs are the following:

Digest: sha256:014a753cb3c1178df355a6ce97c4bf1d1860802f41ed5ae07493ff8a74660d0f
Status: Image is up to date for plugins/docker:latest
+ /usr/local/bin/dockerd --data-root /var/lib/docker --host=unix:///var/run/docker.sock
time="2020-06-23T23:14:39.585067816Z" level=info msg="Starting up"
time="2020-06-23T23:14:39.587882448Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2020-06-23T23:14:39.589461087Z" level=info msg="libcontainerd: started new containerd process" pid=31
time="2020-06-23T23:14:39.589658999Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.589713699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.589982435Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
time="2020-06-23T23:14:39.590215757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.612165470Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
time="2020-06-23T23:14:39.612620800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
time="2020-06-23T23:14:39.613134435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.615147019Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2020-06-23T23:14:39.615223226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620358648Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: \"ip: can't find device 'aufs'\naufs 258048 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n\": exit status 1"
time="2020-06-23T23:14:39.620384661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620574898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.620820415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.621150228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
time="2020-06-23T23:14:39.621171863Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2020-06-23T23:14:39.621232310Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2020-06-23T23:14:39.621241992Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: \"ip: can't find device 'aufs'\naufs 258048 0 \nmodprobe: can't change directory to '/lib/modules': No such file or directory\n\": exit status 1"
time="2020-06-23T23:14:39.621252518Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
time="2020-06-23T23:14:39.628938216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
time="2020-06-23T23:14:39.628978564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
time="2020-06-23T23:14:39.629023113Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629040927Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629054801Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629069094Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629084538Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629120801Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629136616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.629155271Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
time="2020-06-23T23:14:39.629384797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
time="2020-06-23T23:14:39.629509973Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
time="2020-06-23T23:14:39.629981908Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
time="2020-06-23T23:14:39.630021986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
time="2020-06-23T23:14:39.630075400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630091176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630104663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630167222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630183988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630198287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630212407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630225858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630241473Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
time="2020-06-23T23:14:39.630775971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630808635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630823755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.630837937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
time="2020-06-23T23:14:39.631236295Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
time="2020-06-23T23:14:39.631335174Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
time="2020-06-23T23:14:39.631352859Z" level=info msg="containerd successfully booted in 0.019879s"
time="2020-06-23T23:14:39.639920173Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.639960571Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.639985896Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
time="2020-06-23T23:14:39.639998454Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.641054038Z" level=info msg="parsed scheme: \"unix\"" module=grpc
time="2020-06-23T23:14:39.641082435Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
time="2020-06-23T23:14:39.641104996Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
time="2020-06-23T23:14:39.641164098Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2020-06-23T23:14:39.642676887Z" level=error msg="No zfs dataset found for root" backingFS=extfs root=/var/lib/docker storage-driver=zfs
time="2020-06-23T23:14:39.671801221Z" level=warning msg="Your kernel does not support swap memory limit"
time="2020-06-23T23:14:39.671836411Z" level=warning msg="Your kernel does not support cgroup rt period"
time="2020-06-23T23:14:39.671844903Z" level=warning msg="Your kernel does not support cgroup rt runtime"
time="2020-06-23T23:14:39.671850645Z" level=warning msg="Your kernel does not support cgroup blkio weight"
time="2020-06-23T23:14:39.671860895Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
time="2020-06-23T23:14:39.672410332Z" level=info msg="Loading containers: start."
time="2020-06-23T23:14:39.732348361Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2020-06-23T23:14:39.764680340Z" level=info msg="Loading containers: done."
time="2020-06-23T23:14:39.775296436Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
time="2020-06-23T23:14:39.775518256Z" level=info msg="Daemon has completed initialization"
time="2020-06-23T23:14:39.808804189Z" level=info msg="API listen on /var/run/docker.sock"
time="2020-06-23T23:14:39.949008471Z" level=error msg="Handler for POST /v1.40/auth returned error: Get Artifact Registry documentation  |  Google Cloud unauthorized: authentication failed"
time="2020-06-23T23:14:39Z" level=fatal msg="Error authenticating: exit status 1"

@manobi this issue was for the 1.0 release candidate (and is quite old). There are no known issues with using the docker plugin if you are using the latest version of the kubernetes runner or docker runner (providing your yaml is generally recommended so we can advise further). Make sure you are providing the plugin with your registry credentials via the username and password attributes as shown here: http://plugins.drone.io/drone-plugins/drone-docker/

Thanks @bradrydzewski, I'm already declaring my credentials; it might be something with the new Google Artifact Registry.

I could not make it work with plugins/docker, which I've been using for 6 months with the Oracle container registry.
Unfortunately I had to move to plugins/gcr; now it's working. I'm migrating more repos today and will post my pipeline here.
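For anyone hitting the same wall, the working plugins/gcr step looks roughly like this (a sketch; the step name, project, image, and secret names are placeholders, not my exact pipeline):

```yaml
# Hypothetical plugins/gcr step: authenticates with a service-account JSON
# key instead of username/password.
- name: publish-gcr                       # step name is an assumption
  image: plugins/gcr
  settings:
    registry: eu.gcr.io
    repo: eu.gcr.io/my-project/my-image   # placeholder project/image
    tags: ${DRONE_BUILD_NUMBER}
    json_key:
      from_secret: google_credentials     # full service-account JSON key
```

The notable difference from plugins/docker is that the whole JSON key goes into a single secret rather than being split into a `_json_key` username and a password.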

The first does not work and the last works as expected (please click the link to see the original with credentials, which don't appear in the preview).
Do you think it's how my "key" is formatted somehow?