Error response from daemon: devmapper

Got this error today:

Error response from daemon: devmapper: Thin Pool has 4348 free data blocks which is less than minimum required 4449 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

Does anyone know how to fix it? devicemapper is the default storage driver for Docker on these hosts. Should we switch to a different one? What would you recommend?

I also used to have this error (among others) when running on ECS.

Taking the following steps seems to have solved it:

  • using the latest plugins/ecr (or plugins/docker), which executes a “docker system prune” at the end of the build (an equivalent manual command is sketched after this list)
  • using a single agent per host (we’re using c5.large instances now because of a couple of very intensive builds, but in most cases you could probably get away with something smaller)
  • using the host daemon to build images (i.e. sharing the docker socket as a volume and passing daemon_off to the docker/ecr plugin)
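
If you’re not using one of those plugins, the same cleanup can be run manually (or from a cron job on the build host); a minimal sketch, with the 24h age filter as an example value:

  $ docker system prune --all --force --filter "until=24h"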

This final bullet point is important if you’re running on AWS ECS-optimized instances, because of the way their Docker storage is configured: I found out the hard way that running docker-in-docker does not play well with devicemapper.

I’ve also increased the ECS-optimized instance Docker storage volume from 22 GiB to 33 GiB just to be sure, but this might not have been necessary.
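
On the ECS-optimized AMI that storage volume is the one attached at /dev/xvdcz, so the resize has to happen at launch time through the block device mapping; a minimal sketch, assuming you launch the instances with the AWS CLI (the AMI ID and instance type are placeholders):

  $ aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type c5.large \
      --block-device-mappings '[{"DeviceName":"/dev/xvdcz","Ebs":{"VolumeSize":33,"VolumeType":"gp2"}}]'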

Currently, on the AWS ECS host, the original docker-storage configuration is:

$ cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true --storage-opt dm.fs=ext4 --storage-opt dm.use_deferred_deletion=true"
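
For reference, this also looks like the place where the dm.min_free_space option mentioned in the error message would go if we wanted to lower the reserve (Docker’s default is 10%; the 5% below is just an example), followed by a daemon restart:

  DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool --storage-opt dm.min_free_space=5% ..."

  $ sudo service docker restart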

One more thing, which I’m not sure is relevant.
My current deployment has the drone agents running as ECS tasks, while the drone server is a separate Elastic Beanstalk application behind an ELB (in TCP mode to allow communication).

Yeah, I find that the AWS ECS-optimized AMI has a very peculiar configuration: AMI storage configuration - Amazon Elastic Container Service

Amazon ECS-optimized AMIs from version 2015.09.d and later launch with an 8-GiB volume for the operating system that is attached at /dev/xvda and mounted as the root of the file system. There is an additional 22-GiB volume that is attached at /dev/xvdcz that Docker uses for image and metadata storage. The volume is configured as a Logical Volume Management (LVM) device and it is accessed directly by Docker via the devicemapper backend. Because the volume is not mounted, you cannot use standard storage information commands (such as df -h) to determine the available storage.

We had all sorts of issues because of this. Since devicemapper is not supported by dind, our Drone builds would not use the separate Docker volume, and we would soon run out of space on the root volume.
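
Because that volume is not mounted, you have to ask the daemon or LVM directly to see how full the pool is; a minimal sketch, assuming the volume group and pool names from the configuration posted above:

  $ docker info | grep -i 'space'
  $ sudo lvs docker/docker-pool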

So I suggest you try to:

A) use a different base AMI that supports overlay or aufs (but it’s going to be harder to integrate it with ECS; there’s a daemon.json sketch for this at the end of this post)

B) mount the host docker socket, so you do not need dind and can inherit the device mapper configuration:

  publish:
    image: plugins/docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ...

This requires you to enable the “Trusted” option in the Drone UI.

We’ve been using this configuration to run a rather large number of daily builds without problems so far. There might be some security risks when running it on a public server or for an open-source repo, though, as I think different builds using the same socket could get access to each other’s Docker registry credentials. But I haven’t seen any drawbacks running like this on a private server so far.
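
As for option A, a custom AMI mostly just needs Docker pointed at an overlay filesystem instead of the LVM thin pool; a minimal sketch of /etc/docker/daemon.json, assuming a distro and kernel recent enough for overlay2 (you’d also drop the devicemapper options from /etc/sysconfig/docker-storage):

  $ cat /etc/docker/daemon.json
  {
    "storage-driver": "overlay2"
  }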
