Starting services after Drone pipeline?

I’m finding it difficult to locate consistent documentation on the compose or services section of the .drone.yml file, specifically with regard to inheriting services from a docker-compose.yml file.

I am aware that Drone attempts to be a superset of the docker-compose.yml format, with the added ‘pipeline’ feature.

However, in GH issues:

  • github.com/drone/drone/issues/902#issuecomment-76745751
  • github.com/drone/drone/issues/906#issue-59829409

And in the examples:

  • github.com/drone/drone-yaml-v1/tree/master/samples
  • docs.drone.io/services/

There is inconsistency in the syntax, with little mention of how to ‘import’ docker-compose.yml. Additionally, @bradrydzewski has said “We do not have exact feature parity with docker-compose yet but feel free to open issues for specific discrepancies that you identify.”

My question is: which of the two syntaxes is correct, i.e. what will allow me to start services following the end of a pipeline?

Additionally, is there full feature parity if docker-compose.yml is a .drone.yml subset? Can I use service labels, volumes, networks, ports?

I’m testing the latest version of Drone, 0.8.4.

Thanks :slight_smile:

There is inconsistency in the syntax

I am not sure I fully understand.

Those github issues are from 2015 and are no longer accurate. The official documentation and source code should be considered the canonical source of information for the yaml configuration.

With little mention of how to ‘import’ docker-compose.yml

I am not sure I understand what is meant by “import”. But there is no mention of importing a docker-compose yaml because it is not possible.

Additionally, is there full feature parity if docker-compose.yml is a .drone.yml subset? Can I use service labels, volumes, networks, ports?

You can see a full list of supported fields here https://github.com/drone/drone-yaml-v1/blob/master/yaml/container.go#L16:L53
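
For instance, a minimal services entry using a couple of those fields might look like this (a sketch only; the image and environment values are illustrative, not from this thread):

services:
  database:
    # image and environment map directly to their docker-compose equivalents
    image: postgres:9.6
    environment:
      - POSTGRES_USER=drone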

My question is: which of the two syntaxes is correct

Sorry, I am not sure I fully understand. Which two syntaxes are we referring to?

what will allow me to start services following the end of a pipeline?

If I understand correctly, you want to run docker compose at the end of your build? The question is, where do you want to start your services? If you want them to run on the same machine as the agent, you would just do this:

pipeline:
  ...
  start:
    image: docker/compose
    commands:
      - docker-compose up -d
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Note that the pipeline is just executing shell commands, which means you can script your pipeline and run whatever commands you like. The only practical difference between a drone pipeline step and a shell script is that the commands are running inside a docker container.
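
For example, a step can run any sequence of shell commands, including scripts checked into your repository (a sketch; the script path is hypothetical):

pipeline:
  custom:
    image: alpine
    commands:
      - echo "any shell logic works here"
      - ./scripts/deploy.sh   # hypothetical script checked into the repo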

Thanks for the swift reply.

To address the inconsistency issue (importing, and which syntax is correct), I’m referring to samples that indicate one of the following:

  1. compose:
      from-file: ./docker-compose.yml
    

    ref: More flexible Yaml with Matrix · Issue #902 · harness/gitness · GitHub

  2. compose:
     from: ./docker-compose.yml
    

    ref: Import docker-compose.yml for service container orchestration · Issue #906 · harness/gitness · GitHub

  3. services:
       # how do I import here?
    

However, it looks as if 1 and 2 are deprecated.

If I understand correctly, you want to run docker compose at the end of your build?

Yes. Should a separate services section be created, like so:

pipeline:
  # build, etc.
services:
  # initialise services here?

or, as you have just mentioned, should it be as part of the pipeline?

To address the inconsistency issue (importing, and which syntax is correct), I’m referring to samples that indicate one of the following:

Drone does not support the from-file attribute to import additional compose files. This may be added in the future, but it is not currently possible.

However, it looks as if 1 and 2 are deprecated.

1 and 2 were never possible. These were just discussion comments, but were never implemented.

Yes. Should a separate services section be created
or, as you have just mentioned, should it be as part of the pipeline?

It is hard for me to answer this question because I’m not entirely sure what you are trying to do. The problem described in this thread is a bit abstract; seeing a more concrete, real-world example might help. I typically recommend people create a simple example that demonstrates the problem they are trying to solve and post it to GitHub (I emphasize simple because I don’t have time to parse through complex examples).

These were just discussion comments, but were never implemented.

This explains why they didn’t work!

Seeing a more concrete, real-world example might help.

As a real-world example, I’d have a pipeline which builds, tests and deploys a repository containing multiple services. A local Dockerfile and docker-compose.yml file are among the contents of this repository, where the results of the pipeline’s build and test steps are later included in the Dockerfile build itself using a shared volume or workspace.

The machine that runs the Drone agent would also be connected to a docker swarm as a manager (or delegator) and is capable of moving the services defined in docker-compose.yml to different nodes based on their deploy constraints.

The docker-compose.yml file also contains the recently built image along with additional auxiliary services it may require.

So, an application is pushed through a pipeline with the steps build, test, docker_build and deploy. I was wondering where the deploy step would be placed in the context of the .drone.yml file.

When designing a pipeline, I find it helpful to first consider how I would solve this problem from the terminal, using simple shell commands:

# compile
$ go build
# test
$ go test
# build image
$ docker build -t some/container .
# start image (e.g. deploy)
$ docker run some/container

This shell script could be translated to the following yaml:

pipeline:
  compile:
    image: golang
    commands:
      - go build
  test:
    image: golang
    commands:
      - go test
  build:
    image: docker
    commands:
      - docker build -t some/container .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  deploy:
    image: docker/compose
    commands:
      - docker-compose up -d
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Note that in the above example we mount the host machine docker socket. We have to adapt the shell script because the build is running inside a docker environment.

Thanks for this, this has really helped clarify what I was after.

I’ve managed to get the pipeline to deploy a docker-compose service set. I have noticed a few things:

  1. If Drone’s agent is not running in privileged mode, it leaves a hanging container with status “Exited”. Giving the drone agent container the --privileged flag stops this.
  2. More importantly, mounting a volume in docker-compose.yml results in unexpected behaviour, because the drone agent cleans the repository following a successful pipeline. This means that files and subdirectories are left empty, but /drone/src/..etc.. remains present for all volumes mounted.

Two questions:

  1. Can I add a ‘privileged’ flag to a pipeline step as opposed to a service? (Reference suggests otherwise).

  2. Is there a way to mitigate the deletion of a cloned repository in a workspace once docker-compose up -d is run? Given I’d want to mount the subdirectory config/ in the repository, the following docker-compose.yml:

    version: '3'
    services:
      my-service:
        image: my/service:latest
        volumes:
          - ./config:/var/service/config:rw
    

    will leave the following directory present but empty on the host system: /drone/src/gitea.example.com/my/service/config/, even if files were originally present.

Can I add a ‘privileged’ flag to a pipeline step as opposed to a service?

I could answer this, but this seems like something you could very quickly test out yourself :slight_smile:

Is there a way to mitigate the deletion of a cloned repository in a workspace

No, the workspace is always deleted at the conclusion of the build. If you want a file to persist to the host machine, you need to manually copy it to the host machine.
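
One way to do that is to mount a host directory into the step and copy the files over (a sketch; the host path /opt/artifacts is hypothetical, and host volume mounts require the repository to be marked trusted):

pipeline:
  persist:
    image: alpine
    volumes:
      # hypothetical host path; requires a trusted repository
      - /opt/artifacts:/artifacts
    commands:
      - cp -r config /artifacts/config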

./config:/var/service/config:rw
will leave the following directory present but empty on the host system: /drone/src/gitea.example.com/my/service/config/, even if files were originally present.

TL;DR: the host machine docker daemon cannot mount a directory that lives inside your build container …

The configuration you have will not work. You cannot mount a volume using a container-relative path if you are connecting to the host machine docker daemon. The docker-compose binary you are running will convert ./config to an absolute path that looks something like this: /drone/src/github.com/.../.../.config

You have mounted the host machine docker socket into your build environment. When you tell docker to mount /drone/src/github.com/.../config it will look for this path on the host machine, not inside your container.

This directory does not exist on the host machine at this path. It exists as a mount point on the host machine in a special directory that looks something like this: /var/lib/docker/volumes/6e86a2eeec3f181d42ef1efd789ded1790f081ce9a5edb70370d77ea64af0d62/_data/drone/src/...../.config
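
You can confirm where the volume actually lives by inspecting the build container from the host machine (a sketch; the container id is a placeholder):

$ docker inspect -f '{{ json .Mounts }}' <container-id>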

For anyone who needs a crude workaround to address this for now: I was able to map volume data from the build container into a sub-container by finding the path to the build container’s volume on the host. I broke it out into each piece of info you need to build up the path:

Finding the name of the current repo, company, and git provider (e.g. github.com, bitbucket.org, etc.):

export REPO=$(basename $PWD)
export COMPANY=$(basename $(dirname $PWD))
export PROVIDER=$(basename $(dirname $(dirname $PWD)))
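
For example, given a workspace path such as /drone/src/bitbucket.org/mycompany/myrepo, these evaluate to:

# PWD=/drone/src/bitbucket.org/mycompany/myrepo
# REPO=myrepo, COMPANY=mycompany, PROVIDER=bitbucket.org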

Finding the id of the current container (i.e. the pipeline container, not the host) from within the container:

export CONTAINER_ID=$(cat /proc/self/cgroup | grep 'docker' | sed 's:^.*\/::' | tail -n1)

Finding the name of the current volume, which holds the data and path you need for mounting:

export VOLUME_NAME=$(docker inspect -f '{{.HostConfig.Binds}}' "$CONTAINER_ID" | sed -n 's/\[\([^:]*\)\:.*/\1/p')

Putting it all together, we can grab the full path to the volume data on the host machine with something like this:

export VOLUME_PATH="/var/lib/docker/volumes/$VOLUME_NAME/_data/src/$PROVIDER/$COMPANY/$REPO"

Finally, once you have the full path, you can use it to update your docker-compose.yml with a standard volumes section. I do a sed replace as a command in my pipeline step to inject the relevant volume path and update my docker-compose.yml. This has been working well so far:
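
The sed command itself isn’t shown here; a minimal sketch, assuming a hypothetical __VOLUME_PATH__ placeholder token in the checked-in docker-compose.yml:

# replace the hypothetical placeholder with the resolved host path
sed -i "s|__VOLUME_PATH__|$VOLUME_PATH|g" docker-compose.yml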

After the pipeline command runs you end up with a docker-compose.yml that looks something like:

services:
  web:
    env_file: ...
    build: ...
      args:
        ...
    depends_on:
      - db
    volumes:
      - /var/lib/docker/volumes/0_7043445343109745817_default/_data/src/bitbucket.org/mycompany/myrepo/.container/config:/opt/config
      - /var/lib/docker/volumes/0_7043445343109745817_default/_data/src/bitbucket.org/mycompany/myrepo/.container/core:/opt/core
      - /var/lib/docker/volumes/0_7043445343109745817_default/_data/src/bitbucket.org/mycompany/myrepo/.container/enterprise:/opt/enterprise
      - /var/lib/docker/volumes/0_7043445343109745817_default/_data/src/bitbucket.org/mycompany/myrepo/.container/log:/opt/log

Hello,

I am using the drone 0.8 agent and server. I wanted to run integration tests using a docker-compose file that I have (the part in .drone.yml is exactly the same). But I get the following error:

Error response from daemon: manifest for docker/compose:latest not found

I can run the same command locally and everything works. Could you please tell me what I am doing wrong? I tried looking on Discourse but didn’t find anything related.

Thanks!

Regards,
Rakesh.

manifest for docker/compose:latest not found

I can run the same command locally and everything works.

Because docker/compose:latest exists on your workstation locally, but it does not exist on the drone agent’s side, so the agent tries to pull it from the registry and fails.

image: docker/compose is just an example. You need to change it to an image name and tag that actually exists and contains docker-compose.
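
For example (a sketch; the version tag is illustrative, so check Docker Hub for tags that actually exist):

pipeline:
  deploy:
    # pin a published tag; docker/compose does not publish "latest"
    image: docker/compose:1.24.0
    commands:
      - docker-compose up -d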