I’m having a bit of trouble trying to get my scenario working.
I’ve got integration tests that I want to run, but they depend on a postgres database being up and running with my migrations already applied.
The trouble is that our migrations are run in docker containers (when deployed, they run as init containers in k8s). So the steps are:
- Start postgres instance
- Apply migrations (docker containers)
- Run integration tests
- Tear down postgres instance
Is there any way to do this in drone without doing some crazy workarounds? I’ve tried running postgres as a drone service, but the migration containers can’t access the db via the service name. I’ve also tried mounting the docker socket, with the same result.
We would probably need to see a sample project, posted to github, that demonstrates the problem in order to better offer suggestions. The sample project should be simple enough to read and understand in a few minutes, should isolate the problem, and should not include any non-essential code or files. If you can provide such a sample project, I can take a quick look and offer suggestions.
Alternatively we provide enterprise support and consulting https://drone.io/enterprise/
Hey @bradrydzewski, thanks for the quick response. The main issue is that I can’t find a way for docker-in-docker containers to access services.
Example:
```yaml
pipeline:
  ping_works:
    image: postgres
    commands:
      # wait for postgres service to become available
      - |
        until psql -U postgres -d postgres -h postgres \
          -c "SELECT 1;" >/dev/null 2>&1; do sleep 1; done
      # query the database
      - |
        psql -U postgres -d postgres -h postgres \
          -c "SELECT * FROM pg_catalog.pg_tables;"

  ping_does_not_work:
    image: docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    commands:
      - docker ps -a
      - docker run --rm postgres:latest psql -U postgres -d postgres -h postgres -c "SELECT * FROM pg_catalog.pg_tables;"

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
```
The recommended approach would be to run the commands directly in your step:
```diff
-ping_does_not_work:
-  image: docker
-  volumes:
-    - /var/run/docker.sock:/var/run/docker.sock
-  commands:
-    - docker ps -a
-    - docker run --rm postgres:latest psql -U postgres -d postgres -h postgres -c "SELECT * FROM pg_catalog.pg_tables;"
+ping_does_not_work:
+  image: postgres
+  commands:
+    - psql -U postgres -d postgres -h postgres -c "SELECT * FROM pg_catalog.pg_tables;"
```
Thanks for the suggestion @bradrydzewski, but that would not work in my case: in the ping_does_not_work step I would be running my migration containers rather than the postgres image.
I found out that if I pass the build’s network name via --network= to the docker run command, then I can access the postgres service via its name. But the network name changes with each build.
Can I suggest exposing the network name as an environment variable that is available in every step?
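For example (just a sketch; `DRONE_NETWORK` here is the proposed variable, not something drone provides today, and `my-migration-image` is a placeholder):

```bash
# join the spawned container to the per-build network so it can reach the postgres service by name
docker run --rm --network=$DRONE_NETWORK my-migration-image
```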
What about running your migration container as an actual pipeline step? By allowing drone to launch the container, it will ensure it is a member of the correct network.
```yaml
pipeline:
  migration:
    image: my-migration-container

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
```
That would not be a good solution in my case: we’ve got multiple migration containers, and our monorepo has a convention that lets us create new microservices (and their migrations) without having to change the CI pipeline.
It’s really specific to our repository, so I can’t share a detailed example. But making it easy for docker-in-docker containers to join the drone network would go a long way.
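Concretely, with the pipeline-step approach our config would need a new step every time a service is added, roughly like this (a sketch; the image names are placeholders):

```yaml
pipeline:
  migrate_service_a:
    image: migrations-service-a
  migrate_service_b:
    image: migrations-service-b
  # ...one additional step for every new service's migrations
```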
I managed to get the migrations running by using the following snippet:

```bash
DRONE_NETWORK=$(docker network ls | grep _default | awk '{print $2}')
```

and then passing the DRONE_NETWORK variable to my migration script.
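Putting it together, the step ends up looking roughly like this (a sketch; `my-migration-image` is a placeholder for our actual migration containers):

```yaml
run_migrations:
  image: docker
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  commands:
    # look up the per-build network drone created (its name changes every build)
    - DRONE_NETWORK=$(docker network ls | grep _default | awk '{print $2}')
    # run the migration container on that network so it can reach the postgres service by name
    - docker run --rm --network=$DRONE_NETWORK my-migration-image
```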