I have been trying to use caching to avoid npm i starting from scratch on every build, but I'm having no luck.
If I put RUN ls -la in my Dockerfile before running npm i, I can see that node_modules doesn't exist. Neither does /var/lib/docker.
But if I put the ls commands in my .drone.yml file, the directories exist and get listed out (though of course that doesn't run my Dockerfile, which defeats the purpose).
From what I can see in your yaml, you appear to be using the cache plugin to restore your cache from the host to the node_modules directory, which looks fine.
What looks problematic to me is that your docker step mounts /drone/src/node_modules to a host volume, which overwrites the restored node_modules directory with the host directory and negates your previous cache restore.
I cannot see a reason to mount host volumes into your docker step, so removing both volume mounts is recommended. Note that the docker plugin uses docker-in-docker and therefore does not require mounting the host volume.
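As a rough sketch of what that leaves you with (the step name, image repo, and secret names below are placeholders, not taken from your actual pipeline), a docker plugin step without host volume mounts might look like:

```yaml
# Hypothetical step; repo, tags, and secret names are illustrative.
- name: publish
  image: plugins/docker
  settings:
    repo: registry.example.com/myapp
    tags: latest
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
  # Note: no "volumes:" section. The plugin runs docker-in-docker
  # and does not need the host's /var/lib/docker or /drone/src mounts.
```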
Thanks for the reply. I removed those entries, but it is not caching anything. I think I just don’t have a good understanding of how the pieces fit together. Here’s my updated conf:
FROM node:lts-alpine3.14 as base
WORKDIR /drone/src
COPY . .
RUN npm i
RUN npx nx build www
RUN npx nx build api
CMD ["node", "./dist/apps/api/main.js"]
I think perhaps there is a minor misunderstanding with regards to how the docker plugin and volumes work in drone. The docker plugin runs docker-in-docker. This means the docker build is a container running in a container. The /drone/src volume is mounted to the outer container (plugins/docker) but not the inner container (dockerfile).
I definitely agree that I don’t understand how the layering is working. So that being said, I’m trying to accomplish 2 things:
I don’t want to re-download docker images on every build
I don’t want npm i in my Dockerfile to re-download all my deps on every build.
I think #1 is working. But how can I accomplish #2? I want to retain my node_modules between builds while being able to push a docker image to my private repo.
I don’t want to re-download docker images on every build
The docker plugin is ephemeral and does not have access to the host machine's docker cache, for isolation and security reasons. If you need access to the host machine's docker cache (for image caching, layer caching, etc.), this particular plugin is not going to be a good fit. The docker plugin is not mandatory; you can use other plugins or even shell scripting. See this previous thread for further explanation:
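One common shell-scripting alternative (sketched here with placeholder names; note that mounting the host socket sacrifices build isolation and requires the repository to be marked trusted) is to run docker build in an ordinary step that mounts the host Docker socket, which lets builds reuse the host layer cache:

```yaml
# Hypothetical alternative to plugins/docker; the repository must be
# trusted for host volumes, and this gives the step root-equivalent
# access to the host, so use with care.
steps:
- name: build-with-host-cache
  image: docker
  volumes:
  - name: dockersock
    path: /var/run/docker.sock
  commands:
  - docker build -t registry.example.com/myapp:latest .

volumes:
- name: dockersock
  host:
    path: /var/run/docker.sock
```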
I don’t want npm i in my Dockerfile to re-download all my deps for every build.
I think I see the disconnect. Docker builds your image from a temporary directory (note that this is a bit of an oversimplification). This means that under the hood, the COPY directive in your Dockerfile copies /drone/src into the temporary directory. Therefore, running npm install inside the Dockerfile downloads your node_modules into the temporary directory, not into /drone/src/node_modules.
If you want to cache node_modules, you need to download your dependencies outside of your Dockerfile. Then you can use the ADD or COPY directive to add the node_modules folder to your Dockerfile. This eliminates the need for RUN npm i.
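A minimal sketch of that approach, assuming an earlier pipeline step has already run npm i in /drone/src so that node_modules exists in the build context when the image is built:

```dockerfile
# Hypothetical revision of the Dockerfile shown above; assumes a
# prior pipeline step populated node_modules in the build context.
FROM node:lts-alpine3.14 as base
WORKDIR /drone/src

# COPY brings node_modules in from the build context,
# so RUN npm i is no longer needed here.
COPY . .
RUN npx nx build www
RUN npx nx build api

CMD ["node", "./dist/apps/api/main.js"]
```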
Thanks for the help… getting closer. The cache now works. I'm actually running my build command (nx build) outside the docker plugin now. So far so good; it's shaved about 11 minutes off.
But my build process puts everything into a “dist” folder that I want to COPY in my Dockerfile, and that folder either doesn't exist or is somewhere I can't find it.
Where do I COPY from in the Dockerfile to access either node_modules or dist?
Unfortunately, without the ability to reproduce this (access to the source code and a sample pipeline) it is very difficult to advise on these matters. If you post a simple sample project that can be used to reproduce the issue, it may increase the chances that someone from the community will volunteer their time and help you come up with a solution.
I am facing a similar issue. No matter what I do, I cannot get plugins/docker to use the volume directory in the example below. @bradrydzewski can you shed some light here?
Correct: when you run docker build, the docker client bundles all your files and folders into a tar file, excluding anything in .dockerignore, and sends it to the docker daemon over the docker socket. The docker daemon then unpacks these files and folders into a temporary directory and executes your Dockerfile directives. Files and folders excluded by .dockerignore are not bundled into the tar, and therefore cannot be referenced by ADD or COPY directives.
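For example, a .dockerignore like the following (illustrative contents) would keep node_modules out of the tar sent to the daemon, so a COPY node_modules directive in the Dockerfile would fail even though the folder exists on disk:

```
# .dockerignore (example) — these paths never reach the daemon,
# so ADD/COPY cannot see them inside the build.
node_modules
dist
.git
```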
Unfortunately the official Docker documentation on this topic is sparse, so here are some useful articles that explain this concept further: