This is a pretty hot topic, but I’ve never actually found a solution.
As you probably know, when we have a volume in a container and we install the dependencies (with `npm i` or similar) from a Dockerfile (with default permissions), npm will create a `node_modules` folder in the container owned by `root:root`.
I’m facing two issues with this method (in a local/dev environment):

1. The `node_modules` folder only exists inside the container, but the host’s IDE/LSPs need this folder to work properly (module imports, type definitions, etc.).
2. If the host wants to install/update a package (`npm i ...`, etc.), they will have to rebuild and restart the container for the `node_modules` folder to be updated.
So I came up with another idea: what if I install the dependencies using `CMD` in a Dockerfile (or the `command` property of a service in a `docker-compose` file) and use a volume so the `node_modules` folder can be shared with the host? Unfortunately, this method introduces new issues. For instance, the `node_modules` folder still has `root:root` permissions, so if your host’s user is named differently and doesn’t have the same `uid` and `gid`, you will need root privileges to update `node_modules` (`sudo npm i ...`).
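A common workaround for this permissions mismatch (a sketch, not something from my config below — it assumes your host `uid`/`gid` are exported as `UID`/`GID` environment variables before running compose) is to run the container process as the host user:

```yaml
# Sketch: run the service as the host user so files created in the
# bind mount (like node_modules) are owned by the host, not root.
# Assumes: UID=$(id -u) GID=$(id -g) docker-compose up
services:
  app:
    build: .
    user: "${UID}:${GID}"
    volumes:
      - ./:/usr/src/app
```

This avoids `sudo npm i`, but as discussed in the answer below, it still treats the bind-mounted `node_modules` as something the host should touch.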
Here is my current config:
`docker-compose.yml`:

```yaml
version: '3.7'

services:
  app:
    container_name: 'app_DEV'
    build: .
    command: sh -c "yarn install && node ./server.js"
    volumes:
      - ./:/usr/src/app
    ports:
      - 3000:3000
    tty: true
```
`Dockerfile`:

```dockerfile
FROM node:12.8.1-alpine

WORKDIR /usr/src/app

COPY . .
```
`package.json`:

```json
{
  "dependencies": {
    "express": "^4.17.1"
  }
}
```
`server.js`:

```js
const app = require('express')();

app.get('/', (req, res) => {
  res.send('Hello');
});

app.listen(3000, () => console.log('App is listening on port 3000'));
```
Then you can try to run `docker-compose up` and do an `ls -la`:

```
-rw-r--r--  1 mint mint  215 août 23 16:39 docker-compose.yml
-rw-r--r--  1 mint mint   56 août 23 16:29 Dockerfile
drwxr-xr-x 52 root root 4096 août 23 16:31 node_modules
-rw-r--r--  1 mint mint   53 août 23 16:31 package.json
-rw-r--r--  1 mint mint  160 août 23 16:29 server.js
```
As you can see, every file/folder is owned by `mint:mint` except `node_modules` (`mint` is my host’s user).
So to sum up my question: is there a better way to manage Node.js dependencies with Docker containers?
Answer
A few years have passed since I originally wrote this question. I wanted to come back and share a different opinion, since my POV has changed a bit since then, and I now think the way I wanted to use containers was incorrect.
First of all, pretty much any file or folder created in a container shouldn’t be altered outside of that same container.
In the context of this post, any command altering the `node_modules` folder should be run from within the container. I understand it can be a bit cumbersome, but I think it’s fine as long as you use docker-compose (e.g. `docker-compose exec app npm i`).
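Concretely, the day-to-day workflow looks something like this (a sketch; `app` is the service name from the compose file above, and `lodash` is just an example package):

```shell
# start the stack; the compose command installs dependencies on startup
docker-compose up -d

# install/update a package from *inside* the container,
# so node_modules is only ever written by the container
docker-compose exec app npm i lodash

# drop into a shell in the container for anything else dev-related
docker-compose exec app sh
```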
I think it better fits the way OCI containers are intended to be used.
On the OS compatibility side, since everything (dev-environment related) should be done from inside the container, there shouldn’t be any issue. Note that I’ve seen organizations distribute dev images both with and without preinstalled dependencies. I think both ways are fine; it really depends on whether you want a lightweight dev image or not.
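For example, a dev image with preinstalled dependencies might look something like this (a sketch, not the exact setup from the question — copying `package.json` first lets Docker cache the install layer between source changes):

```dockerfile
FROM node:12.8.1-alpine

WORKDIR /usr/src/app

# install dependencies at build time so the image ships with them;
# this layer is only rebuilt when package.json changes
COPY package.json ./
RUN npm install

# then copy the application source
COPY . .

CMD ["node", "./server.js"]
```

With the "uninstalled" variant, you would drop the `RUN npm install` layer and install on container startup instead, as in the compose file above.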