We have a company n8n setup both in staging and production. I’m trying to install a local package and have had success locally, but no success in staging.
- I’m installing a local library by running the following in the Dockerfile:

```dockerfile
COPY ./mymodule/ /data/mymodule
RUN npm_config_user=root npm install -g /data/mymodule
```
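For context, a fuller sketch of such a Dockerfile (the base image and tag are my assumption; match whatever your repo actually builds from — n8n also lets you bake the allow-list into the image instead of setting it in compose):

```dockerfile
# Assumed base image; use whatever your repo actually builds from.
FROM n8nio/n8n:0.164.1

# Copy the local library into the image and install it globally.
COPY ./mymodule/ /data/mymodule
RUN npm_config_user=root npm install -g /data/mymodule

# Alternative to docker-compose.yaml: set the Function-node allow-list here.
ENV NODE_FUNCTION_ALLOW_EXTERNAL=mymodule
```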
- I’ve also added `NODE_FUNCTION_ALLOW_EXTERNAL` to the environment in `docker-compose.yaml`.
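For reference, the compose entry would look roughly like this (the service name and the value are assumptions on my part; the variable holds a comma-separated list of modules that Function nodes may require):

```yaml
services:
  n8n:
    # ...image, ports, volumes as in your existing file...
    environment:
      # Allow Function nodes to require() this external module.
      - NODE_FUNCTION_ALLOW_EXTERNAL=mymodule
```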
When I spin up a local Docker container, I can require the library with no problem:

```javascript
const mymodule = require("mymodule");
```
Yet, when I deploy this to our staging environment, the same import gives an error:

```
ERROR: Cannot find module 'mymodule' [Line 110]
```
I’ve even tried setting `NODE_FUNCTION_ALLOW_EXTERNAL` in `/staging/secrets.yaml`, but I still get the same error.
It doesn’t seem to be an installation error, because I can see the `mymodule` folder in `/usr/local/lib/node_modules` in staging.
Information on my n8n setup
- n8n version: 0.164.1
- Running n8n via Docker installing packages with npm
Are there any differences between staging and prod? I know using the `-g` option can sometimes cause problems. Have you tried adding your module to the package.json in the nodes-base folder to see if that helps?
Hey @jon ,
Thank you for coming back to me on this. I’m a bit of a newbie and not in charge of the repo. There is a different folder for production, but I haven’t tried anything in production yet, since I haven’t gotten staging to work.
Where is the package.json of nodes-base usually located?
I forgot to add that I can see the `mymodule` folder, with the correct content, successfully installed in `/usr/local/lib/node_modules/mymodule` in staging (as well as in my local Docker, where the import works).
It working in your local Docker is interesting. In theory, if it works there it should work everywhere, which is one of the plus sides of using containers.
If you delete your local image and run the same process again, does it still work?
Hey @jon ,
Yes, that’s what’s been baffling me, particularly with the folder actually being in `/usr/local/lib/node_modules/` in staging.
I deleted both the containers and the volumes, rebuilt them with docker-compose, and it works again in my local Docker.
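For anyone following along, the rebuild-from-scratch steps look roughly like this (a sketch of what I ran; `-v` also removes the named volumes):

```shell
# Stop containers and remove volumes, then rebuild without cache and restart.
docker-compose down -v
docker-compose build --no-cache
docker-compose up -d
```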
That is a bit strange. When you set the `NODE_FUNCTION_ALLOW_EXTERNAL` option for staging, did you rebuild the container afterwards?
Hey @jon ,
That’s a great question. I’m deploying things to staging using a CI/CD pipeline that the repo manager created (he’s currently OoO).
```
Running with gitlab-runner 14.9.1
Preparing the "docker" executor
Using Docker executor with image XXX
Authenticating with credentials from /.../config.json
Pulling docker image XXX ...
Using docker image XXX with digest XXX ...
Getting source from Git repository
Fetching changes ...
Initialized empty Git repository in /.../n8n/.git/
Created fresh repository.
Checking out commitX as refs/merge-requests/X/head...
Skipping Git submodules setup
Downloading artifacts for Kubernetes Policy Check ...
Downloading artifacts from coordinator...
Using docker image XXX
```
To my layman’s eyes, that seems to indicate a rebuild, no? I can confirm with the engineers at my company.
It looks like it could be. As a test, if you pop open a Function node, paste in the below, and give it a run, does the browser console show the correct env option?
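Jon’s exact snippet wasn’t preserved in the thread; a minimal stand-in (my sketch, not his original code) that you can paste into a Function node would be:

```javascript
// Log the env var so you can see what the running n8n process actually has.
// Inside a Function node this shows up in the console; outside n8n the
// value will simply be undefined.
console.log('NODE_FUNCTION_ALLOW_EXTERNAL =', process.env.NODE_FUNCTION_ALLOW_EXTERNAL);
// In a real Function node, end with: return items;
```

If the logged value doesn’t match what you put in the staging config, the container is running with stale environment variables.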
That shows that it hasn’t been updated and that’s definitely causing the issue. Thank you!!
No problem. To be honest, I wasn’t sure what the next step would be, so this gives me some thinking time just in case this doesn’t solve it.
Hi @jon ,
A colleague engineer updated the environment variable manually and it worked!
So we need to update the CI/CD pipeline to set those env variables. The issue was unrelated to n8n itself.
That is good news, thanks for letting me know.