External crypto packages not working, others do

Hello n8n Community,

I’ve been encountering persistent issues specifically with crypto packages (like ethers.js, web3.js, and bitcoinjs-lib) in my n8n setup within a Docker container. Non-crypto packages like lodash and axios work perfectly fine, indicating that the environment is set up correctly for external modules in some cases, but not all. I would appreciate any insights or suggestions to resolve this.

What I've done so far

  1. Installed multiple packages: Installed five npm packages (ethers, web3, axios, lodash, bitcoinjs-lib) in various locations (going one step further with each troubleshooting attempt):
  • Locally in the ~/.n8n/nodes directory.
  • In the ~/.n8n/nodes/node_modules directory.
  • In the ~/.n8n/custom directory.
  • Globally in the Docker container using npm install -g.
  2. Set environment variables: Configured NODE_FUNCTION_ALLOW_EXTERNAL in the docker-compose.yml file to include all these packages, and even tried allowing all external modules with *. Additionally, tried setting N8N_CUSTOM_EXTENSIONS to ~/.n8n/nodes, ~/.n8n/nodes/node_modules, and ~/.n8n/custom.
  3. Testing and verification:
  • Verified that non-crypto packages (lodash, axios) work as expected in n8n’s Function nodes when included in NODE_FUNCTION_ALLOW_EXTERNAL. Removing them from this variable results in the expected “module not found” errors, confirming that the setup is generally working.
  • Ran a simple Node.js script directly in the Docker container’s environment to confirm that ethers.js and the other packages are operational outside of n8n. This test succeeded, indicating the issue is specific to n8n’s integration or configuration.
  4. Permissions and persistence:
  • Addressed permissions issues that came up during global installations.
  • Ensured that the ~/.n8n directory is mapped to a persistent Docker volume.

The Issue:

Despite the general setup working for non-crypto packages, all crypto-related packages continually result in the following error within n8n: ERROR: Cannot find module 'ethers' [line 1]. This occurs even when the packages are confirmed to be installed and the environment variables are set to allow them.

Seeking Help:

Has anyone in the community faced and resolved similar issues with crypto packages in n8n, particularly within a Docker environment? Any advice on additional steps to troubleshoot or configure the environment would be highly valuable. Here’s what I am particularly curious about:

  • Are there known issues or additional considerations with crypto packages like ethers.js in n8n?
  • Might there be additional security or permission layers in Docker affecting access to these packages?

I appreciate any insights or suggestions the community can offer. Thank you for your help!

Current env:
~/.n8n/nodes/node_modules $ env
N8N_HOST=
NODE_FUNCTION_ALLOW_EXTERNAL=*
NODE_VERSION=18.18.2
HOSTNAME=b98f7a099c23
YARN_VERSION=1.22.19
NODE_FUNCTION_ALLOW_BUILTIN=*
SHLVL=1
HOME=/home/node
OLDPWD=/home/node/.n8n/nodes
N8N_PORT=5678
NODE_ICU_DATA=/usr/local/lib/node_modules/full-icu
TERM=xterm
GENERIC_TIMEZONE=Europe/Amsterdam
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
N8N_CUSTOM_EXTENSIONS=/home/node/.n8n/custom
N8N_RELEASE_TYPE=stable
PWD=/home/node/.n8n/nodes/node_modules
WEBHOOK_URL=
N8N_PROTOCOL=https
NODE_ENV=production
N8N_VERSION=1.21.1
~/.n8n/nodes/node_modules $ n8n -v
1.21.1
~/.n8n/nodes/node_modules $ node -v
v18.18.2

Information on your n8n setup

  • n8n version: 1.21.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via Docker
  • Operating system: Ubuntu 22


Hey @AAQ,

Welcome to the community :cake:

To use external packages you would need to install them globally in the Docker container and then set the allow_external env option.

Can you share how you attempted to install them globally? I have quickly given it a test using a Dockerfile and it appears to be working as expected for other packages like tesseract.

Hello Jon,

Thank you for your reply.

Besides this, I have also installed ethers normally in multiple locations, as I mentioned before.
Also, in the previous post you can see that the external env is set up correctly, I guess?

Here is the docker-compose.yml file:

version: "3.7"

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
      - NODE_FUNCTION_ALLOW_BUILTIN=*
      - N8N_CUSTOM_EXTENSIONS=/home/node/.n8n/custom
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true

And lastly, here is what you see when trying to run it in n8n:

That is “sadly” not how Docker works. You have to use the currently running container and then create a new image from it. You can find some information about how that can be done here:

Anyway, that is still dirty. The best is to create a custom Docker image via a Dockerfile and use the n8n image as a base. Here is one of many examples in the forum where somebody does something like that:

Hi @jan,

I had this same issue some time ago; I could fix it on Windows but not on Linux. I don’t use a Docker version but a simple npm-installed version of n8n, as recreating a Docker image every single time is quite tedious.

Any solution for the npm installed version instead of docker?

Hello @jan Thank you so much for your reply.

I’ve done what you (the articles) told me to do, created a Dockerfile with the following in it:

FROM n8nio/n8n

USER root

RUN npm install -g ethers
RUN npm install -g solc
RUN npm install -g web3
RUN npm install -g mythxjs

Then built my new image with docker build -t n8n-crypto .
This n8n-crypto image was then referenced in my docker-compose.yml file:

  n8n:
    image: n8n-crypto
    restart: always

After restarting it, I was redirected to the setup again, completed it, and it worked! The test for ethers worked just fine.
Thank you so much :pray:

But… whenever I restart the server / Docker, I get redirected to the setup again.
Any idea what causes this?

Yup, I also tried it with PM2 before, but then I read somewhere that n8n officially supports/recommends the Docker install. Am I right @jan?
I prefer the npm/pm2 install as well, to be honest.

Hey @AAQ,

You need to change the user back to node, so your Dockerfile would look like this:

FROM n8nio/n8n

USER root

RUN npm install -g ethers
RUN npm install -g solc
RUN npm install -g web3
RUN npm install -g mythxjs

USER node

Then in your compose file as long as you have correctly mounted a volume for /home/node/.n8n it will keep the data.

You got it. Officially we recommend the Docker approach as it makes life a lot easier and there are fewer outside factors that can cause problems.

Hi @Jon,

I have been working with @AAQ on this issue and we got it working in Docker with custom npm packages. The only remaining issue is that data is not persistent: if we redeploy, we are back at the n8n setup again.

docker-compose.yml

version: '3.8'

services:
  n8n:
    image: ghcr.io/***/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    volumes:
      - n8n_data:/home/node/.n8n
  caddy:
    image: ghcr.io/***/caddy:latest
    restart: always
    ports:
      - '80:80'
      - '443:443'
    command: caddy reverse-proxy --from subdomain.domain.com --to n8n:5678
    volumes:
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
  n8n_data:

The code used for deployment:

      echo $(githubPAT) | docker login ghcr.io -u $(githubUsername) --password-stdin
      export COMPOSE_PROJECT_NAME=n8n-docker
      docker compose -f $(serverPath)/docker-compose.yml pull
      docker compose -f $(serverPath)/docker-compose.yml down
      docker compose -f $(serverPath)/docker-compose.yml up -d

I see that the volumes are created nicely in: /var/lib/docker/volumes/ for which /var/lib/docker/volumes/n8n-docker_n8n_data/_data is present.

@Jon I also checked by going into the running container and running:

docker exec -it [n8n-container-name] /bin/sh
touch /home/node/.n8n/testfile

I do see the file being copied to /var/lib/docker/volumes/n8n-docker_n8n_data/_data, yet nothing else than my test file. I did create a workflow etc. to test with after going through the setup again.

I also checked the Docker logs, which don’t give any error whatsoever.

I do see the SQLite db inside the container at: /root/.n8n

Would that mean I have to change it to:

services:
  n8n:
    image: ghcr.io/***/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    volumes:
      - n8n_data:/root/.n8n

I will test it in the meanwhile, but the progress and example will be nice for the forum.

EDIT:

Changing it to:

volumes:
      - n8n_data:/root/.n8n

This did the trick! It’s now persistent, as the SQLite db is being copied!

One more piece now remains @Jon. We have a lot of workflows, which we currently save manually and upload to our git, as we don’t have the enterprise n8n version. Is there a way to import these via a command or API?

Hey @Bryce,

You shouldn’t be using /root/.n8n, which would suggest your Docker image is likely not finishing with USER node to set the correct user, as included in the example I posted above.

You can use the n8n CLI or the API to import workflows. You could also automate exporting the workflows and saving them to git with an n8n workflow.

Hi @Jon, I tested it with USER node, but then the Docker container does not start. As everything is done on the VPS with the root account, this was the only way to get it working.

For the workflows exports/imports I used the following, which worked for me:

Exports:

      docker exec -u root n8n-docker-n8n-1 mkdir -p /root/.n8n/backups/latest
      docker exec -u root n8n-docker-n8n-1 sh -c 'cd /root/.n8n && n8n export:workflow --all --output=backups/latest/workflows.json'
      docker exec -u root n8n-docker-n8n-1 sh -c 'cd /root/.n8n && n8n export:credentials --all --output=backups/latest/credentials.json'

Imports:

      docker exec -u root n8n-docker-n8n-1 sh -c 'cd /root/.n8n && n8n import:workflow --input=backups/latest/workflows.json'
      docker exec -u root n8n-docker-n8n-1 sh -c 'cd /root/.n8n && n8n import:credentials --input=backups/latest/credentials.json'

Hey @Bryce,

Everything being done with root shouldn’t cause any issues so I suspect there is possibly more to it and maybe you were running into the old permission issues we would see from when we changed the user.