N8n + Docker Swarm + GlusterFS

Hello all.

Currently, I'm running n8n on a single server with Docker Compose just fine. The next step is to deploy it as a stack with Postgres and persist data with GlusterFS.

Has anyone tried GlusterFS with n8n? I'm getting this permission error:



2022-09-13T04:07:33.334Z | info | Initializing n8n process "{ file: 'start.js' }"
UserSettings were generated and saved to: /mnt/gluster/vol1/n8n/n8ndata/.n8n/config
Error: There was an error: ENOENT: no such file or directory, mkdir '/mnt/gluster/vol1/n8n/n8ndata/.n8n'
2022-09-13T04:07:33.430Z | error | There was an error initializing DB: "EACCES: permission denied, mkdir '/mnt/gluster'" "{ file: 'start.js' }"
2022-09-13T04:07:33.431Z | info | Stopping n8n... "{ file: 'start.js', function: 'stopProcess' }"


I installed the GlusterFS server + client + the Docker plugin, and once the volume is created and mounted, files show up on all nodes and sync just fine.

I tried giving 777 permissions recursively, but that didn't change the result.

All inputs are much appreciated.
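For anyone hitting the same EACCES: recent official n8n images run as the unprivileged `node` user (UID/GID 1000) by default, so ownership of the mounted path tends to matter more than mode bits, and a `chmod 777` can be masked on a network mount. A quick host-side check might look like this (the path is from the log above; UID 1000 is an assumption about the image's default user):

```shell
# Show numeric owner/group and mode of the data path.
# If the container's default "node" user (UID 1000) is to write here,
# the owner should be 1000:1000.
ls -ldn /mnt/gluster/vol1/n8n/n8ndata

# Align ownership with the container user instead of widening permissions:
sudo chown -R 1000:1000 /mnt/gluster/vol1/n8n/n8ndata
```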

Hey @jay,

Is /mnt/gluster/ the path being mounted as a volume in the container?

Hey Jon.

Appreciate the quick response. And sorry I left a lot of details out.

Here's the YAML file for n8n without Postgres:



version: '3.9'

services:
  n8n:
    image: n8nio/n8n:latest
    deploy:
      mode: replicated
      replicas: 1
      placement:
        max_replicas_per_node: 1
      labels:
        - "traefik.enable=true"
        - "traefik.constraint-label=traefik-public"
        - "traefik.http.services.n8n.loadbalancer.server.port=5678"
        - "traefik.http.routers.n8n.rule=Host(`my.example.domain.io`)"
        - "traefik.http.routers.n8n.tls=true"
        - "traefik.http.middlewares.n8n.headers.SSLRedirect=true"
        - "traefik.http.middlewares.n8n.headers.STSSeconds=315360000"
        - "traefik.http.middlewares.n8n.headers.browserXSSFilter=true"
        - "traefik.http.middlewares.n8n.headers.contentTypeNosniff=true"
        - "traefik.http.middlewares.n8n.headers.forceSTSHeader=true"
        - "traefik.http.middlewares.n8n.headers.SSLHost=domain.io"
        - "traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true"
        - "traefik.http.middlewares.n8n.headers.STSPreload=true"
        - "traefik.http.routers.n8n.middlewares=n8n@docker"
    volumes:
      # - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - n8ndata:/var/lib/n8ndata/data
      - /mnt/gluster/vol1/n8n/n8ndata/local-files:/files # locally mounted /mnt/gluster
    environment:
      - DATA_FOLDER=/mnt/gluster/vol1/n8n/n8ndata
      - N8N_USER_FOLDER=/mnt/gluster/vol1/n8n/n8ndata
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=n8n
      - N8N_BASIC_AUTH_PASSWORD=n8n
      - N8N_HOST=my.example.domain.io
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://my.example.domain.io/
      - GENERIC_TIMEZONE=America/Someplace
      - N8N_LOG_LEVEL=debug
    networks:
      - swarmnetwork
    # command: /bin/sh -c "sleep 5; n8n start"

volumes:
  n8ndata: # driver is glusterfs:latest (the plugin)
    external: true

networks:
  swarmnetwork: # traefik network
    external: true


/mnt/gluster is mounted locally with bricks in /mnt/gluster1/brick (first node), /mnt/gluster2/brick (second node) and so on…

df -Th output:



/dev/sdb1 xfs 29G 304M 29G 2% /gluster/bricks/2


The volume was created with:



$ gluster volume create gfs replica 2 server1name:/mnt/gluster1/brick server2name:/mnt/gluster2/brick force
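As a host-level sanity check (independent of Docker), writing through the FUSE mount on one node and reading it back on another confirms the replica actually replicates; `probe.txt` below is just an illustrative filename:

```shell
# On node 1: write a marker file through the Gluster FUSE mount.
echo "probe-$(date +%s)" > /mnt/gluster/vol1/probe.txt

# On node 2: the same file should appear with identical content.
cat /mnt/gluster/vol1/probe.txt
```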


I just gave WordPress a try and it failed too, so the problem is with Gluster, but where it went wrong is a mystery.
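Since /mnt/gluster is already FUSE-mounted locally on every node, one way to separate "Gluster is broken" from "the volume plugin is broken" would be to bypass the plugin entirely and bind-mount the host path. A sketch against the stack file above (the container path /home/node/.n8n is an assumption based on n8n's default user folder):

services:
  n8n:
    volumes:
      # Plain bind mount of the locally mounted Gluster path;
      # no volume driver/plugin involved.
      - /mnt/gluster/vol1/n8n/n8ndata:/home/node/.n8n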

It has been a few years since I used Gluster, but I am not convinced the volume is being mounted correctly in the container.

It could be worth doing a smaller test: just mount the volume to a random folder in a container and see if that works.

Following your advice, I tried this YAML, which I had skipped over in some Gluster blog posts:



version: "3.4"

services:
  foo:
    image: alpine
    command: ping localhost
    networks:
      - net
    volumes:
      - vol1:/tmp

networks:
  net:
    driver: overlay

volumes:
  vol1:
    driver: glusterfs
    name: "gfs/vol1"


Turns out the problem was with the Gluster plugin, at least for the volume mounts.

I removed the plugin and, instead of the latest tag, used version 2.03. Now the volumes get mounted properly, and it works fine with the absolute paths commented out.

If I include those absolute paths (e.g. N8N_USER_FOLDER), it leads to the same error.
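That part makes sense, actually: DATA_FOLDER and N8N_USER_FOLDER are resolved inside the container, so pointing them at /mnt/gluster/... only works if that exact path is also mounted into the container; otherwise n8n tries to mkdir a path that doesn't exist in its own filesystem, which matches the ENOENT/EACCES in the first post. A sketch pairing the env var with a matching container-side mount (the container path /data is an arbitrary choice for illustration):

services:
  n8n:
    volumes:
      - n8ndata:/data          # gluster-backed volume, mounted *in the container*
    environment:
      - N8N_USER_FOLDER=/data  # container-side path, not the host path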
