Problem with file permissions in bind mounts (Windows SMB / macOS / Docker Desktop)

I have set up a webhook workflow that grabs job information from our production workflow and converts it via HTML and Gotenberg into a PDF. The PDF file is placed in an SMB-mounted bind mount. The problem is that Docker Desktop apparently keeps these files open, so I can't delete them on macOS or in the Windows environment. I already followed ChatGPT's instructions to set the mount in docker-compose.yml to ":delegated" or ":cached", but this didn't help.

When I kill the open file handle on my Windows server in Computer Management, I can delete it.

Do you have any advice on how to stop Docker Desktop from using these files? Or could I set up a node in this workflow that deletes the files after 60 seconds?

  • n8n version: 1.70.3
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Desktop on macOS
  • Operating system: macOS Monterey

I have now tried an Execute Command node with the following command:

rm {{ $json.fileName }}

It seems that even n8n itself can't get past Docker's running process?

Command failed: rm /data/autoprint/RicohA4Schwarz/XYZ.pdf
rm: can't stat '/data/autoprint/RicohA4Schwarz/XYZ.pdf': Permission denied
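Setting the permission error aside (that one comes from the host still holding the file), the delete-after-a-delay idea from above could be sketched like this. The path is a hypothetical stand-in for the `{{ $json.fileName }}` expression, which n8n resolves before the shell runs; the quoting protects paths containing spaces:

```shell
#!/bin/sh
# Sketch of what the Execute Command node could run once the lock is gone.
# FILE stands in for n8n's {{ $json.fileName }} expression (hypothetical path).
FILE="/tmp/autoprint-demo/example file.pdf"

mkdir -p "$(dirname "$FILE")" && touch "$FILE"  # simulate the generated PDF
sleep 1                                         # the real workflow would wait ~60s
rm -f -- "$FILE"                                # quoted, so spaces are safe
```

In the node itself the command would then be `rm -f -- "{{ $json.fileName }}"`, ideally after a Wait node, rather than the unquoted `rm` above.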

I have now been able to see on my Windows server that my macOS user is still accessing these files. When I search for the process in the terminal on macOS, "Apple Virtualization Process" is displayed. That points to Docker Desktop, doesn't it? I have connected the shares as bind mounts; is this rather a case for a Docker volume? I can't quite make sense of the Docker documentation.
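To confirm which process holds a file open, `lsof` can be pointed at the mounted path on macOS. These are illustrative commands (the file path is just the example from the error message above), not something from the original thread:

```shell
# Which process has this particular file open?
lsof /Volumes/Autoprint/RicohA4Schwarz/XYZ.pdf

# Everything held open anywhere under the share (can be slow on large trees):
sudo lsof +D /Volumes/Autoprint
```

If the holder shows up as the Apple Virtualization framework process, the open handle belongs to Docker Desktop's Linux VM rather than to any macOS application.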

Hello @ToHo,

Can you share the workflow?

Here is the part with the writing process and my attempt to delete via the workflow itself. Currently I am running the workflow via FTP upload, and that works perfectly. But I want to understand why n8n or Docker Desktop is still holding these files and how I can solve that for upcoming workflows.

I always thought the biggest advantage of self-hosting would be that I can work directly on my own system. But apparently it's not that easy.

How are you attaching that path?

It is mounted as a bind mount via Docker Desktop / my docker-compose.yml file.

Can you provide the exact command or the entire compose config? (You can remove any sensitive info from it.)

Here's my docker-compose.yml content:

volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
  links:
    - postgres

services:
  postgres:
    image: postgres:16-alpine
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=sub.mydomain.com
      - N8N_PROTOCOL=http
      - N8N_PORT=5678
      - WEBHOOK_URL=https://sub.mydomain.com
      - N8N_SKIP_WEBHOOK_SSL_VERIFICATION=true
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - NODE_ENV=production
      - N8N_TRUSTED_PROXY_ADDRESSES=true
      - N8N_EXTERNALHOOKS_WAITING_TIMEOUT=5000
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_DIAGNOSTICS_ENABLED=false
      - N8N_PERSONALIZATION_ENABLED=false
      - N8N_ENCRYPTION_KEY
      - N8N_USER_MANAGEMENT_JWT_SECRET
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=user
      - N8N_N8N_BASIC_AUTH_PASSWORD=xyz
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - N8N_PUSH_BACKEND=websocket
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
      - /Users/n8n/Documents:/data/documents
      - /Volumes/HotFolder:/data/hotfolder:cached
      - /Volumes/01_folder:/data/01_folder:cached
      - /Volumes/ftp_w2p:/data/ftp_w2p:cached
      - /Volumes/Autoprint:/data/autoprint:cached
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  gotenberg:
    image: gotenberg/gotenberg:8
    container_name: gotenberg
    networks: ['demo']
    restart: unless-stopped
    ports:
      - "3000:3000"

I suppose you will need to mount the network path as a Docker volume and then attach that volume to the container, instead of bind-mounting it from the host.

Persistent storage in containers | Microsoft Learn

Another option is to create a volume that will be mapped to the network share:
docker volume create | Docker Docs

Example:
How to map LAN network share to Docker volume? - General - Docker Community Forums
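The approach from the Docker forum boils down to a CIFS-backed named volume, which the Linux VM inside Docker Desktop mounts directly instead of going through macOS file sharing. A rough sketch, where the server name, share name, and credentials are all placeholders:

```shell
# Create a named volume backed by the SMB/CIFS share (all values are placeholders).
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//fileserver.example.com/Autoprint \
  --opt o=username=svc_n8n,password=secret,vers=3.0,uid=1000,gid=1000 \
  autoprint_share

# Reference it in docker-compose.yml instead of the /Volumes bind mount:
#   services:
#     n8n:
#       volumes:
#         - autoprint_share:/data/autoprint
#   volumes:
#     autoprint_share:
#       external: true
```

Because the share is then mounted inside the container's VM under a single uid/gid, the file-handle and permission behavior no longer depends on the macOS SMB client.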
