Is it necessary to mount the n8n user config folder as a volume in queue mode + Postgres DB?

Hi!

We have a Docker Swarm with n8n running in queue mode. We followed the Docker Compose file provided in the official docs and combined it with the queue mode configuration + Redis + Postgres DB:

version: '3.7'

x-common-env: &common-env
  NODE_ENV: production
  N8N_ENCRYPTION_KEY: ${N8N__ENCRYPTION_KEY}
  EXECUTIONS_MODE: queue
  QUEUE_BULL_REDIS_HOST: redis
  DB_TYPE: postgresdb
  # … (redacted for brevity)

services:
  n8n-main:
    image: n8nio/n8n:1.35.0
    environment:
      <<: *common-env
    volumes:
      - n8n_data:/home/node/.n8n
      # The following server folder allows us to download a CSV file (for instance) from one worker node and process it in a different worker.
      # It also lets us share files between deployments and simplifies access to the folder for debugging without entering the Docker container.
      - /home/${our_user}/n8n_workflow_executions-shared_files:/home/node/n8n_workflow_executions-shared_files
    deploy:
      replicas: 1
      # … (redacted for brevity)
    healthcheck:
      # … (redacted for brevity)

  n8n-worker:
    image: n8nio/n8n:1.35.0
    command: worker
    environment:
      <<: *common-env
      # Specific configuration for workers
      QUEUE_HEALTH_CHECK_ACTIVE: "true"
      QUEUE_HEALTH_CHECK_PORT: 5678
    volumes:
      # Same purpose as in the n8n-main service
      - /home/${our_user}/n8n_workflow_executions-shared_files:/home/node/n8n_workflow_executions-shared_files
    deploy:
      replicas: ${N8N__WORKER_INSTANCES}
      # … (redacted for brevity)
    healthcheck:
      # … (redacted for brevity)

  redis:
    image: "redis:7.2.3"
    deploy:
      replicas: 1
      # … (redacted for brevity)
    healthcheck:
      # … (redacted for brevity)

volumes:
  n8n_data:
    external: true

However, we have some doubts:

  1. Is it really necessary to declare n8n_data, mapped to the user config directory /home/node/.n8n, as a volume in order to persist that information from the main node, given that:
    • We are setting DB_TYPE: postgresdb so that all execution and workflow data is stored in an external DB instead of a local SQLite database
    • We are configuring the encryption key through the N8N_ENCRYPTION_KEY environment variable
    • We are using the bind mount mapped to the server folder /home/${our_user}/n8n_workflow_executions-shared_files to share binary files between executions and nodes, instead of relying on the binaryData stored in the .n8n config folder
  2. If it is necessary, is it also necessary for the worker nodes? If so, should the volume be shared between them as in this other official example? (A sketch of what we mean is included after this list.)
  3. Why does it need to be declared as externally managed?
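
For reference, this is roughly how the worker service would look if the config volume were also mounted there. It is only a sketch reusing the volume name and paths from our file above; whether it is actually needed is exactly what we are asking:

services:
  n8n-worker:
    image: n8nio/n8n:1.35.0
    command: worker
    environment:
      <<: *common-env
    volumes:
      # Hypothetical: mount the same named volume as n8n-main so that workers
      # see the same /home/node/.n8n contents (encryption key, logs, ...)
      - n8n_data:/home/node/.n8n
      - /home/${our_user}/n8n_workflow_executions-shared_files:/home/node/n8n_workflow_executions-shared_files

One thing we are aware of: with the default local driver a named volume only exists on the node where the container runs, so actually sharing it between replicas on different Swarm nodes would need a shared storage driver (e.g. NFS).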

Thanks!

Information on your n8n setup

  • n8n version: 1.35.0
  • Database: PostgreSQL
  • n8n EXECUTIONS_MODE setting: queue
  • Running n8n via: Docker Swarm
  • Operating system: Ubuntu

Hey @JavierCane,

The .n8n folder also keeps the logs, so I would keep it persisted where possible so you can keep them between updates, just in case anything goes wrong. It doesn't need to be externally managed; our guide is just a guide and you are free to change it as you want.
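
If you would rather not pre-create the volume with docker volume create, you can let the stack manage it itself, something along these lines (just a sketch based on your file, not a requirement):

volumes:
  n8n_data:
    # No "external: true": Swarm creates the volume on the first deploy
    # of the stack and reuses it on subsequent deploys.
    driver: local

Keep in mind that in this case Swarm prefixes the volume name with the stack name, whereas an external volume is used under the exact name you declare.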