Persistent Storage Requirements

Describe the problem/error/question

I’m deploying n8n via Kubernetes and am unsure whether persistent storage is required.

We have a single instance of the n8n main app deployed, connected to an external Postgres DB.

From the docs ( Do I need persistent storage? ), I read that there is no need for persistent storage if we use an external database. Can someone confirm this?

Let’s say I start scaling and add workers. Do they need persistent storage? The required settings (encryption key, etc.) are set via environment variables.

If they do require persistent storage, does it have to be shared between the main instance and the workers, or can they each have their own storage?

Just trying to wrap my head around this part before fully deploying and onboarding users.

Information on your n8n setup

  • n8n version: Version 1.122.4
  • Database (default: SQLite): GCP Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): one main instance for now
  • Running n8n via (Docker, npm, n8n cloud, desktop app): kubernetes
  • Operating system:

Hi @Ron_Ballesteros,

I would highly recommend always using persistent storage for your n8n data folder. This is especially important to share between your main and worker containers when using queue mode. n8n version 2 adds a requirement to also set up task runners for Code nodes, and I believe these also use the shared data dir (I might be wrong).

Here is an example Docker Compose file using version 2 in queue mode:

services:
  n8n-db:
    image: postgres:16.1
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_PASSWORD=n8n
      - POSTGRES_USER=n8n
    volumes:
      - postgres-data:/var/lib/postgresql/data

  n8n-redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis-data:/data

  n8n-main:
    image: n8nio/n8n
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    volumes:
      - n8n-data:/home/node/.n8n
    ports:
      - 4567:5678
      - 5680:5680
    environment:
      - WEBHOOK_URL=http://localhost:4567
      - NODE_ENV=production
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_SECURE_COOKIE=true
      - EXECUTIONS_MODE=queue
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      - N8N_SKIP_AUTH_ON_OAUTH_CALLBACK=false
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      # Binary data configuration (filesystem mode for regular mode)
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0
      #  - N8N_LOG_LEVEL=debug
      - NODES_EXCLUDE=["n8n-nodes-base.localFileTrigger"]

  n8n-worker:
    image: n8nio/n8n
    restart: always
    command: worker --concurrency=6
    depends_on:
      - n8n-db
      - n8n-redis
      - n8n-worker-task-runner
    volumes:
      - n8n-data:/home/node/.n8n
    environment:
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=http://localhost:4567
      - N8N_HOST=localhost
      - N8N_SKIP_DB_INIT=true
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      - N8N_PROCESS=worker
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0
      # - N8N_LOG_LEVEL=debug
      - NODES_EXCLUDE=["n8n-nodes-base.localFileTrigger"]

  # Task runner for n8n-worker with Python support for v2
  n8n-worker-task-runner:
    image: n8nio/runners
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    volumes:
      # Shared volume for file access if needed
      - n8n-data:/home/node/.n8n

volumes:
  postgres-data:
  redis-data:
  n8n-data:

Fact-checking myself now: it doesn’t look like the workers need to have the data folder shared from the main instance. I’ll have to update my config and test it out.

Hi @Wouter_Nigrini - thanks for the insight! So, based on your last statement, the workers don’t need to share the same storage as the main instance?

I see support for AWS S3 buckets, but since I’m on GCP, I wonder whether GCS buckets would be supported. This would make things a lot easier for many k8s deployments IMO.

I checked now. As a bare minimum, you don’t seem to need it. HOWEVER, if you’re planning on installing community nodes, you’ll need to share the data directory between main and worker. I can’t confirm this at the moment.
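
If the shared data directory does turn out to be needed (e.g. for community nodes), one Kubernetes-side option is a ReadWriteMany PVC mounted by both the main and worker pods. This is only a sketch with assumed names: the claim name `n8n-shared-data` is made up, and the storage class must actually support ReadWriteMany (e.g. Filestore on GKE, EFS on EKS):

```yaml
# Hypothetical shared claim; requires a storage class that supports RWX.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Both deployments would then mount the claim at `/home/node/.n8n`, mirroring the shared `n8n-data` volume in the compose file above.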

Thanks for the input. I think I was able to make this work in a GKE cluster…

  • GCP Postgres
  • GCP Memorystore (external Redis)
  • GCS bucket (shared between the main instance and workers)
  • main instance (Deployment resource with persistent storage)
  • workers (Deployments with GCS bucket storage)

Everything seems to be working ok…
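
For anyone trying to reproduce the GCS bucket part on GKE: a pod can mount a bucket through the GCS FUSE CSI driver. A sketch, assuming the driver is enabled on the cluster, Workload Identity is configured, and a bucket named `n8n-binary-data` exists (the name and service account are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: n8n-worker
  annotations:
    gke-gcsfuse/volumes: "true"   # injects the GCS FUSE sidecar
spec:
  serviceAccountName: n8n         # KSA bound to a GCP SA with bucket access
  containers:
    - name: n8n
      image: n8nio/n8n
      volumeMounts:
        - name: gcs-data
          mountPath: /home/node/.n8n
  volumes:
    - name: gcs-data
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: n8n-binary-data   # illustrative bucket name
```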

That is great news!

Hello @Ron_Ballesteros and @Wouter_Nigrini,

I am in the same boat, trying to move an n8n Docker setup [n8n-main + n8n-runner ONLY] from a local machine to a Kubernetes cluster. I would be grateful for some insights on the following:

  1. In the sample docker-compose shown, I see n8n-main, n8n-worker, and n8n-runner. How is n8n-worker different from n8n-main? Also, how is this setup different from one without the worker service? Is this something to do with queue mode?

  2. I am trying to deploy the setup to Kubernetes. I am not running in queue mode at this point; I just need to get the existing Docker setup running on Kubernetes. In your opinion, will a PVC (AWS) be required in this case?

Hope to hear back! Thank you :slight_smile:

Hi @adarsh-lm,

  1. Yes, the worker is only needed if you’re planning to run in queue mode. This does have a performance advantage; see the YouTube comparison video below.
  2. If you’re asking whether you need a VPC for EKS (AWS’s Kubernetes service), then I believe yes, you will require a VPC.

@Wouter_Nigrini Not VPC but PVC (PersistentVolumeClaim) via AWS EBS.

I am reading about using Kubernetes node disk space to temporarily serve as storage for the filesystem binary data mode.

Can I functionally be OK with the trade-off of losing data on pod restarts/crashes if I do not have persistent storage at this point?

It is a PoC, so I want to avoid costs.
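
For a cost-free PoC along those lines, an `emptyDir` volume is the usual trade-off: it survives container restarts within a pod but is wiped when the pod is deleted or rescheduled. A sketch of the relevant pod-spec fragment (names are illustrative):

```yaml
# emptyDir: node-local scratch space, lost on pod deletion/rescheduling.
spec:
  containers:
    - name: n8n
      image: n8nio/n8n
      volumeMounts:
        - name: scratch
          mountPath: /home/node/.n8n
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi
```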

Hey there…

I am trying to deploy the setup to kubernetes. I am not running the setup in Queue Mode either at this point. just need to get the existing docker setup running on Kubernetes. will PVC (AWS) be required in this case according your opinion?

You can use the community Helm chart (or build your own manifests); the chart is a solid starting point.

At a minimum, you can run n8n as a single instance without enabling queue mode. In this setup, the main pod handles the UI/API, receives webhooks, and executes workflows itself with no separate worker deployment required.
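
A minimal sketch of that single-instance setup as a Kubernetes Deployment (no queue mode; the Postgres host value is a placeholder, and in practice the password belongs in a Secret):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n
          ports:
            - containerPort: 5678
          env:
            - name: DB_TYPE
              value: postgresdb
            - name: DB_POSTGRESDB_HOST
              value: my-external-postgres   # placeholder host
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_POSTGRESDB_USER
              value: n8n
```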

Persistence-wise, the primary source of truth is Postgres. As long as you’re using an external, persistent Postgres instance, your workflows, executions, and credentials are stored there and won’t be lost across pod restarts.

A PV for n8n itself is optional if you don’t rely on local filesystem storage. However, if you plan to store binary data locally, you should either attach a small PV or configure object storage (e.g., GCS/S3) for binary data persistence. A small PV is typically inexpensive and avoids issues with file-based features later on.
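
To illustrate the object-storage route: n8n can be pointed at S3-compatible storage for binary data via environment variables. The variable names below match the n8n docs as I know them, but verify them against your version (external S3 storage has been a licensed feature in some releases); the host and bucket values are placeholders:

```yaml
environment:
  - N8N_DEFAULT_BINARY_DATA_MODE=s3
  - N8N_EXTERNAL_STORAGE_S3_HOST=s3.us-east-1.amazonaws.com   # placeholder
  - N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=my-n8n-binary-data    # placeholder
  - N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=us-east-1
  - N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=<access-key>
  - N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=<secret>
```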


I’ll check out the Helm chart.

Currently we are using separate Dockerfiles and manifests, building on top of the images created from them.