We all know it: n8n is your ticket to the world of workflow automation, but the installation can sometimes cause more headaches than assembling IKEA furniture without instructions. That’s why I’ve prepared the setup process in three flavors for you.
They range from a simple SQLite setup for your first steps to a full-grown PostgreSQL installation for serious business. Deciding which one is right for you is like choosing ice cream – it depends on your appetite (and your future plans).
Prerequisites – What You’ll Need
For all variants, you’ll need:
- A cloud server (e.g. Hetzner – at least a CPX11, better a CPX21 for production environments)
- Ubuntu 20.04 LTS or newer
- Docker and Docker Compose (a quick version check follows below)
- A domain with a DNS record pointing to your server
Sounds like a lot? It’s not. The effort is worth it – promise.
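Not sure whether Docker and Compose are already in place? A quick sanity check on the server might look like this (the version numbers are just examples):
docker --version          # should print something like "Docker version 24.x"
docker compose version    # the Compose v2 plugin is needed for "docker compose up"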
Option 1: The SQLite Edition (For Beginners and Minimalists)
This variant is the VW Beetle among installations: reliable, easy to maintain, but not a race car.
What’s Inside?
- n8n: Your workflow tool (in a container)
- Caddy: Provides HTTPS encryption (in a container)
- SQLite: Database directly in the n8n container
Here’s What the Directory Structure Looks Like
/opt/n8n/
├── data/                     # Where your data lives
│   ├── caddy_config/         # Caddy needs this
│   │   └── Caddyfile         # This directs where traffic goes
│   └── local_files/          # Files that n8n can use
├── docker-compose.yml        # The heart of your installation
└── .env                      # Your environment variables
Let the Magic Begin
1. Create directories (copy, paste, done):
mkdir -p /opt/n8n/data/caddy_config
mkdir -p /opt/n8n/data/local_files
2. Create Docker volumes:
docker volume create caddy_data
docker volume create n8n_data
3. Create configuration files:
Now it gets exciting. Let’s first create the Docker Compose file:
services:
  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile
    networks:
      - n8n-network

  n8n:
    container_name: n8n
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE}
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
    networks:
      - n8n-network

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true

networks:
  n8n-network:
    driver: bridge
The .env file for all the secret things:
DOMAIN_NAME=example.com # Your domain
SUBDOMAIN=n8n # Your subdomain for n8n
DATA_FOLDER=/opt/n8n/data # Where the data lives
GENERIC_TIMEZONE=Europe/Berlin # Timezone - because you don't want to live in GMT
N8N_BASIC_AUTH_ACTIVE=true # Access control - no party without invitation
N8N_BASIC_AUTH_USER=admin # Your username
N8N_BASIC_AUTH_PASSWORD=**** # Your password (please don't use something simple!)
N8N_ENCRYPTION_KEY=**** # Encryption key (at least 32 characters long)
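If you still need a value for N8N_ENCRYPTION_KEY, OpenSSL can generate one for you – just one possible approach, any sufficiently long random string will do. And since the .env file now holds secrets, it makes sense to lock it down:
openssl rand -base64 32    # prints 32 random bytes, base64-encoded (about 44 characters)
chmod 600 /opt/n8n/.env    # secrets readable only by the file owner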
The Caddyfile (it doesn’t get simpler):
n8n.example.com {
    reverse_proxy n8n:5678 {
        flush_interval -1
    }
}
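Before starting anything, it can’t hurt to let Compose render the final configuration – this catches typos in the .env file early (purely optional):
cd /opt/n8n
docker compose config    # prints the resolved configuration with all variables substituted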
4. Start the machine:
cd /opt/n8n
docker compose up -d
Congratulations! You can now access n8n at https://n8n.example.com (or whatever your domain is).
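If the page doesn’t load right away, a quick look at the container status and the TLS handshake usually tells you more (the domain is just the example from above):
docker compose ps                 # both containers should show "running"
curl -I https://n8n.example.com   # should return an HTTP status once Caddy has obtained its certificate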
Limitations of the SQLite Version
This version is like a compact car: reliable for everyday use, but not built for high performance.
- Works well with up to about 5,000-10,000 workflow executions per day
- Handles 10-15 simultaneous workflows
- The database should stay under 5 GB (a quick size check follows below)
- SQLite doesn’t like concurrent write operations (like most of us)
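To keep an eye on the 5 GB mark, you can peek at the database file inside the volume – this assumes the default file name n8n uses for its SQLite database (database.sqlite):
docker exec n8n ls -lh /home/node/.n8n/database.sqlite   # shows the current size of the SQLite file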
Option 2: The PostgreSQL Edition (For Growing Businesses)
You’ve grown and need more power? Time for PostgreSQL. This variant is like an SUV – more power, more space, more possibilities.
What’s New?
Instead of SQLite, PostgreSQL now comes into play – a proper, full-grown database that can grow with you.
How to Get It Running:
1. Create directories and volumes:
mkdir -p /opt/n8n/data/caddy_config
mkdir -p /opt/n8n/data/local_files
cd /opt/n8n
docker volume create caddy_data
docker volume create n8n_data
docker volume create postgres_data
2. The Docker Compose file:
services:
  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile
    networks:
      - n8n-network

  n8n:
    container_name: n8n
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE}
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      # PostgreSQL configuration
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
      - DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
    depends_on:
      - postgres
    networks:
      - n8n-network

  postgres:
    container_name: postgres
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_USER=${DB_POSTGRESDB_USER}
      - POSTGRES_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      - POSTGRES_DB=${DB_POSTGRESDB_DATABASE}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - n8n-network

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true
  postgres_data:
    external: true

networks:
  n8n-network:
    driver: bridge
3. Extend .env file:
DOMAIN_NAME=example.com
SUBDOMAIN=n8n
DATA_FOLDER=/opt/n8n/data
GENERIC_TIMEZONE=Europe/Berlin
# Authentication
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=**** # Choose a secure password!
N8N_ENCRYPTION_KEY=**** # At least 32 characters, complex and unique
# PostgreSQL configuration
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=**** # Separate DB password, also choose securely!
DB_POSTGRESDB_DATABASE=n8n
4. Caddyfile remains the same:
n8n.example.com {
    reverse_proxy n8n:5678 {
        flush_interval -1
    }
}
5. Start containers:
docker compose up -d
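Whether n8n actually reached the database is easiest to see in the logs; if you want to talk to PostgreSQL directly, psql is already included in the postgres image (the user and database names below are the ones from the example .env):
docker compose logs -f n8n                              # should start without database connection errors
docker exec -it postgres psql -U n8n -d n8n -c '\dt'    # lists the tables n8n has created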
Benefits of the PostgreSQL Edition:
- Better support for concurrent access
- Higher reliability with extensive data volumes
- Better performance with high workloads
- Support for team functions (multiple users)
- Handles up to 50,000 workflow executions per day
Option 3: The Supabase Edition (For Outsourcing Enthusiasts)
Don’t want to manage the database yourself? Supabase does it for you. This is like a chauffeur service – you enjoy the ride without having to take the wheel.
Preparatory Steps:
1. Set up a Supabase account and create a project
   - Register at Supabase and create a new project
   - When creating your first project, you’ll need to set a database password – remember it well, you’ll need it later!
   - After the project has been created, you’ll need two important pieces of information:
     - The Connection String in Session Pooler format
     - The SSL certificate
   - How to find the Connection String:
     - Click on “Connect” in the top menu bar
     - In the popup that opens, scroll down to the “Session Pooler” section
     - Copy the displayed Connection String – it looks something like:
       postgresql://postgres.jhauaeagtyyroyjvtzsh:[YOUR-PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:5432/postgres
   - How to find the SSL certificate:
     - Go to “Project Settings”
     - Then to “Configuration” → “Database”
     - Under “SSL Configuration” you can download the SSL certificate
2. Create directory structure on the server:
mkdir -p /opt/n8n/data/caddy_config
mkdir -p /opt/n8n/data/certs
mkdir -p /opt/n8n/data/local_files
cd /opt/n8n
3. Transfer SSL certificate to the server:
The downloaded SSL certificate must be copied to the directory /opt/n8n/data/certs/ on your server (an scp example follows below).
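From your local machine this is a one-liner with scp – the certificate file name and the server address are placeholders, use whatever Supabase handed you and your actual server:
scp ./supabase-ca-certificate.crt root@your-server:/opt/n8n/data/certs/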
4. Create Docker volumes:
docker volume create caddy_data
docker volume create n8n_data
Configuration Files:
1. Analyze the Connection String:
Let’s take the example string: postgresql://postgres.jhauaeagtyyroyjvtzsh:[YOUR-PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:5432/postgres
Extract the following values:
- User: postgres.jhauaeagtyyroyjvtzsh (everything between :// and :)
- Password: the password you set during project creation
- Host: aws-0-eu-central-1.pooler.supabase.com (everything between @ and :5432)
- Port: 5432
- Database: postgres (everything after the last /)
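If you’d like to verify the extracted values before wiring them into n8n, you can test the connection from any machine with the PostgreSQL client installed (project ID, region and password are the placeholders from the example above):
psql "postgresql://postgres.jhauaeagtyyroyjvtzsh:[YOUR-PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:5432/postgres?sslmode=require" -c 'select version();'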
2. Create .env file:
# Domain and paths
DOMAIN_NAME=example.com
SUBDOMAIN=n8n
DATA_FOLDER=/opt/n8n/data
GENERIC_TIMEZONE=Europe/Berlin
# Authentication
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=**** # Choose a secure password!
N8N_ENCRYPTION_KEY=**** # At least 32 characters, secure and unique
# Supabase PostgreSQL configuration
DB_POSTGRESDB_DATABASE=postgres
DB_POSTGRESDB_HOST=aws-0-eu-central-1.pooler.supabase.com
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_USER=postgres.jhauaeagtyyroyjvtzsh # Your Supabase project ID
DB_POSTGRESDB_PASSWORD=**** # The password you set during project creation
DB_POSTGRESDB_SCHEMA=public
3. Create Docker Compose file:
services:
  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile
    networks:
      - n8n-network

  n8n:
    container_name: n8n
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE}
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      # Supabase PostgreSQL configuration
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
      - DB_POSTGRESDB_HOST=${DB_POSTGRESDB_HOST}
      - DB_POSTGRESDB_PORT=${DB_POSTGRESDB_PORT}
      - DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      - DB_POSTGRESDB_SCHEMA=${DB_POSTGRESDB_SCHEMA}
      - DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files
      - ${DATA_FOLDER}/certs:/opt/custom-certificates
    networks:
      - n8n-network

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true

networks:
  n8n-network:
    driver: bridge
4. Create Caddyfile:
n8n.example.com {
    reverse_proxy n8n:5678 {
        flush_interval -1
    }
}
5. Start containers:
docker compose up -d
6. Check logs:
docker compose logs -f
Watch for error messages regarding the database connection. If everything is configured correctly, n8n should start successfully and connect to the Supabase database.
IMPORTANT: Enable automatic backups directly in your Supabase dashboard under “Project Settings” → “Database” → “Backups”!
Which Version Is Right for You?
| Option | For whom? | Workload | Benefits | Drawbacks |
|---|---|---|---|---|
| SQLite | Beginners, small businesses | Up to 10k workflows/day | Simple, quick setup | Limited scalability |
| PostgreSQL | Growing businesses | Up to 50k workflows/day | More reliable for high load, team-capable | Higher resource requirements |
| Supabase | Companies without DB expertise | Up to 50k workflows/day | No DB maintenance needed, managed service | External dependency, costs |
Maintenance: Updates and Care
Update containers (for all variants):
cd /opt/n8n
docker compose pull
docker compose down
docker compose up -d
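After the update, a short check shows whether everything came back up cleanly (entirely optional, just a good habit):
docker compose ps                    # all containers should be "running" again
docker compose logs --tail 50 n8n    # the startup log usually mentions the running n8n version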
Monitor logs:
# All container logs
docker compose logs
# Only n8n logs
docker compose logs n8n
# Only Caddy logs
docker compose logs caddy
Troubleshooting: The Usual Suspects
Container doesn’t start:
docker compose logs
Encryption key problems:
If you see errors like “Mismatching encryption keys”:
docker compose down
docker volume rm n8n_data
docker volume create n8n_data
docker compose up -d
CAUTION: This deletes everything stored in the n8n_data volume – in the SQLite setup that includes your workflows and credentials, not just the configuration. Export your workflows beforehand (a sketch follows below).
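n8n ships its own CLI for exports, so a backup before such a drastic step only takes a moment – the /files path below is the local_files folder that is already mounted into the container, so the exports end up on the host as well:
docker exec n8n n8n export:workflow --all --output=/files/workflows-backup.json
docker exec n8n n8n export:credentials --all --output=/files/credentials-backup.json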
Permission problems:
If permission warnings appear for configuration files, check if your environment variables include:
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
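Whether the variable actually made it into the container can be checked in seconds:
docker compose exec n8n printenv | grep N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS   # should print ...=true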