Hi everyone,
I’m reporting a persistent crash loop with my self-hosted n8n Docker instance. The evidence strongly suggests that the n8n application is ignoring the N8N_TRUST_PROXY environment variable, leading to a fatal ValidationError.
I have found a temporary workaround at the reverse proxy level that confirms the issue lies within how n8n is processing proxy headers.
The “Smoking Gun”: Key Finding
The crash is triggered by the X-Forwarded-For header being sent from my reverse proxy (Caddy), and it is completely resolved if I configure Caddy to remove this header before forwarding requests to n8n.
- Failing Caddyfile (Causes Crash):

```
n8n.mydomain.com {
    reverse_proxy localhost:5678
}
```

- Working Caddyfile (Workaround):

```
n8n.mydomain.com {
    reverse_proxy localhost:5678 {
        header_up -X-Forwarded-For
    }
}
```

This confirms that the header is the trigger and that the n8n process is incorrectly handling it, despite being configured to trust the proxy.
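For anyone who wants to isolate the trigger without Caddy in the loop, a request carrying the header sent straight to the container port should reproduce the crash (a sketch I have not run myself; 203.0.113.7 is just a placeholder client IP):

```bash
# Should crash the container if the X-Forwarded-For header alone is the trigger.
curl -H "X-Forwarded-For: 203.0.113.7" http://localhost:5678/

# Control: the same request without the header should be served normally.
curl http://localhost:5678/
```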
Environment Details
- n8n Version: Started on 1.115.3, then upgraded to latest (as of Oct 20, 2025) with no change in behavior.
- Database: Default SQLite.
- Running n8n via: Docker Compose.
- Operating System: Linux (Debian/Ubuntu) on a Google Compute Engine VM.
- Reverse Proxy: Caddy.
The Error Log
The container crashes immediately upon receiving a web request, showing this repeated error:
```
ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/ for more information.
    at Object.xForwardedForHeader (...)
    ...
```
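If a fuller stack trace is useful, it can be captured from the container while the crash loop is running (<container_id> is a placeholder, as elsewhere in this post):

```bash
# Capture the most recent crash output, including the full stack trace.
sudo docker logs --tail 100 <container_id>

# Or follow the log live while sending a request through the proxy.
sudo docker logs -f <container_id>
```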
Detailed Troubleshooting Steps Performed
The core issue is a direct contradiction: the application claims ‘trust proxy’ is false, even when the environment variable is proven to be correctly set.
1. Set N8N_TRUST_PROXY: Added N8N_TRUST_PROXY=true and later N8N_TRUST_PROXY='127.0.0.1' to the docker-compose.yml environment section. Neither had any effect.

2. Forced Container Re-creation: Consistently used sudo docker-compose down followed by sudo docker-compose up -d --force-recreate to ensure the new settings were being applied. The error persisted every time.

3. Verified Environment Variable Inside Container: I connected to the running container and confirmed the variable was correctly set:

   sudo docker exec <container_id> printenv | grep N8N_TRUST_PROXY

   Result: N8N_TRUST_PROXY='127.0.0.1'

   This is the critical evidence: Docker is setting the variable correctly, but the n8n process is ignoring it (one further way to inspect the value is sketched after this list).

4. Reset Persistent Configuration: To rule out a “stuck” config file, I stopped the container and renamed the persistent /home/node/.n8n/config file. On restart, n8n generated a new config file, but the crash immediately returned.

5. Upgraded n8n Version: Changed the image tag in docker-compose.yml to n8nio/n8n:latest and force-recreated the container. The exact same error occurred on the latest version.
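One further check worth adding to the above (a sketch, assuming node is on PATH inside the official image, which it should be since n8n is a Node.js application): printing the variable exactly as the Node.js process receives it. Compose passes list-style environment entries verbatim, so JSON.stringify would expose any stray quote characters that ended up inside the value itself:

```bash
# Print the raw value as seen by a Node.js process in the container;
# JSON.stringify makes any literal quote characters in the value visible.
sudo docker exec <container_id> node -e \
  'console.log(JSON.stringify(process.env.N8N_TRUST_PROXY))'
```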
Final docker-compose.yml
This is the configuration used during the final troubleshooting steps.
```yaml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    environment:
      - N8N_TRUST_PROXY='127.0.0.1'
      - N8N_RATE_LIMIT_ENABLED=false # This was also ignored
      - DB_SQLITE_POOL_SIZE=10
      - N8N_RUNNERS_ENABLED=true
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
      - N8N_GIT_NODE_DISABLE_BARE_REPOS=true
      - N8N_HOST=n8n.mydomain.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://n8n.mydomain.com/
    volumes:
      - ./n8n_data:/home/node/.n8n
    restart: unless-stopped
```
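A related sanity check against this file (standard Compose functionality, nothing n8n-specific): docker-compose config prints the fully resolved configuration, which shows the exact string each environment entry will carry into the container, quotes and all:

```bash
# Render the resolved compose file and inspect the parsed environment entry.
sudo docker-compose config | grep N8N_TRUST_PROXY
```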
Given that the environment variables are confirmed to be set correctly inside the container, but the application behaves as if they are not, this appears to be a bug in how n8n is ingesting its configuration at runtime.
Thanks for taking a look.