Execute Node never ends

My n8n installation

  • installed with Cloudron
  • Docker
  • n8n Version: 0.130.0

My issue

I’m following the [course level one](1. Getting Data From the Data Warehouse | Docs).
Here is what you see in the image:

  • LEFT: When I click “Execute node”, the whole workflow starts and it never ends (I let it run for more than 10 minutes).
  • RIGHT: But if I leave the node and stop the workflow manually, then go back into the node, all the data is there.

My expectation

As explained in this video tutorial, I expect:

  • when I click Execute Node, only that node is executed, not the whole workflow

If I continue and add the Airtable node, it also never ends.
Worse, I can’t even stop the workflow (unless I stop the Docker container).
When I try to stop the workflow, it says:

Problem stopping execution

There was a problem stopping the execution:
The execution ID “9” could not be found.

Hey @JOduMonT!

Did you follow the steps mentioned here when setting up n8n? It seems to be an issue with your n8n deployment.

@harshil1712: I opened an issue on the Cloudron side to verify: n8n: execute-node-never-end | Cloudron Forum

Thank you @JOduMonT for creating a post on the Cloudron forum. I checked your post, and you mentioned that some environment variables are missing. Can you please share which variables are missing? That will help us understand the issue better.

Hi :wave: I packaged the app for Cloudron, so I can help with what the defaults in the .env file and the default config are.

This sample.env is the default that gets copied and used if none exists.

# Set the logging level to 'debug'
export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
export EXECUTIONS_DATA_SAVE_ON_ERROR=all
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
export N8N_LOG_LEVEL=info

In addition to the sample.env initial state, the start.sh (aka the docker-entrypoint) adds this to the .env file / the n8n config.

#!/bin/bash

set -eu

echo "=> Generating nginx.conf"
sed -e "s,##HOSTNAME##,${CLOUDRON_APP_DOMAIN}," \
    /app/pkg/nginx.conf  > /run/nginx.conf

echo "=> Ensure directories"
mkdir -p /run/nginx /app/data/.cache /app/data/.n8n /app/data/custom /app/data/output /app/data/root

if [[ ! -f "/app/data/.env" ]]; then
  cp -r /app/code/sample.env /app/data/.env
fi

if [[ -f "/app/data/.env" ]]; then
    export $(egrep -v '^#' /app/data/.env | xargs) &> /dev/null
fi

CONFIG_FILE="/app/data/.n8n/app-config.json"

if [[ ! -f $CONFIG_FILE ]]; then
  echo "=> Creating config file"
  echo "{}" > $CONFIG_FILE
fi

echo "=> Loading configuration"
export VUE_APP_URL_BASE_API="${CLOUDRON_APP_ORIGIN}/"
export WEBHOOK_TUNNEL_URL="${CLOUDRON_APP_ORIGIN}/"

cat $CONFIG_FILE | \
jq '.database.type="postgresdb"' | \
jq '.database.postgresdb.host=env.CLOUDRON_POSTGRESQL_HOST' | \
jq '.database.postgresdb.port=env.CLOUDRON_POSTGRESQL_PORT' | \
jq '.database.postgresdb.user=env.CLOUDRON_POSTGRESQL_USERNAME' | \
jq '.database.postgresdb.password=env.CLOUDRON_POSTGRESQL_PASSWORD' | \
jq '.database.postgresdb.database=env.CLOUDRON_POSTGRESQL_DATABASE' \
> /tmp/app-config.json && mv /tmp/app-config.json $CONFIG_FILE

echo "=> Setting permissions"
chown -R cloudron:cloudron /run /app/data

echo "=> Starting N8N"
exec /usr/bin/supervisord --configuration /etc/supervisor/supervisord.conf --nodaemon -i N8N
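As an aside, the `export $(egrep -v '^#' /app/data/.env | xargs)` line above word-splits values, so any quoted value containing spaces gets mangled. A sketch of a more robust loader (the `load_env` helper name is mine, not part of the package):

```shell
# Sketch (not the packaged script): load an env file safely.
# `set -a` auto-exports every variable assigned while it is active,
# and `source` respects quoting, unlike the `xargs` pattern.
load_env() {
    local file="$1"
    [[ -f "$file" ]] || return 0
    set -a
    # shellcheck disable=SC1090
    source "$file"
    set +a
}

# Example: load_env /app/data/.env
```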

So we use a mixture of both the env file and the config file.
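Incidentally, the six chained jq processes in start.sh can be collapsed into a single invocation. A sketch, assuming jq is available (the `configure_db` helper name is mine, not part of the package):

```shell
# Sketch: one jq invocation instead of six chained processes.
# configure_db FILE rewrites FILE in place, reading the credentials
# from the CLOUDRON_POSTGRESQL_* environment variables via jq's `env`.
configure_db() {
    local cfg="$1" tmp
    tmp="$(mktemp)"
    jq '.database.type = "postgresdb"
        | .database.postgresdb.host     = env.CLOUDRON_POSTGRESQL_HOST
        | .database.postgresdb.port     = env.CLOUDRON_POSTGRESQL_PORT
        | .database.postgresdb.user     = env.CLOUDRON_POSTGRESQL_USERNAME
        | .database.postgresdb.password = env.CLOUDRON_POSTGRESQL_PASSWORD
        | .database.postgresdb.database = env.CLOUDRON_POSTGRESQL_DATABASE' \
        "$cfg" > "$tmp" && mv "$tmp" "$cfg"
}
```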

Also, by default we define these env vars globally in the Dockerfile.

ENV N8N_CUSTOM_EXTENSIONS="/app/data/custom" \
    N8N_USER_FOLDER="/app/data" \
    N8N_CONFIG_FILES="/app/data/.n8n/app-config.json" \
    N8N_LOG_OUTPUT="console"

EDIT:

For debugging, if anyone wants to have a look at it, you can visit my.demo.cloudron.io with username and password cloudron (the same credentials work for the proxy auth and for logging in to n8n).


Thanks for sharing with the community @BrutalBirdie


Is there anything obvious in the default vars that would explain why this issue occurs?

Is it possible it’s related to NGINX?
I mean, maybe NGINX cuts, holds, or caches the connection.
n8n suggests having this configuration:

server {
   listen 443 ssl;
   listen [::]:443 ssl;

   server_name cloud.example.com;

   location / {
      proxy_pass http://localhost:5678;
      proxy_set_header Connection '';
      proxy_http_version 1.1;
      chunked_transfer_encoding off;
      proxy_buffering off;
      proxy_cache off;
   }
}

@BrutalBirdie: I built a Docker setup at home with linuxserver/swag as the nginx proxy and put n8n behind it.
Then I exported my workflow from the n8n on my Cloudron and imported it into my n8n at home, and when I execute it, the workflow runs and stops by itself.
So the issue is definitely on your side.

my config

.env for docker-compose

#N8N_BASIC_AUTH_ACTIVE=true
#N8N_BASIC_AUTH_USER
#N8N_BASIC_AUTH_PASSWORD
N8N_HOST=sub.domain.tld
N8N_PORT=5678
N8N_PROTOCOL=https
NODE_ENV=production
WEBHOOK_TUNNEL_URL=https://sub.domain.tld/
GENERIC_TIMEZONE=Asia/Bangkok

n8n.yml (aka docker-compose.yml)

version: "3"
services:
  n8n:
    image: n8nio/n8n
    container_name: n8n
    env_file:
    - $ENVFILE/n8n
    volumes:
    - $VOLUMES/config/n8n:/home/node/.n8n
    restart: $RESTART
    ports:
    - 5678:5678

nginx reverse-proxy config (nginx/proxy-confs/n8n.subdomain.conf)

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name sub.*;

    include /config/nginx/ssl.conf;
    client_max_body_size 0;
    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        proxy_pass http://n8n:5678;
    }
}

/config/nginx/proxy.conf

# Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Proxy Connection Settings
proxy_buffers 32 4k;
proxy_connect_timeout 240;
proxy_headers_hash_bucket_size 128;
proxy_headers_hash_max_size 1024;
proxy_http_version 1.1;
proxy_read_timeout 240;
proxy_redirect http:// $scheme://;
proxy_send_timeout 240;
# Proxy Cache and Cookie Settings
proxy_cache_bypass $cookie_session;
#proxy_cookie_path / "/; Secure"; # enable at your own risk, may break certain apps
proxy_no_cache $cookie_session;

# Proxy Header Settings
proxy_set_header Connection $connection_upgrade;
proxy_set_header Early-Data $ssl_early_data;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Real-IP $remote_addr;

As I mentioned on the Cloudron Forum, it could be linked to iptables; Cloudron has rules to throttle connections (setup/start/cloudron-firewall.sh · master · cloudron / box · GitLab).
I think we can safely close this issue/question and I’ll continue with the Cloudron team.
Thanks

I have been able to reproduce the issue on my non-Cloudron instance.
As soon as I use Postgres as the database, I get the same behavior.

Here is my new env file; everything else remains the same (linuxserver/swag + the same nginx reverse-proxy config):

N8N_BASIC_AUTH_ACTIVE='true'
N8N_BASIC_AUTH_USER='user'
N8N_BASIC_AUTH_PASSWORD='CPasdeLeMauxdePasse'
N8N_HOST='sub.domain.tld'
N8N_PORT='5678'
N8N_PROTOCOL='https'
NODE_ENV='production'
WEBHOOK_TUNNEL_URL='https://sub.domain.tld/'
GENERIC_TIMEZONE='Asia/Bangkok'

DB_TYPE='postgresdb'
DB_POSTGRESDB_HOST='postgres'
DB_POSTGRESDB_DATABASE='n8n'
DB_POSTGRESDB_USER='n8n'
DB_POSTGRESDB_PASSWORD='CPasdeLeMauxdePasse'

The same thing happens with a MySQL DB.

I recently deployed n8n with Postgres and here are my configs (it uses Traefik)

docker-compose.yml file

version: '3.1'

services:

  postgres:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
      - POSTGRES_NON_ROOT_USER
      - POSTGRES_NON_ROOT_PASSWORD
    volumes:
      - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh

  traefik:
    image: "traefik"
    restart: always
    command:
      - "--api=true"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=true"
      - "--certificatesresolvers.mytlschallenge.acme.email=${SSL_EMAIL}"
      - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
    volumes:
      - ~/.n8n/letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
  n8n:
    image: n8nio/n8n
    restart: always
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.entrypoints=websecure
      - traefik.http.routers.n8n.tls.certresolver=mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect=true
      - traefik.http.middlewares.n8n.headers.STSSeconds=315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter=true
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true
      - traefik.http.middlewares.n8n.headers.forceSTSHeader=true
      - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME}
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true
      - traefik.http.middlewares.n8n.headers.STSPreload=true
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER
      - N8N_BASIC_AUTH_PASSWORD
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_TUNNEL_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    ports:
      - 5678:5678
    links:
      - postgres
    volumes:
      - ~/.n8n:/home/node/.n8n
    # Wait 5 seconds to start n8n to make sure that PostgreSQL is ready
    # when n8n tries to connect to it
    command: /bin/sh -c "sleep 5; n8n start"
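An alternative to the fixed 5-second sleep, sketched under the assumption that your Compose version honors `depends_on` conditions: let Compose wait until Postgres actually reports ready via a healthcheck.

```yaml
# Sketch (assumes Compose support for depends_on conditions):
services:
  postgres:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 5s
      timeout: 5s
      retries: 10
  n8n:
    image: n8nio/n8n
    depends_on:
      postgres:
        condition: service_healthy
```

This removes the race entirely instead of hoping 5 seconds is enough on a slow host.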

init-data.sh file

#!/bin/bash
set -e;


if [ -n "${POSTGRES_NON_ROOT_USER:-}" ] && [ -n "${POSTGRES_NON_ROOT_PASSWORD:-}" ]; then
	psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
		CREATE USER ${POSTGRES_NON_ROOT_USER} WITH PASSWORD '${POSTGRES_NON_ROOT_PASSWORD}';
		GRANT ALL PRIVILEGES ON DATABASE ${POSTGRES_DB} TO ${POSTGRES_NON_ROOT_USER};
	EOSQL
else
	echo "SETUP INFO: No Environment variables given!"
fi

.env file

POSTGRES_USER=username
POSTGRES_PASSWORD=password
POSTGRES_DB=n8ndb

POSTGRES_NON_ROOT_USER=username
POSTGRES_NON_ROOT_PASSWORD=password

N8N_BASIC_AUTH_USER=username
N8N_BASIC_AUTH_PASSWORD=password
SUBDOMAIN=subdomain
DOMAIN_NAME=domain.tld
GENERIC_TIMEZONE=Europe/Berlin
[email protected]

I am using the docker-compose file from here: n8n/docker/compose/withPostgres at master · n8n-io/n8n · GitHub

I hope this helps you

This is strange, here is the debug output when executing a workflow manually:

Jul 28 23:41:31 172.18.0.1 - - [28/Jul/2021:21:41:31 +0000] "POST /rest/workflows/run HTTP/1.1" 200 28 "https://test.cloudron.dev/workflow/1" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
Jul 28 23:41:31
Jul 28 23:41:31 Loading configuration overwrites from:
Jul 28 23:41:31 - /app/data/.n8n/app-config.json
Jul 28 23:41:31
Jul 28 23:41:31 2021-07-28T21:41:31.691Z | debug | Received child process message of type start for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.696Z | verbose | Initializing n8n sub-process {"pid":62,"workflowId":"1","file":"WorkflowRunnerProcess.js","function":"runWorkflow"}
Jul 28 23:41:31 2021-07-28T21:41:31.703Z | verbose | Workflow execution started {"workflowId":"1","file":"WorkflowExecute.js","function":"processRunExecutionData"}
Jul 28 23:41:31 2021-07-28T21:41:31.704Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.704Z | debug | Start processing node "Cron" {"node":"Cron","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.704Z | debug | Executing hook (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteBefore"}
Jul 28 23:41:31 2021-07-28T21:41:31.705Z | debug | Send data of type "executionStarted" to editor-UI {"dataType":"executionStarted","sessionId":"woupbridiqe","file":"Push.js","function":"send"}
Jul 28 23:41:31 2021-07-28T21:41:31.705Z | debug | Running node "Cron" started {"node":"Cron","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.705Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.705Z | debug | Executing hook on node "Cron" (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"nodeExecuteBefore"}
Jul 28 23:41:31 2021-07-28T21:41:31.705Z | debug | Send data of type "nodeExecuteBefore" to editor-UI {"dataType":"nodeExecuteBefore","sessionId":"woupbridiqe","file":"Push.js","function":"send"}
Jul 28 23:41:31 2021-07-28T21:41:31.711Z | debug | Running node "Cron" finished successfully {"node":"Cron","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.711Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.711Z | debug | Executing hook on node "Cron" (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"nodeExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.711Z | debug | Start processing node "CoinGecko" {"node":"CoinGecko","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.711Z | debug | Send data of type "nodeExecuteAfter" to editor-UI {"dataType":"nodeExecuteAfter","sessionId":"woupbridiqe","file":"Push.js","function":"send"}
Jul 28 23:41:31 2021-07-28T21:41:31.712Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.712Z | debug | Running node "CoinGecko" started {"node":"CoinGecko","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.712Z | debug | Executing hook on node "CoinGecko" (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"nodeExecuteBefore"}
Jul 28 23:41:31 2021-07-28T21:41:31.712Z | debug | Send data of type "nodeExecuteBefore" to editor-UI {"dataType":"nodeExecuteBefore","sessionId":"woupbridiqe","file":"Push.js","function":"send"}
Jul 28 23:41:31 2021-07-28T21:41:31.802Z | debug | Running node "CoinGecko" finished successfully {"node":"CoinGecko","workflowId":"1","file":"WorkflowExecute.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.803Z | verbose | Workflow execution finished successfully {"workflowId":"1","file":"WorkflowExecute.js","function":"processSuccessExecution"}
Jul 28 23:41:31 2021-07-28T21:41:31.803Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.803Z | debug | Executing hook on node "CoinGecko" (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"nodeExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.803Z | debug | Send data of type "nodeExecuteAfter" to editor-UI {"dataType":"nodeExecuteAfter","sessionId":"woupbridiqe","file":"Push.js","function":"send"}
Jul 28 23:41:31 2021-07-28T21:41:31.806Z | debug | Received child process message of type processHook for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.806Z | debug | Executing hook (hookFunctionsSave) {"executionId":"3","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.806Z | debug | Save execution data to database for execution ID 3 {"executionId":"3","workflowId":"1","finished":true,"stoppedAt":"2021-07-28T21:41:31.803Z","file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.809Z | debug | Received child process message of type end for execution ID 3. {"executionId":"3","file":"WorkflowRunner.js"}
Jul 28 23:41:31 2021-07-28T21:41:31.815Z | debug | Executing hook (hookFunctionsPush) {"executionId":"3","sessionId":"woupbridiqe","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.815Z | debug | Save execution progress to database for execution ID 3 {"executionId":"3","workflowId":"1","file":"WorkflowExecuteAdditionalData.js","function":"workflowExecuteAfter"}
Jul 28 23:41:31 2021-07-28T21:41:31.815Z | debug | Send data of type "executionFinished" to editor-UI {"dataType":"executionFinished","sessionId":"woupbridiqe","file":"Push.js","function":"send"}

and when you press the Stop button:
(screenshot: the “Problem stopping execution” error)

With this debug log:

Jul 28 23:42:09 ERROR RESPONSE
Jul 28 23:42:09 Error: The execution id "3" could not be found.
Jul 28 23:42:09 at /usr/local/node-14.17.0/lib/node_modules/n8n/dist/src/Server.js:1270:27
Jul 28 23:42:09 at processTicksAndRejections (internal/process/task_queues.js:95:5)
Jul 28 23:42:09 at async /usr/local/node-14.17.0/lib/node_modules/n8n/dist/src/ResponseHelper.js:86:26
Jul 28 23:42:09 172.18.0.1 - - [28/Jul/2021:21:42:09 +0000] "POST /rest/executions-current/3/stop HTTP/1.1" 500 380 "https://test.cloudron.dev/workflow/1" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
Jul 28 23:42:09 172.18.0.1 - - [28/Jul/2021:21:42:09 +0000] "GET /rest/executions/3 HTTP/1.1" 200 4027 "https://test.cloudron.dev/workflow/1" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"

My docker-compose at home, like probably everyone’s, is a little different; I made it modular so every app/service has its own -compose.yml,

but if I put them all together, it looks like this:

version: "3"
services:
  postgres:
    image: postgres:13-alpine
    container_name: postgres
    env_file:
    - $ENVFILE/postgres
    volumes:
    - $VOLUMES/postgres:/var/lib/postgresql/data
    restart: $RESTART
  n8n:
    image: n8nio/n8n
    container_name: n8n
    env_file:
    - $ENVFILE/n8n
    volumes:
    - $VOLUMES/config/n8n:/home/node/.n8n
    restart: $RESTART
  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    env_file:
    - $ENVFILE/linuxserver
    - $ENVFILE/swag
    - $ENVFILE/cloudflared
    volumes:
    - $VOLUMES/config/swag:/config
    ports:
    - 443:443
    restart: $RESTART

env files

postgres

POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=

n8n

N8N_BASIC_AUTH_ACTIVE='true'
N8N_BASIC_AUTH_USER='user'
N8N_BASIC_AUTH_PASSWORD='password'
N8N_HOST='sub.domain.tld'
N8N_PORT='5678'
N8N_PROTOCOL='https'
NODE_ENV='production'
WEBHOOK_TUNNEL_URL='https://sub.domain.tld/'
GENERIC_TIMEZONE='Asia/Bangkok'

#DB_TYPE='postgresdb'
#DB_POSTGRESDB_HOST='postgres'
#DB_POSTGRESDB_DATABASE='n8n'
#DB_POSTGRESDB_USER='n8n'
#DB_POSTGRESDB_PASSWORD=''

DB_TYPE='mysqldb'
DB_MYSQLDB_HOST='percona'
DB_MYSQLDB_DATABASE='n8n'
DB_MYSQLDB_USER='n8n'
DB_MYSQLDB_PASSWORD=''

@harshil1712 everything works well when I use SQLite, but not when I use MySQL or PostgreSQL.

Sorry to drag this on longer than it should go, but as @harshil1712 assumed, I use the docker-compose.yml.

I opened port 5678 and accessed n8n directly on that port instead of going through NGINX,
and n8n + MySQL or Postgres works well once the proxy (NGINX) is out of the picture.

And now it works well even behind NGINX if I use:

   proxy_pass http://localhost:5678;
   proxy_set_header Connection '';
   proxy_http_version 1.1;
   chunked_transfer_encoding off;
   proxy_buffering off;
   proxy_cache off;

as mentioned here: Nginx configuration - #7 by Arnaud


I am glad you found a solution!

Thank you for sharing the solution, I am sure it will be helpful to others :slight_smile:

Odd, since this is part of the default configuration we ship for the n8n app’s nginx.

nginx.conf

daemon off;
worker_processes auto;
pid /run/nginx.pid;
error_log stderr;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

   ##
   # Basic Settings
   ##

   sendfile on;
   tcp_nopush on;
   tcp_nodelay on;
   keepalive_timeout 65;
   types_hash_max_size 2048;

   include /etc/nginx/mime.types;
   default_type application/octet-stream;

   client_body_temp_path /run/client_body;
   proxy_temp_path /run/proxy_temp;
   fastcgi_temp_path /run/fastcgi_temp;
   scgi_temp_path /run/scgi_temp;
   uwsgi_temp_path /run/uwsgi_temp;

   ##
   # Logging Settings
   ##

   access_log /dev/stdout;

   ##
   # Gzip Settings
   ##

   gzip on;
   gzip_disable "msie6";

   ##
   # Virtual Host Configs
   ##

   server {
      listen 3000;

      server_name ##HOSTNAME##;

      location = /healthcheck {
         return 200;
      }

      location / {
          proxy_set_header Host $http_host;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Real-IP $remote_addr;

          # This part is especially N8N-specific: https://community.n8n.io/t/nginx-configuration/111/7
          proxy_set_header Connection '';
          proxy_http_version 1.1;
          chunked_transfer_encoding off;
          proxy_buffering off;
          proxy_cache off;

          proxy_read_timeout 120;
          proxy_connect_timeout 10;
          proxy_pass http://127.0.0.1:5678/;
          proxy_redirect http://127.0.0.1:5678/ /;
      }
   }
}

@BrutalBirdie

So then I don’t know what the issue is on the Cloudron side,
but with the linuxserver nginx proxy (SWAG) this is the solution.

I just fixed this issue; with the next update, Execute Node works as well.
The problem was that each app has its own nginx proxy settings at the root level, and the n8n app also had its own nginx config inside the app.
And these did not play well together.

So now the nginx inside the app is gone, and only the host-level nginx is active and working as intended.
