UI issue when executing workflows - execution runs in the backend but the UI does not update unless the page is reloaded, then breaks again

Information on your n8n setup

  • n8n version: 2.14.2
  • Database (default: SQLite): postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker portainer
  • Operating system: ubuntu vps

the PayloadTooLargeError usually means your http node is grabbing way too much data and the workflow is getting stuck mid-execution. the ui update lag is probably because the websocket connection keeps dropping when the payload gets too large. couple things: (1) check the N8N_PAYLOAD_SIZE_MAX env var in your docker compose, you might need to bump it, (2) the task runner logs show multiple runner registrations which is fine, but check if there’s a broker timeout happening. (3) the “deleteBefore.toISOString is not a function” error on cleanup might be causing the ui to hang after a few runs. ran into something similar when we were stress-testing with large json pulls: ended up batching the data before processing and splitting the workflow into smaller steps. worth checking your node payload output in the execution logs to see how big your json actually is.
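The batching idea mentioned above can be sketched as a plain function (batch size and field names are illustrative; inside an n8n Code node you would apply something like this to `$input.all()`):

```javascript
// Split a large array of items into fixed-size batches so each
// downstream step handles a bounded payload instead of one huge blob.
function toBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Example: 2500 records in batches of 500 -> 5 batches
const records = Array.from({ length: 2500 }, (_, i) => ({ id: i }));
const batches = toBatches(records, 500);
console.log(batches.length); // 5
```

n8n's built-in Loop Over Items (Split in Batches) node does the same job without custom code, if you prefer to keep the workflow declarative.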

Hi @111100001

Your logs point to a few things. The main one causing the UI freeze:

Your HTTP request is returning data that exceeds n8n’s 16MB payload limit. The repeated PayloadTooLargeError confirms this. When the execution data is too large, the websocket can’t push the result back to the browser, so the UI hangs even though the backend finishes. Add this to your n8n environment:

- N8N_PAYLOAD_SIZE_MAX=256 # value is in MiB (default 16)

Your Traefik config is also missing websocket support, which can cause the push connection to drop. Add these labels to your n8n service:

- traefik.http.middlewares.n8n-ws.headers.customrequestheaders.Connection=Upgrade

- traefik.http.middlewares.n8n-ws.headers.customrequestheaders.Upgrade=websocket

Two other things in your logs:

  • deleteBefore.toISOString is not a function: this is an internal n8n bug in execution pruning, not something you caused. Worth reporting on GitHub.

  • Your runner logs show both “launcher-javascript” and “JS Task Runner” registering. In external mode only launcher-javascript should appear. This is a known issue but shouldn’t directly cause your UI problem.

Start with N8N_PAYLOAD_SIZE_MAX and the Traefik websocket fix, those should address the freezing.

Let me know if this works :crossed_fingers:

@111100001
I’d still try to reduce how much data is flowing through the workflow. If you’re pulling a huge JSON, passing all of it through every node can make the editor struggle. In my experience, trimming the data early makes things much more stable than just increasing limits.
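Trimming early can be sketched as a small helper (field names here are hypothetical; in an n8n Code node you would run this over the items coming out of the HTTP Request node):

```javascript
// Keep only the fields that later nodes actually need, dropping
// everything else before the data fans out through the workflow.
function trimItems(items, keepFields) {
  return items.map((item) => {
    const slim = {};
    for (const field of keepFields) {
      if (field in item) slim[field] = item[field];
    }
    return slim;
  });
}

// Example: drop a large unused field before passing data on
const raw = [{ id: 1, name: "a", blob: "x".repeat(10000) }];
const slim = trimItems(raw, ["id", "name"]);
console.log(JSON.stringify(slim)); // [{"id":1,"name":"a"}]
```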

Totally agree with this — data reduction is usually more scalable and stable than just bumping limits. Trimming the JSON early in the workflow also makes debugging easier when things do go wrong.

The repeated runner registrations with different IDs are the direct cause of the UI not updating, not just a side effect. Here is what is happening:

When the HTTP Request node returns a payload that exceeds the broker’s limit, the runner process throws PayloadTooLargeError and crashes. The broker restarts a new runner, which registers with a fresh ID. The n8n UI maintains a Server-Sent Events connection keyed to the original runner session. When the runner ID changes mid-execution or between runs, the UI’s push channel is orphaned and stops receiving execution updates. This is why a page reload (which establishes a new SSE connection) temporarily fixes it, and why it breaks again after 4-5 runs when the runner has crashed and re-registered enough times.

There are two separate payload limits to set. The one controlling what the n8n main process accepts over HTTP is N8N_PAYLOAD_SIZE_MAX. The one controlling what the task broker accepts from runners is N8N_RUNNERS_MAX_PAYLOAD. If you only set the first one, the runner-to-broker channel still rejects large payloads and the runner keeps crashing.

Add both to your n8n container env:

N8N_PAYLOAD_SIZE_MAX=200
N8N_RUNNERS_MAX_PAYLOAD=209715200

That is 200MB for each; note that N8N_PAYLOAD_SIZE_MAX is specified in MiB while N8N_RUNNERS_MAX_PAYLOAD is specified in bytes. Keep them equivalent or the smaller one becomes the bottleneck.
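After editing the compose file, it is worth confirming the running container actually picked the values up, since env changes only apply once the container is recreated (service name `n8n` assumed):

```shell
# Recreate the n8n service so compose re-reads the environment block
docker compose up -d --force-recreate n8n

# Confirm both limits are visible inside the running container
docker exec n8n printenv | grep -E 'N8N_PAYLOAD_SIZE_MAX|N8N_RUNNERS_MAX_PAYLOAD'
```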

On the Traefik side, SSE connections need two things. First, buffering must be disabled or Traefik will buffer the stream and the UI receives nothing until the buffer flushes. Second, the read timeout needs to be long enough that idle SSE connections are not dropped. Add these to your n8n router middleware or service config:

middlewares:
  n8n-headers:
    headers:
      customResponseHeaders:
        X-Accel-Buffering: "no"

And on the service:

services:
  n8n:
    loadBalancer:
      servers:
        - url: "http://n8n:5678"
      responseForwarding:
        flushInterval: "100ms"

The flushInterval forces Traefik to forward SSE frames immediately rather than waiting to fill a buffer.

The deleteBefore.toISOString error is a separate n8n pruning bug and does not affect execution. You can ignore it for now unless it is generating a lot of log noise.

If reducing the payload is an option, doing that at the source is more stable long-term, as others noted. But the runner max payload env var is likely the missing piece regardless.

Hope that helps!

Thanks for the detailed breakdown — the SSE connection issue with the runner ID changes makes complete sense. This is exactly the kind of thing that would cause the UI freeze pattern they’re seeing. The dual payload limits advice (N8N_PAYLOAD_SIZE_MAX + N8N_RUNNERS_MAX_PAYLOAD) is crucial, good catch on matching them or having the smaller one become the bottleneck.

thank you all for replying.

i have tried your solutions but unfortunately the issue persists.

sorry to hear the solutions didn’t work. couple of debugging steps that might help:

  1. check your n8n container logs after one of the failed runs — specifically grep for PayloadTooLargeError or any runner crash messages. if you’re still seeing those, the payload limits aren’t being picked up correctly (might need to restart the container after env changes).

  2. what’s the actual size of the JSON your HTTP node is pulling? you could add a debug node right after the HTTP request to log Object.keys(data).length to see how many fields you’re dealing with.

  3. if the logs don’t show payload errors, the issue might be elsewhere — could be a runner broker connection issue or a client-side websocket problem. in that case sharing your full docker compose and recent logs (after trying the env changes) would help narrow it down.
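The size check from step 2 can be sketched as a small debug helper (plain-node form; inside an n8n Code node you would pass it the HTTP Request node's output):

```javascript
// Rough size check: serialize the payload and report its byte size
// and top-level field count, so you can compare against the
// configured payload limits.
function payloadStats(data) {
  const json = JSON.stringify(data);
  return {
    bytes: Buffer.byteLength(json, "utf8"),
    fields: Object.keys(data).length,
  };
}

const stats = payloadStats({ a: 1, b: "hello" });
console.log(stats); // { bytes: 19, fields: 2 }
```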

don’t give up yet, this is usually fixable once we find the actual bottleneck.

i have been working on n8n and my large workflows and it seems the issue stopped after tinkering with Traefik labels on the service.

here is my full updated docker compose file:

services:
  n8n:
    container_name: n8n
    image: n8nio/n8n:latest
    restart: always
    labels:
      - traefik.enable=true
      - traefik.docker.network=traefik_default
      
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.entrypoints=websecure
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.tls.certresolver=le
      
      
      - traefik.http.middlewares.n8n-sse.headers.customresponseheaders.X-Accel-Buffering=no
      - traefik.http.routers.n8n.middlewares=n8n-sse@docker
      
      
      - traefik.http.services.n8n.loadbalancer.server.port=5678
      
      # optional but safe
      - traefik.http.services.n8n.loadbalancer.responseforwarding.flushinterval=100ms

      

    environment:
      - N8N_PAYLOAD_SIZE_MAX=200
      - N8N_RUNNERS_MAX_PAYLOAD=209715200
      
  
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
      - NODE_FUNCTION_ALLOW_BUILTIN=*

      
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${GENERIC_TIMEZONE}
      - N8N_MIGRATE_FS_STORAGE_PATH=true
      
      #runners env
      - N8N_RUNNERS_AUTH_TOKEN=superlongrandomstring2387423987423


      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_BROKER_PORT=5679
     
    networks:
      - traefik_default
      


    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files

  task-runners:
    image: n8nio/runners:latest
    container_name: n8n-runners
    environment:
      - N8N_PAYLOAD_SIZE_MAX=200
      - N8N_RUNNERS_MAX_PAYLOAD=209715200
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
      - NODE_FUNCTION_ALLOW_BUILTIN=*
      - N8N_RUNNERS_STDLIB_ALLOW=*
     
      - N8N_RUNNERS_PYTHON_ENABLE=true
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n:5679
      - N8N_RUNNERS_AUTH_TOKEN=superlongrandomstring2387423987423

    networks:
      - traefik_default
    depends_on:
      - n8n
    volumes:
      - /home/ubuntu/images/n8n-task-runners-custom-image/n8n-task-runners.json:/etc/n8n-task-runners.json
networks:
  traefik_default:
    external: true
    
volumes:
  n8n_data:

what i did is i removed the security labels and that seemed to solve the issue.

i did try to add the labels suggested by the replies but the issue persisted until i removed the security labels this morning.

the PayloadTooLargeError does not appear in the logs when i pull the large json data. i have no idea what caused it to appear in the logs in my post above, but i have not seen it since it was pointed out here.

here is the workflow and the json url. it is ok to share since it’s public data

Glad you found the fix! Yeah, the Traefik SSE configuration is often the missing piece when you’re running n8n behind a reverse proxy. Removing those overly-strict security labels probably gave the connection enough breathing room. Appreciate you coming back and sharing the solution — helps the next person dealing with the same issue.

ok well,

now the issue is back. i don’t know what happened, i have been working on a workflow with small data size

docker logs (with timestamps):

2026-04-05T08:32:26.364575606Z Last session crashed
2026-04-05T08:32:36.369023958Z Initializing n8n process
2026-04-05T08:32:37.841064821Z n8n ready on ::, port 5678
2026-04-05T08:32:37.879314826Z n8n Task Broker ready on 0.0.0.0, port 5679
2026-04-05T08:32:37.922851431Z 
2026-04-05T08:32:37.922876471Z There is a deprecation related to your environment variables. Please take the recommended actions to update your configuration:
2026-04-05T08:32:37.922879711Z  - N8N_RUNNERS_ENABLED -> Remove this environment variable; it is no longer needed.
2026-04-05T08:32:37.922882271Z 
2026-04-05T08:32:37.941836533Z [license SDK] Skipping renewal on init: license cert is not due for renewal
2026-04-05T08:32:40.517817152Z Version: 2.14.2
2026-04-05T08:32:40.520393572Z Building workflow dependency index...
2026-04-05T08:32:40.564568941Z Start Active Workflows:
2026-04-05T08:32:40.771335724Z Activated workflow "output structured credit card data" (ID: gAC2AvjElQHrIcb6)
2026-04-05T08:32:40.855541192Z Finished building workflow dependency index. Processed 2 draft workflows, 0 published workflows.
2026-04-05T08:32:40.862074641Z Activated workflow "insert into categories table" (ID: Wy8723eQVkrqi35q)
2026-04-05T08:32:40.862413484Z 
2026-04-05T08:32:40.862418444Z Editor is now accessible via:
2026-04-05T08:32:40.862421204Z https://n8n.damtrf.cfd
2026-04-05T08:32:42.731591871Z Registered runner "launcher-python" (fcbd6fad31a9818c) 
2026-04-05T08:32:42.733484205Z Registered runner "launcher-javascript" (f842b4d9f228c90d) 
2026-04-05T08:32:50.509008109Z ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/ for more information.
2026-04-05T08:32:50.509042669Z     at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:371:13)
2026-04-05T08:32:50.509046269Z     at Object.wrappedValidations.<computed> [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:685:22)
2026-04-05T08:32:50.509049309Z     at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:788:20)
2026-04-05T08:32:50.509052629Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:849:32
2026-04-05T08:32:50.509054869Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:830:5 {
2026-04-05T08:32:50.509057149Z   code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
2026-04-05T08:32:50.509059149Z   help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
2026-04-05T08:32:50.509084629Z }
2026-04-05T08:32:50.517273090Z ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/ for more information.
2026-04-05T08:32:50.517301330Z     at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:371:13)
2026-04-05T08:32:50.517304810Z     at Object.wrappedValidations.<computed> [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:685:22)
2026-04-05T08:32:50.517307690Z     at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:788:20)
2026-04-05T08:32:50.517310050Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:849:32
2026-04-05T08:32:50.517312330Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:830:5 {
2026-04-05T08:32:50.517314691Z   code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
2026-04-05T08:32:50.517316851Z   help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
2026-04-05T08:32:50.517318851Z }
2026-04-05T08:32:50.522171767Z (node:7) [DEP0060] DeprecationWarning: The `util._extend` API is deprecated. Please use Object.assign() instead.
2026-04-05T08:32:50.522196607Z (Use `node --trace-deprecation ...` to show where the warning was created)
2026-04-05T08:32:50.528920617Z ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/ for more information.
2026-04-05T08:32:50.528950017Z     at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:371:13)
2026-04-05T08:32:50.528953697Z     at Object.wrappedValidations.<computed> [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:685:22)
2026-04-05T08:32:50.528956697Z     at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:788:20)
2026-04-05T08:32:50.528959137Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:849:32
2026-04-05T08:32:50.528961297Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:830:5 {
2026-04-05T08:32:50.528976978Z   code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
2026-04-05T08:32:50.528979218Z   help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
2026-04-05T08:32:50.528981218Z }
2026-04-05T08:32:51.185462877Z ValidationError: The 'X-Forwarded-For' header is set but the Express 'trust proxy' setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/ for more information.
2026-04-05T08:32:51.185495237Z     at Object.xForwardedForHeader (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:371:13)
2026-04-05T08:32:51.185498837Z     at Object.wrappedValidations.<computed> [as xForwardedForHeader] (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:685:22)
2026-04-05T08:32:51.185501877Z     at Object.keyGenerator (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:788:20)
2026-04-05T08:32:51.185504237Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:849:32
2026-04-05T08:32:51.185506557Z     at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/express-rate-limit/dist/index.cjs:830:5 {
2026-04-05T08:32:51.185508717Z   code: 'ERR_ERL_UNEXPECTED_X_FORWARDED_FOR',
2026-04-05T08:32:51.185510637Z   help: 'https://express-rate-limit.github.io/ERR_ERL_UNEXPECTED_X_FORWARDED_FOR/'
2026-04-05T08:32:51.185512677Z }
2026-04-05T08:32:58.701863937Z (node:7) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
2026-04-05T08:33:02.174240377Z Registered runner "JS Task Runner" (vbSqtev6eeUlpXGXuNZmd) 
2026-04-05T08:33:17.544012328Z Registered runner "launcher-javascript" (00ffb94e647210ef) 
2026-04-05T08:53:06.283120089Z Registered runner "JS Task Runner" (7N1td38Izc_50WPz_uymf) 
2026-04-05T08:53:21.627781312Z Registered runner "launcher-javascript" (a056070d97511034) 
2026-04-05T10:03:11.522515941Z (node:7) [DEP0169] DeprecationWarning: `url.parse()` behavior is not standardized and prone to errors that have security implications. Use the WHATWG URL API instead. CVEs are not issued for `url.parse()` vulnerabilities.
2026-04-05T10:04:00.065359002Z Registered runner "JS Task Runner" (fSfdyZwlIkVmhsdASIIFo) 
2026-04-05T10:04:15.431448264Z Registered runner "launcher-javascript" (74acf1138b472c2f) 
2026-04-05T13:08:03.824832290Z Only running or waiting executions can be stopped and 2499 is currently success
2026-04-05T13:08:44.228763143Z Registered runner "JS Task Runner" (PdEX-eZqyV0ia8ZjXUdHO) 
2026-04-05T13:08:59.593576109Z Registered runner "launcher-javascript" (24ffeda694644d9e) 
2026-04-05T13:11:10.110853899Z Registered runner "JS Task Runner" (E-g7oMYpaXXVuIbU7Kcs4) 
2026-04-05T13:11:25.469165415Z Registered runner "launcher-javascript" (1d7bbd95a684729a) 
2026-04-05T13:14:57.217401732Z Only running or waiting executions can be stopped and 2504 is currently success

I know that’s frustrating. The fact that it worked and then came back suggests something reset—either the Traefik container restarted or one of your env vars didn’t persist. Couple quick things to check:

  1. Did your Traefik or n8n container restart at any point? Check docker logs traefik and look for restarts around the timestamp when the issue came back.

  2. Verify the Traefik labels are still there and the X-Accel-Buffering header is being applied. You can check this by making a request and inspecting the response headers (curl -i will show them).

  3. One thing I noticed in your logs: you have some X-Forwarded-For validation errors from express-rate-limit. That’s usually a sign Traefik is forwarding headers but n8n isn’t configured to trust the proxy. Try adding this env var to your n8n container (it tells n8n how many reverse proxies sit in front of it):
    N8N_PROXY_HOPS=1
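The header check from step 2 can be a one-liner (domain is a placeholder for your own):

```shell
# -s silent, -I headers only: confirm the Traefik middleware is
# adding the X-Accel-Buffering header to responses from n8n
curl -sI https://n8n.example.com/ | grep -i 'x-accel-buffering'
```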

This might be interfering with the push connection. Let me know if the container restart or the proxy setting helps!
