I’m facing a critical issue that started yesterday without any changes to my infrastructure. My n8n UI keeps showing the “Offline” message, and the logs are being spammed with `BadRequestError: request aborted` errors.
Describe the problem/error/question
The UI is unstable and goes “Offline” every few seconds while working on workflows.
The Primary node logs are filled with the error below.
This is not a browser/extension issue (reproducible across environments).
What is the error message (if any)?
```
BadRequestError: request aborted
    at socketOnClose (node:_http_server:839:3)
    at IncomingMessage.onAborted (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/raw-body/index.js:245:10)
    at Socket.emit (node:events:520:35)
    at IncomingMessage.emit (node:events:508:28)
    at TCP.<anonymous> (node:net:346:12)
    at IncomingMessage._destroy (node:_http_incoming:221:10)
    at _destroy (node:internal/streams/destroy:122:10)
    at IncomingMessage.destroy (node:internal/streams/destroy:84:5)
    at abortIncoming (node:_http_server:845:9)
```
Hey, I’ve seen this pattern before: `BadRequestError: request aborted` combined with the UI going offline in scaling mode almost always points to one of three things.
1. Redis connection instability (most likely)
In scaling mode, your Primary relies on Redis for worker coordination and pub/sub. If Redis is dropping connections intermittently, the Primary loses its heartbeat loop and the UI reflects that as “Offline.” Check:
Redis connection pool exhaustion (`redis-cli INFO clients`)
Redis `timeout` setting vs. n8n’s keep-alive config
Redis memory pressure (evictions kicking in once `maxmemory` is reached)
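If you want to script those checks, here’s a rough sketch with `redis-cli` (the host/port are placeholders; point them at whatever Redis instance your n8n scaling setup uses):

```shell
# Placeholder endpoint -- substitute your own Redis host/port.
REDIS="redis-cli -h redis.internal -p 6379"

# Client connections: connected_clients creeping toward the limit, or a
# growing rejected_connections counter, indicates pool exhaustion.
$REDIS INFO clients | grep -E 'connected_clients|blocked_clients'
$REDIS INFO stats | grep rejected_connections

# Idle-connection timeout: 0 means "never"; a low value here can
# silently drop n8n's long-lived pub/sub connections.
$REDIS CONFIG GET timeout

# Memory pressure: evicted_keys > 0 means maxmemory is being hit and
# the eviction policy is actively discarding keys.
$REDIS INFO memory | grep -E 'used_memory_human|maxmemory_human|maxmemory_policy'
$REDIS INFO stats | grep evicted_keys
```

Run it a few times while the UI is flapping; a sudden jump in `connected_clients` or non-zero `evicted_keys` narrows this down fast.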
2. Postgres connection saturation
With BinaryMode set to database and Pruning enabled, your Postgres instance could be getting hammered — especially if pruning jobs are running alongside active executions. The aborted requests happen when the HTTP layer times out waiting for DB responses. Check `pg_stat_activity` for long-running queries or lock contention.
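A few `psql` one-liners for that check. The connection parameters are placeholders, and the 30-second threshold is just a starting point:

```shell
# Placeholder connection params -- substitute your own.
PSQL="psql -h pg.internal -U n8n -d n8n"

# Long-running queries: anything non-idle for more than 30 seconds.
$PSQL -c "SELECT pid, now() - query_start AS runtime, state, left(query, 80)
          FROM pg_stat_activity
          WHERE state <> 'idle' AND now() - query_start > interval '30 seconds'
          ORDER BY runtime DESC;"

# Sessions waiting on locks (pruning vs. active executions contention).
$PSQL -c "SELECT pid, wait_event_type, wait_event, left(query, 80)
          FROM pg_stat_activity
          WHERE wait_event_type = 'Lock';"

# Connection headroom: active backends vs. the configured ceiling.
$PSQL -c "SELECT count(*) AS active, current_setting('max_connections') AS max
          FROM pg_stat_activity;"
```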
3. Reverse proxy / load balancer timeout mismatch
If you’re behind Nginx or a cloud LB, the default timeout (often 60s) can prematurely close connections that n8n’s SSE (Server-Sent Events) stream needs to stay open — which is exactly what keeps the UI “alive.” This would explain why it started suddenly without infrastructure changes on your end (upstream config change or auto-scaling event).
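For reference, a minimal sketch of what SSE-friendly settings look like in Nginx; the location, upstream name, and the 300s values are assumptions to adapt to your setup:

```nginx
# Sketch only -- upstream name and timeouts are placeholders.
location / {
    proxy_pass http://n8n_upstream;
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    # The UI's push channel is a long-lived stream; the 60s default
    # read timeout kills it and the UI flips to "Offline".
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;

    # Buffering holds SSE events back; disable it for this app.
    proxy_buffering off;
}
```

If you’re on a cloud LB instead, look for the equivalent idle/response timeout setting rather than these directives.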
Quick diagnostic steps:
`redis-cli MONITOR` while reproducing the issue — watch for dropped commands (keep it brief: MONITOR itself adds significant load on a busy instance)
Check Postgres max_connections vs. active connections at time of failure
Confirm your proxy upstream timeout is set to at least 300s
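The first two steps can be combined into a crude capture loop you leave running while reproducing the flapping (hostnames and credentials are placeholders; Ctrl-C to stop):

```shell
# Snapshot Redis client count and active Postgres backends every 5s,
# timestamped, so you can line the log up with the UI going "Offline".
while true; do
  date
  redis-cli -h redis.internal INFO clients | grep connected_clients
  psql -h pg.internal -U n8n -d n8n -tA \
    -c "SELECT count(*) FROM pg_stat_activity WHERE state <> 'idle';"
  sleep 5
done | tee n8n-offline-diagnostics.log
```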
What does your Redis memory usage look like, and are you behind a reverse proxy?
I build and maintain production n8n infrastructure for enterprise clients — happy to dig deeper if you want a second pair of eyes on this. Portfolio: neuralic-ai.vercel.app