n8n service deployed to Railway crashes every few hours

Describe the problem/error/question

Hello, we’re experiencing an issue with n8n crashing quite often now, every few hours, without any obvious cause. It looks like it works fine for some time and then suddenly disconnects or something. It looks like it can’t reconnect to Redis. This started occurring 2-3 days ago, and we didn’t even upgrade our services or anything; after we did upgrade and redeploy the services, it has been working fine. I’ve also seen some problems with the queue, as we had multiple workflows that ran for something like 100 hours and then just stopped.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.8.3
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

@SamoM225 This sounds like a Redis connection stability issue.

Quick questions:

  1. How are you running n8n?
  2. What does your Redis setup look like?
  3. Which n8n version are you on?
  4. Can you share the exact error logs from n8n?
  5. Were there any network changes in the last 2-3 days?
  6. Any server restarts, updates, or maintenance?

Redis connection issues like this are usually fixable with proper timeout configuration and resource allocation!

Through a Dockerfile on Railway. I’m pretty sure that back in the day I was able to deploy the whole n8n stack with DB, Redis + worker through a button on GH.
2.8.3
well, there’s not much of an error log, but let me do that:
```
Worker started execution 157079 (job 130758)
Worker finished execution 157079 (job 130758)
[Redis client] read ECONNRESET
Queue errored
read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
[Redis client] read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
[Redis client] read ECONNRESET
Queue errored
read ECONNRESET
[Redis client] read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
[Redis client] read ECONNRESET
[Redis client] read ECONNRESET
[Redis client] read ECONNRESET
Queue errored
read ECONNRESET
[Redis client] read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
Worker started execution 157081 (job 130759)
Worker finished execution 157081 (job 130759)
[Redis client] read ECONNRESET
Queue errored
read ECONNRESET
[Redis client] read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
[Redis client] read ECONNRESET
Lost Redis connection. Trying to reconnect in 1s… (0s/10s)
Recovered Redis connection
[Redis client] read ECONNRESET
[Redis client] read ECONNRESET
Unable to connect to Redis after trying to connect for 10s
Exiting process due to Redis connection error
```
No network changes. I’ve managed to find that this issue (with the queues) has been happening since last Friday, when Railway had a big outage, but the first crash was on Wednesday this week. Before that, nothing unusual happened.
And by the way, I see that Redis couldn’t be connected to, but when I scroll back I can clearly see that the service was able to log into Redis after a few tries, so I don’t think it’s configured badly.

hey, that ECONNRESET stuff is classic Railway networking flakiness honestly; their internal networking can get weird with long-lived Redis connections. Try bumping your QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD env var to something like 30000 (it’s in milliseconds) and see if that helps it survive the occasional blip. Also worth checking whether Railway’s Redis instance has memory limits you’re hitting; their shared Redis can be pretty constrained.
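
For context: the “(0s/10s)” counter and “trying to connect for 10s” in your log suggest the threshold defaults to 10 seconds of failed reconnecting before the process exits. Here’s a rough sketch (my own illustration, not n8n’s actual code) of why raising the threshold to 30000 gives the worker more ~1s retries to survive a blip:

```python
# Illustration only: models a retry budget like the one visible in the log,
# where the worker retries roughly every 1 s and exits once the cumulative
# wait passes QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD (milliseconds).

def reconnect_attempts(retry_interval_s: float, threshold_ms: int) -> int:
    """How many reconnect attempts fit before the threshold is exhausted."""
    budget_s = threshold_ms / 1000
    attempts = 0
    elapsed = 0.0
    while elapsed < budget_s:
        elapsed += retry_interval_s
        attempts += 1
    return attempts

print(reconnect_attempts(1.0, 10_000))  # default 10s budget -> 10 tries
print(reconnect_attempts(1.0, 30_000))  # 30s budget -> 30 tries
```

So with 30000 the process rides out a ~25s networking blip that would kill it at the default.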

Thanks for the reply. However, I can’t seem to find the var you mentioned; should I add it to the primary deployment or the worker?

Add it to the primary deployment; that’s where the queue connection gets established. If you’re using separate workers, you might need it there too, but start with the primary and see if it stabilizes.
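
In case it’s useful, here’s roughly what that looks like (the variable name is real n8n config; the 30000 value is just the suggestion from above):

```
# Railway → service → Variables tab (add to the primary n8n service first,
# and to the worker service too if the primary alone doesn't stabilize it)
QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD=30000   # ms of failed Redis reconnects before the process exits
```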