Timeout Error in multiple nodes, across multiple workflows

Describe the issue/error/question

I’m seeing multiple workflows fail because nodes are failing with a Timeout error.
The nodes failing so far: Telegram, Airtable, Discord

Error:

{"message":"timeout of 300000ms exceeded","name":"Error","stack":"Error: timeout of 300000ms exceeded\n    at createError (/usr/local/lib/node_modules/n8n/node_modules/axios/lib/core/createError.js:16:15)\n    at RedirectableRequest.handleRequestTimeout (/usr/local/lib/node_modules/n8n/node_modules/axios/lib/adapters/http.js:303:16)\n    at RedirectableRequest.emit (node:events:527:28)\n    at RedirectableRequest.emit (node:domain:475:12)\n    at Timeout.<anonymous> (/usr/local/lib/node_modules/n8n/node_modules/follow-redirects/index.js:164:12)\n    at listOnTimeout (node:internal/timers:559:17)\n    at processTimers (node:internal/timers:502:7)","code":"ECONNABORTED"}

The workflows have a default timeout of at least an hour.

Also, this is the first time I’ve seen this error.

Information on your n8n setup

  • n8n version: 0.179.0
  • Database you’re using (default: SQLite): Postgres
  • Running n8n with the execution process [own(default), main]: main
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hey @shrey-42, this sounds like either your system or your network (or possibly your database) isn’t quite working as expected or doesn’t have enough resources.

Could you try running your failed workflow in a different environment and verify whether the errors still persist there? You can spin up n8n using Docker and a local SQLite database by running docker run -it --rm --name n8n -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n:0.179.0.

This would also be a great cross-check for the other error you have reported here.

It’s also worth mentioning that running n8n in main mode means that all workflows are executed in the main n8n process. So if one of your executions is particularly demanding, it could block other executions. You could verify whether this mode is the bottleneck here by (temporarily) switching to own mode.
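For reference, a minimal sketch of how that switch could look, assuming you start n8n with a plain docker run command as in the test setup above (the EXECUTIONS_PROCESS variable accepts main or own on the 0.x releases); if you use docker-compose, the same variable would go under the service’s environment section:

```bash
# Test run: same container as before, but with each workflow execution
# running in its own process instead of the main n8n process.
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -e EXECUTIONS_PROCESS=own \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n:0.179.0
```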

Hi @MutedJam, I actually came across this issue on GitHub already.

So, even though there might be an issue in my environment (that I need to diagnose), the node timeout bug (or feature) is also an issue in itself.

As Docker isn’t really my area of expertise, it would be a great help if you could also point me to some resource/link on how to debug the n8n container’s interactions with the Docker system.

Because I’m suddenly seeing all these issues in my instance (along with multiple container restarts every hour), I suspect that the relationship between Docker and n8n is somehow disturbed. This does not seem to affect some of the other containers running on the same Docker host (Postgres, Traefik, Portainer, etc.).

Also, I would like to explore own mode, but my understanding is that it would demand more resources from the environment and might actually hinder a large number of simultaneous executions. Do correct me if I’m wrong.
Thanks.

So own mode can lead to a higher overall system load (because n8n will use more than one CPU core), but it also means that a single, very demanding execution doesn’t block everything else happening in n8n.

If the switch to own mode doesn’t improve the situation, my first step in debugging your Docker environment would be what I have suggested already: isolate the problematic workflow and run it in a different environment to verify whether the problem lies with your specific environment or with the workflow in general.
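As for poking at the container itself, the standard Docker CLI already answers a lot of the usual questions. A rough sketch, assuming your container is simply named n8n (swap in whatever name your setup actually uses):

```bash
# How often has the container restarted, and was the last exit an out-of-memory kill?
docker inspect --format 'Restarts: {{.RestartCount}}  OOMKilled: {{.State.OOMKilled}}  ExitCode: {{.State.ExitCode}}' n8n

# Snapshot of current CPU and memory usage.
docker stats --no-stream n8n

# Logs from the last couple of hours, including anything printed right before a restart.
docker logs --since 2h n8n

# Lifecycle events (start, die, oom, restart) from the last few hours up to now.
docker events --filter container=n8n --since 3h --until "$(date +%s)"
```

If OOMKilled comes back as true (or the exit code is 137), the hourly restarts would point towards a memory limit being hit rather than anything n8n-specific, and that could also explain requests timing out under load.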