N8N long running workflows run indefinitely

Describe the problem/error/question

I have several workflows that take around 40 minutes to complete, and once they get past a certain point in time they never stop running. The workflow has no timeout set, and I have tried a few workarounds with no luck. Is there something I am missing?

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.6.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: Ubuntu 22.04.3 LTS

Hi @Perps01, welcome to the community.

Without an example to reproduce your problem it’s hard to say for certain what’s happening here. In general, n8n’s timeout logic is somewhat limited: n8n only checks whether the timeout has been reached after each node finishes executing, so it won’t be able to stop a node that is currently running.
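
For reference, a timeout can be set per workflow under Workflow Settings > Timeout Workflow, or instance-wide through environment variables. A rough sketch for a Docker setup (container name, volume, and values here are assumptions, not your actual config):

```
# Instance-wide execution timeout in seconds; individual workflows can still
# override it up to EXECUTIONS_TIMEOUT_MAX in their own workflow settings.
# Note: the timeout is only checked between node executions, so it won't
# interrupt a node that is already running.
docker run -d --name n8n \
  -p 5678:5678 \
  -e EXECUTIONS_TIMEOUT=3600 \
  -e EXECUTIONS_TIMEOUT_MAX=7200 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```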

Perhaps as a first step you could check your server logs for any errors? An unexpected crash, for example, might leave your database in a state where the workflow has a start time but no final status, causing it to show up as “running” in the execution list without actually doing anything.
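
Since you are running via Docker, something along these lines should surface any crashes (assuming your container is simply named n8n; adjust to your setup):

```
# Follow the container logs and look for errors or restarts around the time
# an execution got stuck.
docker logs -f n8n

# If nothing obvious shows up, the instance can be restarted with a higher
# log level, e.g. by adding: -e N8N_LOG_LEVEL=debug
```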

Specifically, it’s with long-running SSH node executions (Command Execution). I have created a workaround that seems to work better: instead of waiting on the command to finish, I loop over a process-list check on the SSH host. Another thing to note is that we are currently running behind an AWS EC2 load balancer. I bring this up because I do see a “Connection Lost” message in the upper right at times, which could have something to do with losing the execution flow in the UI.
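
Roughly, the pattern looks like this (the job script and paths are placeholders for illustration, not the actual command):

```
# "Kick-off" SSH node: start the job detached so the node returns immediately
# instead of holding the SSH session open for ~40 minutes.
nohup /opt/jobs/long_job.sh > /tmp/long_job.log 2>&1 &

# "Poll" SSH node, run inside a Wait + IF loop: prints "running" while the
# job is still alive and "done" once it has finished, so the IF node can
# branch on the output without the node erroring on a non-zero exit code.
pgrep -f long_job.sh > /dev/null && echo "running" || echo "done"
```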

Hi @Perps01, I am very sorry to hear that, though I am glad you found a workaround.

Seeing that the SSH node works quite differently from pretty much any other node (most of which use HTTP requests under the hood), I suspect it could be the culprit here. Which SSH command exactly are you running?