Workflow fails when sending in many requests (e.g. 50, 100, etc.)

Describe the issue/error/question

When sending 100 requests to n8n’s Webhook node, only about 7 go through the workflow successfully before it errors out. We see “Connection Lost” in the upper right and a pop-up in the bottom right saying something along the lines of “workflow failed” or “workflow error”.

We increased our CPU / memory in Lightsail to see whether it would help, since we saw 100% CPU usage during the 100-request run. After upgrading, we tried sending 50 requests, but the same issue occurred: only about 7 went through successfully and the workflow gave out. Later in the evening, 30 more went through successfully.

The workflow runs fine when you send 1 request or a few at a time, but it seems to overload when many requests are sent at once.
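For reference, the kind of burst we are sending can be reproduced with something along these lines (a minimal Python sketch; the webhook URL and payload are placeholders, not our actual values):

    import concurrent.futures
    import requests

    WEBHOOK_URL = "https://our-n8n-host/webhook/our-path"  # placeholder
    N_REQUESTS = 100

    def send(i):
        # POST one JSON payload to the Webhook node and return the status code
        resp = requests.post(WEBHOOK_URL, json={"request": i}, timeout=30)
        return resp.status_code

    # fire all requests concurrently to simulate the burst
    with concurrent.futures.ThreadPoolExecutor(max_workers=N_REQUESTS) as pool:
        statuses = list(pool.map(send, range(N_REQUESTS)))

    # tally how many requests came back with each status code
    print({code: statuses.count(code) for code in set(statuses)})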

My Questions:

  • When it overloads and the connection is lost, what happens to the pending executions? (i.e. what happened to the 93 requests that never went through out of the 100 we sent?)
  • Do they get queued, or do they just disappear?
  • Is a load balancer recommended?
  • Are there any limits on how many requests n8n / our database can receive? While we did this as a sort of stress test, it’s very possible that we will send that many requests, or even far more, in the future, and we want to avoid this happening again.

What is the error message (if any)?

Connection Lost.
Workflow error / workflow failed (something along those lines; I do not recall exactly)

Please share the workflow

Information on your n8n setup

  • n8n version: 0.185.0
  • Database you’re using (default: SQLite): PostgreSQL database (12.10)
  • Running n8n with the execution process [own(default), main]: own, or whatever the default is, as we have not adjusted this
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Can you please make sure to fill in all the requested information? The following two are still missing:

  • Running n8n with the execution process [own(default), main]:
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]:

For the question about the database, I assume you are using SQLite (this is the default if you did not configure anything else).

@jan Just updated my post to answer those two things. As for the database, we’re using PostgreSQL. Thanks!

Thanks. In this case make sure to switch from “own” to “main” mode by setting the environment variable:

EXECUTIONS_PROCESS=main

That will increase the throughput a lot, as spawning a separate process for each execution causes a lot of overhead.
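Since you are running n8n via Docker, that just means passing the variable to the container, for example (a sketch; adjust the image tag and the rest of the flags to your existing setup):

    docker run -it --rm \
      -p 5678:5678 \
      -e EXECUTIONS_PROCESS=main \
      n8nio/n8n:0.185.0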

Generally, outstanding requests will be lost if n8n crashes, unless you run it in scaling (queue) mode, where it pushes everything to a message queue. You can find more information about how to set it up here:
https://docs.n8n.io/hosting/scaling/
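In queue mode, n8n pushes executions to Redis and separate worker processes pick them up, so a crash of the main instance does not lose queued work. The setup looks roughly like this (a sketch; the Redis host is a placeholder, see the docs above for the full set of options):

    # on the main n8n instance
    EXECUTIONS_MODE=queue
    QUEUE_BULL_REDIS_HOST=redis    # placeholder: your Redis host
    QUEUE_BULL_REDIS_PORT=6379

    # then start one or more workers (with the same variables set)
    n8n worker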

We are actually in the process of creating benchmarks right now. I hope we can publish them soon. Alongside the results there will be a whole GitHub repository that will allow you to run your own benchmarks on your own hardware with your own custom workflows, so you can get numbers for your real-world use cases.


@jan Added that environment variable and things seem to be a lot better now. Thank you! Will look into the scaling doc you linked.


You are welcome. Have fun!