I’ve been trying to figure out the best way to queue executions so they don’t all run at the same time and overload the server. For example, say 30 users on my website all submit a form at once. That’s 30 executions starting simultaneously, which can easily overload the server I’m running my automation on.
Is there a way to ensure that multiple workflows don’t get executed at the same time?
If my understanding of how workflows get executed is incorrect, please let me know. But as I understand it, if requests are received at the same time, they all execute at the same time.
Thank you for your help!
When you are running a single instance of n8n, all your workflow executions run in parallel if they are triggered at the same time. If you choose to use queue mode, the workload is distributed between n8n worker instances. Each worker can be started using the
--concurrency=n flag; by default, n8n workers will process 10 concurrent jobs from the message queue (Redis). This way you can control the number of jobs that are executed in parallel, but your server will need additional resources to host the Redis message queue and the additional workers.
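As a rough sketch of what this looks like in practice, a worker service in a docker-compose file might be defined like this (the service and host names here are assumptions; EXECUTIONS_MODE, QUEUE_BULL_REDIS_HOST, and the --concurrency flag come from the n8n queue-mode documentation):

```yaml
# Hypothetical docker-compose fragment for one n8n worker in queue mode.
worker:
  image: n8nio/n8n
  command: worker --concurrency=5   # default is 10 concurrent jobs per worker
  environment:
    - EXECUTIONS_MODE=queue         # pull jobs from the message queue instead of executing locally
    - QUEUE_BULL_REDIS_HOST=redis   # hostname of the Redis service holding the queue
```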
The queue solution @marcus mentioned here will do the trick for you.
Also, setting your execution process to main might solve it, but I would recommend the queue solution. It is not too hard to set up with the documentation and works perfectly.
I set it up yesterday and tested it, and it easily received 2k requests in a few seconds. No issues at all.
And then, of course, it starts processing them with the workers, as many at a time as you have workers (and concurrency set).
Another option would be to have two workflows. One that pushes a message into RabbitMQ and another one that receives messages from it and processes them (configured to run only one at a time).
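The idea behind this two-workflow pattern (a fast producer enqueues messages; a single consumer drains them one at a time) can be sketched with Python’s stdlib queue — this is just an illustration of the decoupling, not RabbitMQ itself, and all names here are made up for the example:

```python
import queue
import threading

jobs = queue.Queue()          # stands in for the RabbitMQ queue
processed = []

def consumer():
    # A single consumer thread means messages are handled one at a time,
    # no matter how fast they were enqueued.
    while True:
        msg = jobs.get()
        if msg is None:       # sentinel value: stop consuming
            break
        processed.append(msg)
        jobs.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# "Webhook" side: 30 form submissions arrive at once and are enqueued instantly,
# so the receiving side stays fast even though processing is serialized.
for i in range(30):
    jobs.put(f"form-submission-{i}")

jobs.put(None)                # tell the consumer to finish
worker.join()
print(len(processed))         # prints 30
```

With RabbitMQ the enqueue step and the consume step would each live in their own n8n workflow, but the shape of the solution is the same: accepting a message is cheap, and the expensive processing happens at whatever rate the consumer can sustain.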
To add to Jan: RabbitMQ works great, so you can try this first, as it is easy to set up.
It will, however, still get overloaded if too many webhooks come in at the same time. I had the same issue with HubSpot, which sends 10 webhooks per second, 200–600 webhooks after each other.
30 in one go shouldn’t be too big of a deal though, so RabbitMQ alone would probably do the trick.
The issue with HubSpot was more that the server couldn’t catch up because of the sheer amount of webhooks coming in over and over again.
Awesome, thank you everyone for the input! I’ll give RabbitMQ a shot.
For future reference, is there a step-by-step tutorial on how to set up queue mode? I tried following the documentation, but I feel like I was definitely missing something, e.g. the encryption key: I wasn’t sure where to get that.
Also tagging @jan in this:
My Webflow triggers (webhooks) come in individually, meaning that if 30 form submissions happen in one go, there will be 30 different executions. If I push them to RabbitMQ, that’s still 30 executions running just to send the data to RabbitMQ.
Unless there is a way to somehow send information directly to RabbitMQ without n8n and then have an MQ trigger fetch the queued data, I don’t see how this approach could work (unless, of course, I missed something).
You did sadly just delete our question template instead of actually filling it in, so I can now only make an assumption (which is not good, because it wastes unnecessary time).
Anyway, generally n8n should have no problem at all starting 30 workflow executions at the same time, especially if they are as simple as receiving data from a webhook and sending it to RabbitMQ (and it has a reasonable amount of RAM, let’s say 1 GB). That is, however, only the case if n8n runs in “main” mode. So my assumption is that in your case it runs in “own” mode, where the performance is honestly kind of horrible in comparison (and it eats up a lot of RAM). I expect that you do not even have to use queue mode or RabbitMQ at all if it runs in “main” mode (but again, this is just an assumption, as I do not have that piece of information).
My apologies, will do better next time.
To help understand my situation better, here are some specs:
Information on your n8n setup
n8n version: 0.197.1
Database you’re using (default: SQLite): MySQL
Running n8n with the execution process [own(default), main]: Own
Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker
I went ahead and added EXECUTIONS_PROCESS=main to the YAML file, under the environment section.
It definitely looks like the RAM usage dropped drastically. However, I’m not sure if that was because I just installed MySQL or because I switched to main.
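The change described above would look roughly like this in a docker-compose file (the service name is an assumption; EXECUTIONS_PROCESS is the documented variable for this setting):

```yaml
# Hypothetical docker-compose fragment; only the relevant lines are shown.
n8n:
  image: n8nio/n8n
  environment:
    - EXECUTIONS_PROCESS=main   # run executions in the main process instead of one "own" process each
```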
Nonetheless, I will probably be looking into switching my execution mode to queue down the line.
Thank you for your help!
The switch to queue mode was fairly easy in Docker.
The documentation is clear enough to do it without a hitch.
There is also a docker-compose example that could be helpful.
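For anyone following along, a minimal queue-mode stack might look something like the sketch below. This is based on the n8n queue-mode docs, but the service names and key value are illustrative, not a definitive setup. It also answers the earlier encryption-key question: the key is not fetched from anywhere — n8n generates one on first start (stored in ~/.n8n/config), and in queue mode every instance must be given the same value so workers can decrypt credentials:

```yaml
# Hypothetical minimal queue-mode stack; names and values are illustrative.
version: "3.7"
services:
  redis:
    image: redis:6-alpine           # message queue backing store
  n8n:
    image: n8nio/n8n                # main instance: UI, webhooks, scheduling
    ports:
      - "5678:5678"
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - N8N_ENCRYPTION_KEY=some-long-random-string   # must match on all instances
  worker:
    image: n8nio/n8n
    command: worker --concurrency=10
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - N8N_ENCRYPTION_KEY=some-long-random-string   # same key as the main instance
```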
@itsalanlee Thanks for confirming my suspicion. Switching to
main results in n8n no longer starting a separate process for each workflow execution. That means an execution starts immediately (instead of taking around one second), and there is no memory overhead (which is usually around 100 MB per process, I think). The throughput is, for those reasons, much higher. I really do believe that it solves your issues and nothing else has to be done.
Switching to queue mode can still make sense, but mainly if you have long-running, CPU-intensive workflows. I would really just give main mode a chance before investing time into setting up RabbitMQ or n8n queue mode.
Awesome, thank you @jan for the explanation of own vs. main, and @BramKn for providing resources for docker-compose queue mode (that GitHub resource is really helpful!)
Appreciate you both for helping out!