Hi!
We're giving n8n a try because it's extremely user-friendly.
But we've reached a point that could be a no-go for this adoption. Let me explain…
We have a message broker, for example RabbitMQ, receiving messages (hundreds or thousands) on a specific topic from multiple apps, in a standardized format. All is fine with this.
Now we add n8n to process these messages using the RabbitMQ Trigger, and it's chaos: n8n simply dies because the server's resources are exhausted immediately.
Our first attempt to solve this was to add a line in RabbitMQTrigger.node.js:
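(The actual one-line tweak isn't reproduced here. As a hypothetical sketch of the kind of change involved, assuming the trigger is built on amqplib's channel API, limiting the consumer's prefetch count would look something like this; `startLimitedConsumer`, the queue name, and the limit of 5 are all illustrative names, not n8n internals:)

```javascript
// Sketch only: cap how many unacknowledged messages the broker pushes
// to this consumer at once, using amqplib's channel API.
async function startLimitedConsumer(channel, queue, handler, prefetch = 5) {
  // With prefetch set, RabbitMQ delivers at most `prefetch` unacked
  // messages to this channel. A consumer that skips this step receives
  // the whole backlog at once, which is what overwhelms the server.
  await channel.prefetch(prefetch);
  await channel.consume(queue, async (message) => {
    try {
      await handler(message);
      channel.ack(message); // acking frees a prefetch slot for the next message
    } catch (err) {
      channel.nack(message, false, true); // requeue on failure
    }
  });
}
```

The idea is that backpressure is enforced by the broker itself: it won't deliver message N+1 on that channel until one of the outstanding N is acked.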
OK, with this tweak n8n processes the messages and apparently doesn't die, but if we look at the execution list, the number of simultaneous executions is insane.
As for scaling n8n with Redis (queue mode), it's basically the same: we can't limit how many executions run at the same time (I think).
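(For what it's worth, and if I'm not mistaken, queue mode does let you cap each worker's parallelism via the worker command's concurrency flag; something along these lines, with the value of 5 purely illustrative:)

```shell
# Main instance hands executions to the queue instead of running them itself
export EXECUTIONS_MODE=queue
n8n start

# Each worker processes at most 5 executions concurrently
n8n worker --concurrency=5
```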
In this scenario (thousands of messages), what would you say is the best approach to solve the problem?
Well, no, my function below doesn't work properly. My tweaks are shown in bold.
Oh god, a controllable consumer would give a lot of people the solution to many problems!
I had exactly the same complaint as you.
The RabbitMQ trigger will try to start as many executions as there are messages in the queue, because of the unlimited prefetch.
It consumes quite a lot of memory, more than 10 GB in my case.
In the end, the container gets killed because it runs out of memory.
But I found a kind of workaround later on: n8n can be run with execution mode = main.
Under this mode, all executions run in the main process.
It dramatically lowers resource usage (< 300 MB) and increases performance (around 100 messages per second).
It really solved the pain point for me.
Maybe you could give it a try.
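(A minimal sketch of enabling this, assuming the `EXECUTIONS_PROCESS` environment variable available in the n8n versions of that era; check your version's docs before relying on it:)

```shell
# Run every execution inside the main n8n process instead of
# forking a separate child process per execution
export EXECUTIONS_PROCESS=main
n8n start
```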
More detail about the execution modes and their pros/cons, in case you're interested: