Can I get some examples of common workflows where OOM occurs?

Hello, I’ve been getting a lot of help from this forum.
We’re hosting n8n ourselves, and occasionally the pod gets restarted after running into an OOM (out-of-memory) error.

I think the cause is one of the following:

  1. A customized RabbitMQ Trigger
  2. The response size of the HTTP requests we use to send emails (approximately 800 KB per request; see the rough estimate after this list)
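
For point 2, this is the rough arithmetic I have in mind for why the response size could add up; as far as I understand, n8n keeps each item’s data in memory for the duration of an execution, and the item count below is purely hypothetical since I don’t yet know how many items one execution actually handles:

```ts
// Back-of-envelope estimate: memory grows roughly with items x response size.
const responseSizeKb = 800;   // observed size of one email-sending HTTP response
const inFlightItems = 1_000;  // hypothetical number of items in one execution

const approxMb = (responseSizeKb * inFlightItems) / 1024;
console.log(`~${approxMb.toFixed(0)} MB held just for response bodies`);
// => ~781 MB, not counting n8n's own per-item overhead
```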

We haven’t been able to pin down exactly which workflow is causing the OOM.

If I set noAck: true on the RabbitMQ Trigger, will n8n keep pulling the next messages off the queue before the current workflow execution has finished?
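
To make my question concrete, this is how I understand the two acknowledgement modes at the AMQP client level (a minimal sketch using amqplib; the URL, queue name, and handle() function are placeholders, and I’m not sure this is exactly how the n8n trigger node is wired internally):

```ts
import amqp from 'amqplib';

async function consumeWithNoAck(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost'); // placeholder URL
  const ch = await conn.createChannel();
  // noAck: true -> the broker treats every message as acknowledged on delivery,
  // so it keeps pushing messages regardless of how many are still being processed.
  await ch.consume('my-queue', (msg) => {
    if (msg) handle(msg.content);
  }, { noAck: true });
}

async function consumeWithManualAck(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  // prefetch(10) -> at most 10 unacknowledged messages are held at once,
  // which bounds how much message data sits in memory.
  await ch.prefetch(10);
  await ch.consume('my-queue', async (msg) => {
    if (!msg) return;
    await handle(msg.content);
    ch.ack(msg); // the broker only delivers more once earlier messages are acked
  }, { noAck: false });
}

function handle(content: Buffer): void {
  // placeholder for the actual workflow logic
}
```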

Before the pod died, slow query execution times increased from about 5,000 ms to 18,000 ms, and then the pod was killed with OOM.

Thanks a lot, n8n Team!

Hey @Hakyoung,

Can you share a bit more information on this one? You can get OOM if you are working with a lot of items, so it could just be that you have a workflow trying to handle too much. Without seeing the data or the workflows it is hard to say, though.

Do you have any patterns yet for when the instance drops out? That might help track down which workflow is causing the issue. Running n8n with debug logging enabled would also help to show what is going on.
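
For the logging side, something like the environment variables below should do it, just double check the names against the logging docs for the version you are running (the heap size value is only an example):

```
# Debug logging
N8N_LOG_LEVEL=debug
N8N_LOG_OUTPUT=console

# Optionally raise the Node.js heap limit while you investigate (value in MB)
NODE_OPTIONS=--max-old-space-size=4096
```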
