Error: MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 global:failed listeners added to [Queue]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

Description of the Issue:

I have several workflows that schedule tasks in Cloud Tasks; each task calls a Cloud Run service that reads a spreadsheet and returns the data to a normalization-and-trigger workflow in n8n.

However, I am running into problems when scaling up: the n8n instance is not handling the load as expected.

Infrastructure Setup:

  • We are running n8n on Kubernetes.

  • Our setup includes (a configuration sketch follows the list):

      • 1 Redis instance.

      • 1 PostgreSQL database.

      • Between 5 and 50 worker pods (auto-scaled).

      • 2 main pods.
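For context, a minimal sketch of the queue-mode environment this topology implies, using the documented n8n variables; the hostnames are placeholders, not values from this thread:

```yaml
# Sketch: queue-mode settings shared by the main and worker pods.
# "redis" and "postgres" are illustrative hostnames.
apiVersion: v1
kind: ConfigMap
metadata:
  name: n8n-queue-mode
data:
  EXECUTIONS_MODE: "queue"        # offload executions to workers via Redis
  QUEUE_BULL_REDIS_HOST: "redis"  # the single Redis instance
  DB_TYPE: "postgresdb"           # the single PostgreSQL database
  DB_POSTGRESDB_HOST: "postgres"
```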

Issue Details:

After spending considerable time adjusting the resources of these pods and enabling automatic scaling, we have encountered the following error in the leader pod of the main instance:


MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 global:failed listeners added to [Queue]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

Information on your n8n setup

helm-n8n: helm-n8n.zip - Google Drive

Hey @Guilherme_Graham

Can you start by setting N8N_PROXY_HOPS on your main instance to match the number of reverse proxies you are using? Usually this will be 1, and it will fix the warning about the X-Forwarded-For header.
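On Kubernetes that is just an env entry on the main Deployment; a minimal sketch, assuming a single reverse proxy (e.g. your ingress) in front of n8n:

```yaml
# Sketch: set N8N_PROXY_HOPS on the main pod(s) to the number of
# reverse proxies in front of n8n (assumed to be 1 here).
spec:
  template:
    spec:
      containers:
        - name: n8n
          env:
            - name: N8N_PROXY_HOPS
              value: "1"
```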

For the event emitter message, I thought we had put in a fix for that, so I would try an upgrade. But with up to 50 workers I also suspect it is resource-related, which is why it runs out of memory shortly after.

Can you try running just 5 workers with the concurrency set to something like 5 as a starting point and see if that works?
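In Kubernetes terms that could look like the sketch below: pin the worker Deployment to 5 replicas (or set the HPA min/max to 5 while testing) and pass the concurrency to the worker command. This assumes the official image's default entrypoint; the values are starting points, not recommendations:

```yaml
# Sketch: 5 workers, each handling at most 5 concurrent executions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n-worker
spec:
  replicas: 5                  # pin/disable autoscaling while testing
  template:
    spec:
      containers:
        - name: n8n-worker
          image: n8nio/n8n     # pin to your version tag
          args: ["worker", "--concurrency=5"]
```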

You also mentioned 2 main pods. I assume you are using the multi-main feature of n8n with an enterprise license; if that is the case, feel free to email in and we can handle this there. If you are not using multi-main, I would not recommend 2 main instances: we don't support running 2 at once without the feature, and it can result in everything being duplicated.
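If you do have the license, multi-main is switched on via an environment variable on the main pods; a sketch, assuming the variable name from the current n8n docs:

```yaml
# Sketch: only valid with an enterprise license; without it, run a single main.
env:
  - name: N8N_MULTI_MAIN_SETUP_ENABLED
    value: "true"
```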
