Email Trigger (IMAP)

Hi all,

Describe the problem/error/question

Whenever I run a test on my IMAP Email Trigger, the test runs forever, and most of the time the server goes down, with this error:

Registered runner "JS Task Runner" (Hsdm0m9NjweG_oqZqNwgf)

<--- Last few GCs --->

[6:0x74d628357650] 85077 ms: Scavenge 1502.3 (1544.2) -> 1501.4 (1545.0) MB, 18.21 / 0.00 ms (average mu = 0.926, current mu = 0.732) allocation failure;
[6:0x74d628357650] 85104 ms: Scavenge 1503.0 (1545.0) -> 1502.6 (1555.2) MB, 10.40 / 0.00 ms (average mu = 0.926, current mu = 0.732) allocation failure;
[6:0x74d628357650] 85340 ms: Mark-Compact 1509.2 (1555.2) -> 1506.8 (1562.7) MB, 195.53 / 0.00 ms (average mu = 0.850, current mu = 0.493) allocation failure; scavenge might not succeed


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

If I send an email to the email address with the workflow activated, nothing happens…
→ My email address is hosted by OVH, and the configuration seems to be OK, as the credential connects successfully

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

I don’t have any output returned, as the first step is not working, even in test mode

Information on your n8n setup

  • n8n version: 1.83.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux

I also very often see a ‘Connection lost’ message in the top bar of n8n…

The heap limit is the maximum amount of memory a program like n8n (running on Node.js) can use to store data while it works.

How much RAM is available on the machine running n8n? That’ll help me figure out if the server itself is running low on memory.

If you search this forum for JavaScript heap out of memory, you’ll find plenty of threads covering this exact issue.
The problem is that some workflow on your instance is using so much memory that the Node.js process runs out of it.
You can address this by

  1. looking at which workflow might be causing this, and either optimizing it or disabling it, or
  2. increasing the maximum available heap size by setting the env variable NODE_OPTIONS="--max-old-space-size=2048" (that example raises the limit to 2 GB), or
  3. setting a concurrency limit via the env variable N8N_CONCURRENCY_PRODUCTION_LIMIT, e.g. 5, to make sure that at no point more than 5 executions run concurrently (see the Docker sketch just below for how to pass both variables).
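
As a rough sketch of how to pass those two variables in a Docker setup (assuming a plain docker run of the official image; the container name, port, and volume below are the usual defaults rather than your actual values, and with docker-compose the same variables go under the service’s environment: key):

# Hypothetical example: start n8n with a 2 GB heap limit and at most 5 concurrent executions.
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e NODE_OPTIONS="--max-old-space-size=2048" \
  -e N8N_CONCURRENCY_PRODUCTION_LIMIT=5 \
  docker.n8n.io/n8nio/n8n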

Hi Franz,

Thanks for your reply!

My hosting is with Hostinger:
Current plan: KVM 2
CPU: 2
Memory: 8 GB
Storage: 100 GB

Thanks! I’ve set the limit to 3 GB, and still the same issue…

Can you try setting it to NODE_OPTIONS="--max-old-space-size=6144" for 6 GB, and see if that resolves the issue?

It’s not necessarily a great long-term solution, but it would help to know whether the execution really needs a couple of GB of memory, or whether there is a bug causing some kind of recursion and exhausting all memory.
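
After changing NODE_OPTIONS, one way to double-check the heap limit the Node.js process actually picked up (a quick sketch, assuming the container is named n8n) is to ask V8 directly from inside the container:

# Prints the effective V8 heap limit in MB, using the running container's environment.
docker exec n8n node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024, 'MB')"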

Yes, I’ll try that.
The thing is, I’ve just installed n8n and I don’t have any workflow running.

Just Gmail credentials, IMAP credentials, and one test workflow.

And the current usage is:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ddc08bad62b1 n8n 80.07% 1.082GiB / 3GiB 36.05% 3.06GB / 25.6MB 99.8MB / 340kB 20

It seems way too high for something that’s doing nothing, no?

And the usage is constantly changing:

root@srv746591:~/n8n-docker# docker stats n8n --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ddc08bad62b1 n8n 1.68% 2.501GiB / 3GiB 83.37% 4.47GB / 36.2MB 99.8MB / 344kB 20
root@srv746591:~/n8n-docker# docker stats n8n --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ddc08bad62b1 n8n 6.78% 2.506GiB / 3GiB 83.53% 4.47GB / 36.3MB 99.8MB / 344kB 20
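
If it helps to see how the usage evolves over time rather than taking one-off snapshots, a small loop like the following (a sketch; it assumes the container is named n8n and GNU date) appends a timestamped sample every 30 seconds:

# Log CPU and memory usage of the n8n container every 30 seconds.
while true; do
  echo "$(date -Is) $(docker stats n8n --no-stream --format '{{.CPUPerc}} {{.MemUsage}}')" >> n8n-usage.log
  sleep 30
done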

That’s definitely way too high for an n8n instance doing nothing.
n8n usually starts up with around 350 MB, but after 15 seconds or so the usage should go down to 200-220 MB.

Do you have any active executions?

No active executions; I’ve even deleted everything from my instance.

Is this the standard n8n Docker image, or a custom image?

Also, does restarting the instance reduce the memory usage at all?

No, not at all. I’ve just tried restarting the container, and it doesn’t help.

Well, I’ve followed the tutorial from Hostinger, the section ‘How to install n8n on Ubuntu manually’.

I’ve installed it using the containerized installation with Docker.

Should I remove everything and try the n8n tutorial instead?
I’ve already done it a few times…
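
For reference, if you do decide to start over, a minimal clean-restart sequence with Docker Compose could look like this (a sketch assuming the compose file lives in ~/n8n-docker and the service is named n8n; use docker-compose instead of docker compose if you’re on the older standalone binary, and note that down without -v keeps named volumes, so credentials and workflows survive):

cd ~/n8n-docker
docker compose down           # stop and remove the container (named volumes are kept)
docker compose pull           # fetch the latest n8n image
docker compose up -d          # recreate the container from docker-compose.yml
docker compose logs -f n8n    # follow the logs to confirm a clean start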

That’s pretty weird. I’ve switched the config to a docker-compose.yml file.

And now everything seems to be back to normal:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
16297030b5b3 n8n 32.97% 206.3MiB / 3GiB 6.72% 329kB / 1.62MB 224MB / 221MB 20

With the CPU moving up and down though.

I don’t know what happened, it’s really frustrating.
=> Could it be the following: I had an IMAP trigger that fired for any email received and still marked as unread. It seems it went through the whole 7000 unread emails and, for each of them, tried to mark it as read, etc.
I thought the trigger would only fire when a new email arrives, not for the ‘old’ emails.

I think the current implementation looks for all “UNSEEN” emails, and once all of those are processed, it saves the “lastMessageUid” and from then on only fetches emails after that “lastMessageUid”.
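
If you want to check how many messages such a trigger would pick up on its first run, one way (a sketch, not n8n-specific; it relies on curl’s IMAP support, and the host, user, and password below are placeholders for your OVH IMAP details) is to ask the server for the UNSEEN set directly:

# Ask the IMAP server which messages in INBOX are still unread.
curl --silent --user "you@example.com:yourpassword" \
  --url "imaps://imap.example.com/INBOX" \
  --request "SEARCH UNSEEN"
# The server answers with a line like "* SEARCH 101 102 103 ...", one number per unread message.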

To avoid issues like this in the future, you could set the env variable N8N_CONCURRENCY_PRODUCTION_LIMIT to something like 10 to make sure that at no point there are more than 10 concurrent executions.
This should also help reduce the memory usage.

Thanks a lot for your reply.

Makes sense, as you’ve already mentioned earlier.

Thanks so much for your reply.

I’m closing the topic as I’ve successfully reached normal metrics.
For anyone finding this topic later: the only problem was that far too many unwanted actions were performed. Instead of being triggered only by new incoming emails, the trigger fired for my 6,000 unread emails, which is too much.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.