Trigger Google Drive - JavaScript heap out of memory

Describe the problem/error/question

I’m having trouble with the Google Drive Trigger. When I click “save” or “activate” in the workflow editor, the button just keeps spinning and never confirms the save. If I refresh the page, the workflow was actually saved, but no confirmation message was shown. If I leave it alone, the “save” or “activate” button keeps spinning for a while.
Checking CPU and RAM with the “ctop” command in the terminal, I can see the container going over its limit while trying to save the workflow with this trigger; if I remove the trigger and add a Webhook or Manual trigger instead, it saves normally. The container eventually crashes with exit code 134.

Checking the logs:
<— Last few GCs —>

[7:0x7f121dc436a0] 827151 ms: Mark-Compact (reduce) 510.1 (521.6) → 508.2 (520.9) MB, 790.56 / 0.08 ms (average mu = 0.155, current mu = 0.103) allocation failure; scavenge might not succeed

[7:0x7f121dc436a0] 827984 ms: Mark-Compact (reduce) 510.3 (521.8) → 508.5 (521.1) MB, 813.74 / 0.02 ms (average mu = 0.094, current mu = 0.024) allocation failure; scavenge might not succeed

<— JS stacktrace —>

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

----- Native stack trace -----

I always set the editor’s memory limit to 1GB for my stack, and that has always worked. When I first tried this trigger it worked too, but later that same day these problems started to appear.

The editor even gave this message once:
Problem activating workflow
The following error occurred on workflow activation:
There was a problem activating the workflow: “The service was not able to process your request”

Information on your n8n setup

  • n8n version: 1.90.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Swarm
  • Operating system: Debian 12

Hey @brunodevx, hope all is well.

Try allocating more memory to your instance, or increase the Node.js heap size by setting this environment variable:

NODE_OPTIONS=--max-old-space-size=2048

(replace 2048 with the desired amount in MB).
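For a Docker Swarm deployment, the variable can be set in the stack file’s `environment` block. A minimal sketch, assuming your service is named `n8n` (adjust the service name, image tag, and limits to your own stack file):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      # Raise the V8 old-space heap to 2 GB.
      - NODE_OPTIONS=--max-old-space-size=2048
    deploy:
      resources:
        limits:
          # Keep the container limit above the heap size so the heap
          # can actually grow before the container-level limit is hit.
          memory: 2560M
```

Note that raising only `--max-old-space-size` while leaving the container capped at 1GB won’t help: the container will be killed before Node can use the larger heap.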

See this doc.

There are obviously steps that can be taken inside the workflow itself too, like:

  • splitting data into smaller chunks
  • moving loops to sub-workflows
  • avoiding heavy Code nodes
  • avoiding running the workflow manually
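To illustrate the first point, processing a large item list in fixed-size chunks keeps peak memory lower than holding every intermediate result at once (in n8n this is what the built-in Loop Over Items / Split In Batches node does for you). A minimal generic JavaScript sketch of the idea, not tied to any n8n API:

```javascript
// Split a large array into fixed-size chunks so each batch can be
// processed, and its intermediate results released, independently.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Example: five items in batches of two produce three chunks.
const batches = chunk([1, 2, 3, 4, 5], 2);
for (const batch of batches) {
  // process one batch at a time instead of the whole array
  console.log(batch.length);
}
```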

Another thing you probably want to do is update your instance to the latest stable version.

Hello, thank you very much for your feedback.

Regarding memory consumption, it’s strange because I monitor my VPS every day.

My VPS has 16GB of memory, but I limit each stack (admin, webhook, worker) to 1GB each. Looking at the monitoring, I know that in total, my 40 active workflows only use 480 to 540MB without going much beyond that.

However, simply adding a Google Drive trigger that checks for updates to a file specified by ID pushed it past the 1GB limit and crashed the n8n container.

Something about this trigger is overloading the workflow.

And regarding the version, I’m actually about to update, but I’m still analyzing the impact of this change on my operation.

I don’t know if this bug has been fixed in a newer version, but I haven’t seen anything like it in the GitHub release notes.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.