Problem with Loop - JavaScript heap out of memory

Describe the problem/error/question

The goal of my workflow is to search for XML files on Google Drive and merge them into a single output. The folder structure on the drive is non-uniform, so I used a slightly modified version of this solution: Sharing my first "hard" (working) workflow as a dev - Recursively get all sub-folders from a google drive folder to recursively find all nested folders. Then, from each folder, I extract the files I need and iterate over their IDs in another loop using a helper workflow, which returns the XML file content converted to JSON.

I manage to process about 21 out of 61 files before the process crashes.

n8n has 3 GB of memory.

Could you please suggest possible solutions to this issue?
I’ve tried processing files in small batches (e.g., 5 at a time and saving the results to disk), but this also ends up overloading the memory.

What is the error message (if any)?

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

Please share your workflow

Main:

Get data from XML:

Get subfolders list:

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.99.1
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux
  1. Avoid keeping all XML content in memory
    Instead of merging in-memory inside n8n, stream each converted XML file (as JSON) directly to an external storage (e.g., S3, Supabase Storage, or local disk) immediately after it’s parsed.
  2. Use a database or temporary storage to collect partial results
    Store each parsed JSON blob with a file ID or timestamp. Don’t append to an array or keep anything in n8n context.
  3. Process one file at a time
    Add a Wait node between each iteration of your loop. Even a 200 ms delay between file downloads helps prevent memory buildup.
  4. At the end, run a second workflow to fetch and merge results from disk/database
    Keep this as a lightweight merge task, not part of the main recursive loop.
  5. Optional but recommended:
    Increase Docker memory limit:


environment:
  - NODE_OPTIONS=--max-old-space-size=4096

You can follow this logic:

Folder Loop
└→ Call sub-workflow for each file:
   → Read file
   → Parse XML → JSON
   → Save JSON to S3 / disk / database
   → Do not return content to the main flow
└→ Wait 200 ms
… repeat

At the end, a separate workflow:
→ Read all JSON
→ Combine into a final result

The logic behind why this will work:

  • Sub-workflow: data is freed at the end of each execution, avoiding accumulation.
  • External storage: avoids keeping large structures in n8n’s internal memory.
  • Intentional pauses: allow Node.js garbage collection to run.
  • Increased memory: a larger heap can process larger loads without errors.

I’ve updated Sharing my first “hard” (working) workflow as a dev - Recursively get all sub-folders from a google drive folder to address the memory issues.

Feel free to share the workflow (if you are allowed to) if you use the new logic.

I honestly feel a pure-code implementation would be simpler here, since you’re already using Code nodes. But maybe (like me) you were forced by client requirements to use n8n for the job, haha.

Hi, and thank you for the suggestions.

If I understood you correctly, I’ve been processing the XML files and saving them one by one to Google Drive in JSON format. After that, I want to run a separate workflow that reads all these files and combines the results.

Could you please let me know if I’m doing everything correctly? Because even with this approach, I’m still getting an error related to insufficient memory.

You are self-hosting, so here are a few things you might research/consider:

  1. Set N8N_DEFAULT_BINARY_DATA_MODE=filesystem so binary data is stored on disk instead of in memory.
  2. The download and Extract From File steps should be done in a sub-workflow, so that memory is freed when it finishes. Also save the output to disk.
  3. Use the Execute Command node to run a shell command for OS-level concatenation once you have all the files on disk. This will use very little memory.
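For reference, point 1 and the earlier NODE_OPTIONS suggestion can both go in your Docker configuration. A sketch of a docker-compose fragment, assuming your service is named n8n:

```yaml
services:
  n8n:
    environment:
      # keep binary data on disk instead of in n8n's memory
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      # raise the Node.js heap limit (here 4 GB)
      - NODE_OPTIONS=--max-old-space-size=4096
```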

I hope the final file-upload node doesn’t load the whole file into n8n’s memory.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.