Hi everyone,
I’m currently working on a workflow where I process around 10,000–15,000 resumes. To handle this, I’m using batch processing (processing items one by one / in small chunks), but I’m running into a serious issue related to scalability.
Problem:

- While the workflow is running, I sometimes get an "out of memory" error at the system/container level.
- However, when I check the n8n execution UI, there is no error displayed inside the workflow execution.
- The execution either stops silently or hangs with no visible error.
My Setup:

- n8n Version: 1.123.4
- Deployment: Self-hosted via Docker
- Processing Type: Large dataset (10k–15k items), batch-based processing
What I’ve Tried:
Still, the issue persists when the dataset grows large.
Hi @abhi_kanjia - Could you please share your Docker Compose file and the output of the command below? That should help narrow down what is killing the process:

```shell
docker logs --tail 100 <container-name> 2>&1 | grep -iE "killed|oom|heap|memory"
```

There are some tips/tricks with the variables below in Docker Compose to manage memory. You won't need all of them; depending on what the logs show, you may only need one or two.
```yaml
services:
  n8n:
    image: n8nio/n8n:1.123.4   # or latest
    mem_limit: 4g              # ← START HERE (try 4g → 8g if you have the RAM)
    # or: memswap_limit: 6g
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096    # ← Critical: tells Node.js it can use more heap
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem   # ← Huge win for resumes/PDFs (stores binaries on disk instead of RAM)
      # Optional but recommended for large runs:
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none      # Don't store huge successful execution data
      - EXECUTIONS_DATA_MAX_AGE=24                # Prune after 24h
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=5000
      - DB_SQLITE_VACUUM_ON_STARTUP=true          # If using SQLite
```
Let me know how this goes; I'm on standby to assist.
Hi @abhi_kanjia Welcome!
Based on your flow, I recommend moving your looping logic into a sub-workflow via the Execute Sub-workflow node, so that memory is freed once each sub-workflow execution completes. That should avoid the memory-related failures. I also agree with @Jekylls on the Docker customization.
The Docker memory cap and sub-workflow suggestions above are correct starting points, but there’s one more thing that bites people hard on large batch workflows: n8n stores full execution data in memory while the workflow runs.
Processing 10-15k resumes means n8n is accumulating the data for every item in the current execution context until the workflow completes. Sub-workflows help here because each one clears its context on finish.
A few more things worth adding to what’s already been suggested:
Limit stored execution data
Set EXECUTIONS_DATA_MAX_AGE=1 and EXECUTIONS_DATA_PRUNE=true in your Docker env. This doesn’t reduce mid-run memory but keeps the database lean and reduces memory pressure from previous execution reads.
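As a Compose fragment, that looks like the following (assuming the same `n8n` service name used earlier in this thread):

```yaml
services:
  n8n:
    environment:
      - EXECUTIONS_DATA_PRUNE=true   # enable automatic pruning of old executions
      - EXECUTIONS_DATA_MAX_AGE=1    # delete execution data older than 1 hour
```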
Use Queue Mode if you aren’t already
If you’re running standard mode, n8n handles all executions in the main process. Queue mode with a worker offloads execution to a separate process, so an OOM in the worker doesn’t take down your main instance:
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
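A minimal sketch of what queue mode looks like in Docker Compose. The service names and Redis image are illustrative; adjust to your setup, and note that the worker is started by passing `worker` as the container command:

```yaml
services:
  redis:
    image: redis:7

  n8n:
    image: n8nio/n8n:1.123.4
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - redis

  n8n-worker:
    image: n8nio/n8n:1.123.4
    command: worker                   # runs executions in a separate process
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - redis
```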
The silent failure is a Docker OOM kill
When Docker hits the memory limit, the kernel OOM killer terminates the process with SIGKILL, which is why you see no error in the n8n UI: the execution just stops mid-flight. Running `docker events --filter event=oom` while your workflow runs will show OOM kills in real time if this is what's happening.
For 10–15k items, I'd recommend batching at the workflow level: split your resume list into chunks of ~500 before the loop even starts, and run each chunk as a separate triggered execution rather than one giant loop. That keeps peak memory manageable.
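To illustrate the chunking step, here's a small sketch in plain Node.js. In n8n you'd put similar logic in a Code node (where the `items` array is provided by the node); the function and sample data here are just for illustration:

```javascript
// Split a large list into fixed-size chunks so each chunk can be
// dispatched as its own (sub-)workflow execution.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Example: 12,500 resume records → 25 chunks of 500 items each
const resumes = Array.from({ length: 12500 }, (_, i) => ({ id: i }));
const batches = chunk(resumes, 500);
console.log(batches.length);     // 25
console.log(batches[24].length); // 500
```

Each batch can then be handed to an Execute Sub-workflow node, so its memory is released as soon as that sub-execution finishes.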