Issue description
I’m having an issue with a workflow called “Evolução de Clientes” that processes large datasets from MSSQL and generates Excel reports using a Code node (ExcelJS).
The workflow runs fine with small or medium datasets (up to around 4,000 rows), but when it tries to process larger datasets (10k–40k+ records), the n8n Docker container crashes — it doesn’t reach the Excel generation step.
There’s no error message in n8n itself — the container simply stops and restarts due to an Out of Memory (OOM) event in Docker.
I’ve already tried pagination, memory allocation adjustments, and workflow optimizations, but the issue persists.
Error details
There is no n8n error message.
The container dies and restarts with this message in the Docker logs:

`Killed`
`Out of memory: Kill process 1321 (node) score 937 or sacrifice child`
Workflow summary
Here’s a simplified version of the flow (I can share the full JSON if needed):
- Microsoft SQL → fetch data
- Code node → generate data per coordinator
- Email → send notification
- Loop per coordinator:
  - Execute Query (MSSQL, paginated with OFFSET/FETCH)
  - Merge results → Code node that aggregates all pages (see the sketch at the end of this section)
  - Generate Excel (ExcelJS via Code node)
  - Send Email with the Excel attachment
When the dataset is small (under 5k rows), everything works — the Excel file is generated correctly.
When the dataset is large (e.g., 40k–50k rows), the container crashes before completing the query merge or Excel generation steps.
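For reference, this is roughly what the per-page query and the merge Code node look like. It's a simplified sketch: the table and column names are placeholders, not the real ones.

```js
// Merge Code node ("Run Once for All Items"), downstream of the paginated
// Execute Query node. Each loop iteration runs a query along these lines
// (placeholder table/column names):
//
//   SELECT Id, Coordenador, Cliente, Status
//   FROM dbo.Clientes
//   ORDER BY Id
//   OFFSET @offset ROWS FETCH NEXT 2000 ROWS ONLY;
//
// The merge step then concatenates every page into one in-memory array,
// so by the last page the full 40k+ row result set lives in this array.
const allRows = $input.all().map(item => item.json);

return allRows.map(row => ({ json: row }));
```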
Environment details
- n8n version: 1.109.2 (self-hosted)
- Database: PostgreSQL
- Execution mode: main
- Deployment: Docker
- OS: Ubuntu 22.04 LTS
- Environment variables:
  - `NODE_OPTIONS=--max-old-space-size=8192`
  - `N8N_ENABLE_V8_CODE_NODE=true`
  - `NODE_FUNCTION_ALLOW_EXTERNAL=exceljs,bcryptjs`
  - `NODE_FUNCTION_ALLOW_BUILTIN=fs,crypto,util`
  - `N8N_REDIS_CACHE_ENABLED=true`
- Container memory: 11.64 GB total
- Memory usage before the crash: ~1.5 GB (then OOM while handling ~42k rows)
Additional context
I’ve already:
- Implemented SQL pagination (`OFFSET/FETCH NEXT 2000 ROWS`)
- Optimized the merge logic between pages
- Increased the Node.js heap size (via `--max-old-space-size=8192`)
- Tried Redis caching for input/output management
Even with all of that in place, the issue still occurs: the container crashes with no warning and no error in the n8n logs, and I have to restart it manually.
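For completeness, the Generate Excel Code node currently does roughly this (simplified; the sheet and field names are placeholders). It builds the entire workbook in memory before attaching it:

```js
// Current (non-streaming) Excel generation, simplified
const ExcelJS = require('exceljs');

const workbook = new ExcelJS.Workbook();
const sheet = workbook.addWorksheet('Evolução de Clientes');

// One row per merged record coming out of the aggregation step
for (const item of $input.all()) {
  sheet.addRow(Object.values(item.json));
}

// writeBuffer() materializes the whole .xlsx file in memory at once
const buffer = await workbook.xlsx.writeBuffer();

return [{
  json: {},
  binary: {
    data: {
      data: Buffer.from(buffer).toString('base64'),
      mimeType: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
      fileName: 'evolucao_clientes.xlsx',
    },
  },
}];
```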
This isn’t isolated to this workflow — I’ve observed similar behavior in other flows that handle large SQL result sets or heavy JSON merges.
What I’d like to know
- Is there a recommended way to handle large datasets (e.g. streaming, chunking, or buffering) inside n8n Code nodes? The streaming sketch below shows the kind of approach I mean.
- Could this be related to how n8n buffers data internally between nodes?
- Or am I missing a Docker or n8n configuration option for handling high-memory workflows properly?
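To make the first question concrete, this is the kind of streaming approach I have in mind: ExcelJS's WorkbookWriter, which flushes committed rows to disk instead of keeping the whole workbook in memory. This is only a sketch; the temp file path, and whether this works cleanly inside the Code node sandbox, are assumptions on my part:

```js
// Sketch: streaming Excel generation with ExcelJS's WorkbookWriter
const ExcelJS = require('exceljs');

const workbook = new ExcelJS.stream.xlsx.WorkbookWriter({
  filename: '/tmp/evolucao_clientes.xlsx', // assumed writable path in the container
  useStyles: false,
  useSharedStrings: false,
});

const sheet = workbook.addWorksheet('Evolução de Clientes');

for (const item of $input.all()) {
  // commit() flushes the row to disk so it can be garbage-collected
  sheet.addRow(Object.values(item.json)).commit();
}

sheet.commit();
await workbook.commit(); // finalizes the .xlsx on disk

// The file could then be read back with fs (already in NODE_FUNCTION_ALLOW_BUILTIN)
// and attached to the email, instead of building a giant buffer in memory.
return [{ json: { file: '/tmp/evolucao_clientes.xlsx' } }];
```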
