When reading a lot of files, or large files, from disk, the node either hangs or never finishes.
What is the error message (if any)?
When trying to read 500 files with a total size of 500 MB, the logs show:
There was a problem running hook "workflowExecuteAfter" RangeError: Invalid string length
at Array.join (<anonymous>)
at stringify (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/flatted/cjs/index.js:78:23)
at ExecutionLifecycleHooks.<anonymous> (/usr/local/lib/node_modules/n8n/src/execution-lifecycle/execution-lifecycle-hooks.ts:230:29)
at ExecutionLifecycleHooks.runHook (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/execution-lifecycle-hooks.ts:120:28)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2136:6
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2158:11
When retrying with half of the files (262 files, 150 MB total), it returns 260 items but the node never finishes, and the following error appears:
PayloadTooLargeError: request entity too large
at readStream (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/raw-body/index.js:163:17)
at executor (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/raw-body/index.js:120:5)
at new Promise (<anonymous>)
at getRawBody (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/raw-body/index.js:119:10)
at IncomingMessage.req.readRawBody (/usr/local/lib/node_modules/n8n/src/middlewares/body-parser.ts:40:34)
at parseBody (/usr/local/lib/node_modules/n8n/src/middlewares/body-parser.ts:52:12)
at bodyParser (/usr/local/lib/node_modules/n8n/src/middlewares/body-parser.ts:76:17)
at Layer.handleRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/lib/layer.js:152:17)
at trimPrefix (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:342:13)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:297:9
at processParams (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:582:12)
at next (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:291:5)
at /usr/local/lib/node_modules/n8n/src/abstract-server.ts:251:11
at Layer.handleRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/lib/layer.js:152:17)
at trimPrefix (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:342:13)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/router/index.js:297:9
With 148 items and 94 MB total, it successfully parses the folder.
What is happening here: are there too many files, or should I first gather the total file size and stop before reaching 100 MB?
What seems to be happening is that you’ve hit two different limits in n8n’s execution engine and API layer:
RangeError: Invalid string length
Triggered when n8n tries to serialize the whole execution result (all 500 MB of items) into JSON for saving and/or sending back to the Editor UI. This is not about the HTTP request itself, but about memory and serialization limits in Node.js and flatted.
PayloadTooLargeError: request entity too large
This happens when the Editor UI tries to send or fetch the execution data through the API. n8n's default Express body-parser limit is 16 MB. Once your execution data exceeds this, you can't push or pull it through the REST API.
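If raising the HTTP cap is acceptable, n8n exposes the body-parser limit through the `N8N_PAYLOAD_SIZE_MAX` environment variable (value in MB, default 16). A minimal sketch, assuming a Docker Compose style deployment:

```
environment:
  N8N_PAYLOAD_SIZE_MAX: 64
```

Note this only moves the REST API limit; it does not fix the serialization RangeError, which is a separate constraint.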
How to address it:
Split processing with SplitInBatches
After your “Read Files” step, insert a SplitInBatches node. That way each execution chunk stays small (a few MB at most) and won't crash serialization.
Read Files -> SplitInBatches (e.g. 20 files) -> Process -> Loop back
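The batching pattern above can be sketched in plain JavaScript (the batch size and file names are illustrative, not from this thread): instead of serializing all files at once, each chunk is processed independently, so no single payload grows too large.

```javascript
// Generator that yields the input items in fixed-size chunks,
// mimicking what SplitInBatches does per loop iteration.
function* batches(items, batchSize) {
  for (let i = 0; i < items.length; i += batchSize) {
    yield items.slice(i, i + batchSize);
  }
}

// Hypothetical file list standing in for the Read Files output.
const files = Array.from({ length: 100 }, (_, i) => `file-${i}.eml`);

const processed = [];
for (const batch of batches(files, 20)) {
  // "Process" step: each chunk is handled on its own,
  // keeping the per-iteration payload small.
  processed.push(...batch.map((name) => ({ name, done: true })));
}

console.log(processed.length); // 100
```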
Run headless (outside the UI) for large runs, to avoid WebSocket disconnects.
You can trigger the workflow with a Webhook or Cron node, to avoid having the UI open and crashing it by loading too much data into memory.
Let me know if any of these help alleviate things.
The problem already starts at Read Files; it will never process a larger file set.
Eventually I ended up with this:
I input the project number, and the project folder path is set with Edit Fields nodes. I retrieve the file list of the folder with Execute Command (command line on the host), then parse the stdout it produces and loop over the items, since Read From Disk can't be set to parse inputs one by one.
Performance is abysmal, but at least it's working.
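The "parse the stdout and loop over the items" step described above might look like this in an n8n Code node; the field names (`stdout`, `fileName`) and the sample listing are assumptions, not taken from the actual workflow.

```javascript
// Hypothetical output of the Execute Command node (e.g. `ls` on the
// project folder); in n8n this would come from the previous item's json.
const stdout = "a.eml\nb.eml\nc.eml\n";

// Turn each non-empty line of the listing into one n8n item,
// so a downstream loop can read the files one by one.
const items = stdout
  .split("\n")
  .map((line) => line.trim())
  .filter((line) => line.length > 0)
  .map((fileName) => ({ json: { fileName } }));

console.log(items.length); // 3
```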
Edit: Okay, never mind; it still gets stuck in the loop after item 424. The console output for the n8n container still says: There was a problem running hook "workflowExecuteAfter" RangeError: Invalid string length
I also added these environment variables but it didn’t change anything:
NODE_OPTIONS: --max_old_space_size=10240
N8N_PAYLOAD_SIZE_MAX: 8192
N8N_FORMDATA_FILE_SIZE_MAX: 8192
Am I missing something obvious? Should I split the first 100 items into different branches?
I've put the Read Files step in a separate node. Same result.
I've put the Loop Over Items and Read File steps in a separate node. Here it properly fails with a message.
I think it wants to join or concatenate the JSON outputs, maybe also the binary strings, and then concludes that the string is too long. It just won't work for large datasets.
Is my case that extreme? It's 500 emails. That sounds like peanuts for a modern machine.
Any dev willing to shed some light on this? @n8n_Team
I just did some investigation, and it seems there is a bug in n8n that tries to send the binary data (i.e. the file content) to the UI. I created an internal issue to track this and will link this thread to it.
What might work in the meantime is to keep the file handling in a separate subworkflow. It would be important to set the mode of the Execute Workflow node for the subworkflow execution to `Run once for each item`. It can still happen that your instance runs out of memory, since by default n8n keeps binary data in memory (see Binary data environment variables | n8n Docs). Do you have a way to look at the memory consumption of your n8n instance?
I’ll update this thread with progress on the internal issue.
This is after a reboot of the container: memory starts at 350 MB, climbs to roughly 6 GB, and then it stops, with the logs stating:
There was a problem running hook "workflowExecuteAfter" RangeError: Invalid string length
at Array.join (<anonymous>)
at stringify (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]/node_modules/flatted/cjs/index.js:78:23)
at ExecutionLifecycleHooks.<anonymous> (/usr/local/lib/node_modules/n8n/src/execution-lifecycle/execution-lifecycle-hooks.ts:230:29)
at ExecutionLifecycleHooks.runHook (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/execution-lifecycle-hooks.ts:120:28)
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2136:6
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_5aee33ef851c7de341eb325c6a25e0ff/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2158:11
The machine has 128 GB of RAM available, so at least that shouldn't be a problem.
I also tried setting the environment variable:
N8N_DEFAULT_BINARY_DATA_MODE: filesystem
But this doesn't help, since the string it tries to create is simply too long.
It filters out 100 items at a time; each filtered batch then completes the whole pipeline before the next 100 items get processed.
Here you can see all 529 items are processed before they would be put into the DB. All entries have valid outputs.
But the execution never finishes due to “Invalid string length“, as somewhere in the pipeline strings have to be joined. The OpenAI Chat Model node keeps the red circle, so it's probably there that the JSON outputs get joined into something too long.
The setup I have below is working. All subroutines are duplicates of the chain above. This finishes without problems. The routines need to be separate; otherwise the JSON messages between the nodes get joined into a string that is too long, and you get stuck.
It doesn't work if you keep the duplicate chains in this workflow. They need to be subroutines, and they need to be separate subroutines.
Again, if anyone has a better way of working around this problem, I would love to know. I'd suggest an option in the batch-processing node to isolate all batches in some form, without this joining of strings.
As automation goes, I think this looks awful and must surely be avoidable.
If anyone has any suggestions, I'm very willing to try them.
I see that for large files you would typically want to turn off saving execution data, to avoid serializing huge strings and crashing. I recommend trying this with the version of the flow that previously threw errors.
Let me know if you do; I'm curious whether this will alleviate things.
Thanks for mentioning it. The workaround flow I had in my previous post still works.
But it still won't work with a regular loop, or with filtered results, without separate subflows:
I used these settings :
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS: false
EXECUTIONS_DATA_SAVE_ON_PROGRESS: false
EXECUTIONS_DATA_SAVE_ON_SUCCESS: none
EXECUTIONS_DATA_SAVE_ON_ERROR: none
It caused a crash due to an out-of-memory error in the JavaScript heap:
I had it set to:
NODE_OPTIONS: --max_old_space_size=1024
so i changed it to:
NODE_OPTIONS: --max_old_space_size=102400
Now it just hangs, with the same problem as always in the logs: There was a problem running hook "workflowExecuteAfter" RangeError: Invalid string length
I see, so this is a limit we must accept. It's a Node.js limit: the workflow simply serializes too much data at once and exceeds it, regardless of whether the execution data is persisted later on.
So the workarounds would be either optimizing memory as you do, via splitting the workflow,
OR
storing metadata about the files and deleting the binary data as soon as it's no longer needed (e.g. in a Code node). Add batching on top of that, to avoid reading all files at once.
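The "keep metadata, drop the binary" idea might look like this in a Code node; the item shape follows n8n's `{ json, binary }` convention, but the field names and sample values are assumptions for illustration.

```javascript
// Hypothetical input items as they might arrive from a file-reading node:
// each carries the file content as base64 under `binary.data`.
const items = [
  { json: { fileName: "a.eml" }, binary: { data: { data: "…base64…", mimeType: "message/rfc822" } } },
  { json: { fileName: "b.eml" }, binary: { data: { data: "…base64…", mimeType: "message/rfc822" } } },
];

// Keep only lightweight metadata and omit the `binary` key entirely,
// so downstream serialization never has to carry the file contents.
const stripped = items.map((item) => ({
  json: {
    ...item.json,
    mimeType: item.binary?.data?.mimeType,
  },
}));

console.log(stripped[0].json.mimeType); // "message/rfc822"
```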
That's what I've got right now; I hope it helps at least a bit.
Hello @HenkieTenkie62, could you please check whether setting the N8N_DEFAULT_BINARY_DATA_MODE environment variable to filesystem or database (on n8n v2) resolves the issue?
Also, please add the n8n version you are using and the workflow(s) that can be used to reproduce the issue.
This problem arose with v1.116.1. Yes, I had already tried setting N8N_DEFAULT_BINARY_DATA_MODE: filesystem, to no avail.
This is part of the workflow:
Now, with v2.4.4, regardless of whether the binary data mode is database or default, it completes the routine if I leave out a custom node. Here it completes with 1000+ files, so this is excellent: