Thanks @gualter, I have already read that documentation.
The only thing I could do to reduce memory consumption is split the data, but I am not able to do that because lines are linked by an id.
If I split on the wrong line, the final result will be wrong.
The real problem is not the RAM available on the server, but the memory allocated to Node.js, in particular in the Docker version of n8n.
That's why a new parameter would be welcome.
If it is possible to transform the workflow so it consumes less memory, I will gladly say yes!
But I have a file with more than 300,000 lines and 5 columns.
Just loading the file takes 2 minutes…
You can already increase the amount of memory Node.js is allowed to use. See the docs, at the bottom of the page.
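For example, a minimal sketch of raising the Node.js heap limit for the Docker version — the image name and the 4096 MB value are assumptions, adjust both to your setup:

```shell
# Give the n8n container a larger Node.js heap (value in MB).
# Image name assumes the official n8n Docker image; check your own tag.
docker run -it --rm \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  -p 5678:5678 \
  docker.n8n.io/n8nio/n8n
```

In docker-compose the same `NODE_OPTIONS` line goes under `environment:` for the n8n service.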
There are ways to split the file without reading all the lines at once. It depends on the file, of course. And you can set n8n to use the filesystem for binary data.