You can use pretty much the same approach and have your workflow return the smaller binary files produced by the “Move to file” node in the sub-workflow back to the parent workflow. This can cost a lot of memory though, so make sure to monitor it closely.
Hm, I can’t think of any way to do this for a file downloaded from an HTTP location; every approach I can think of would start with fetching that one big file in the first step. You could do it outside of n8n itself through the Execute Command node and a tool like curl or wget, but the whole file would still have to be downloaded.
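As a rough sketch, the Execute Command node could run something like the following (the URL and target path here are placeholders, adjust them to your setup). The advantage over the HTTP Request node is that the file goes straight to disk instead of into the workflow’s in-memory data:

```shell
# Download the large CSV directly to disk so it never passes through
# the workflow's item data (URL and target path are example values)
curl -sSL -o /home/node/.n8n/temp/big.csv "https://example.com/big.csv"
```

From there the parent workflow can split the file on disk and hand the parts to the sub-workflow.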
Or are you perhaps downloading your CSV file from an SSH/SFTP server instead?
I don’t think AWS S3 files can be manipulated before downloading them, so I don’t think there’s any way around this.
The example workflow does not delete anything. It would, however, write the file into a /home/node/.n8n/temp/ folder, so to remove the file you could simply run rm /home/node/.n8n/temp/* through an Execute Command node at the end of your workflow to clear this temp folder. This will be easy if you use the new version of the Split In Batches node which has a “Done” output.
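The cleanup step in an Execute Command node could look like this (the folder path matches the example workflow; `-f` is added so the command doesn’t fail when the folder happens to be empty on a re-run):

```shell
# Clear out all leftover batch files once the Split In Batches node
# signals completion via its "Done" output
rm -f /home/node/.n8n/temp/*
```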
Hi @Gabriele_Bracciali, this could suggest you have very long lines in your file (or perhaps even a single very long line?). You would need to adjust the splitting logic in the parent workflow in this case.
Which result are you seeing on the Count lines in file node? Is it more than 1 line? If so, you could try reducing the linesPerBatch number on the Prepare batches node as a first step (perhaps try 50 instead of 100 lines per batch).
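If you want to sanity-check what a given linesPerBatch value would produce before touching the workflow, you can mimic the line-based batching on the command line with `split -l` (the file name and batch size below are just examples):

```shell
# Split input.csv into chunks of 50 lines each, written as
# batch_aa, batch_ab, batch_ac, ... in the current folder
split -l 50 input.csv batch_
```

Note that `split -l` counts lines, not bytes, so one extremely long line would still end up in a single oversized chunk, which is exactly the symptom described above.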