I am unable to decompress a large zip file (1.6 GB). I am running n8n locally on Node, and it has been stuck for 20 minutes, eating up 32 GB of RAM and almost all of the CPU (on a Mac Studio Max).
I am more than happy to split it into several zip files and create a sub-workflow to process them, but I have no idea how to get the path of the downloaded archive from the previous step.
split -b 100M /path/to/your/largefile.zip /path/to/destination/folder/prefix
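One caveat with this approach: split only cuts the archive into raw byte chunks, so the pieces are not valid zip files on their own and a sub-workflow cannot unzip them individually; they have to be concatenated back together first. A minimal sketch using a synthetic file (the file names and sizes are placeholders, not from the real workflow):

```shell
# Fabricate a 1 MB stand-in for the real archive.
head -c 1048576 /dev/zero > largefile.bin
# Cut it into four 256 KB chunks named chunk_aa .. chunk_ad.
split -b 262144 largefile.bin chunk_
# Reassemble; the result is byte-identical to the original.
cat chunk_* > reassembled.bin
```

So splitting only helps with transfer or storage limits, not with decompressing the pieces independently.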
How do you have n8n configured? If you have not changed the default binary data mode to filesystem, everything is handled in memory, which can cause slowness.
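For reference, when running from the CLI this is controlled by an environment variable; a sketch assuming a plain npm install:

```shell
# Store binary data on disk instead of keeping it all in memory
# (the default mode keeps binary payloads in memory).
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem
n8n start
```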
There is also a good chance that we could improve the performance of the decompress node, as I don’t think we have touched it in a while.
If I had a large file to decompress and I was running n8n from npm, I would probably use the Execute Command node and pass it off to the OS to decompress using the tools it ships with, as that may be quicker.
Thanks for your answer @Jon . I do use filesystem mode in my settings. You can try the workflow I am using yourself, and you will see that decompressing 1.7 GB takes about 25 minutes…