I am facing the following error in my workflow with multiple HTTP Request nodes:
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Only one workflow is running on the server, activated by a webhook.
The workflow contains the following nodes and execution counts:
Requesting a Dataset of strings 300 times
Requesting an Image in binary format 5000 times
Renaming that Image 5000 times
Uploading that Image to S3 5000 times
I have already configured multiple SplitInBatches nodes for the HTTP Request nodes, as they are needed for the requests to execute correctly and also to keep the workload lower.
The string dataset is about 1500 lines, which I already trim at the beginning of the workflow down to the one line that is necessary to request the images.
When I manually limit the requested images to about 2000 in the GUI and trigger the workflow by hand, it finishes, but of course without all the data it is supposed to handle.
I am running n8n in Docker on a Hetzner server with the standard ENV settings.
The Hardware of the Server is:
2 V-Cores
8 GB RAM
20 GB NVMe SSD
This screenshot shows the load until the error happens and the workflow stops:
I usually split up the workflow when working with large amounts of data.
So create a sub-workflow that handles one request batch and then returns nothing (make sure it doesn't return the result data, as that defeats the purpose of splitting it up).
Something like this:
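As a rough sketch in exported-JSON form (trimmed down and not directly importable; the node name "Run Batch" and the layout are illustrative):

```json
{
  "nodes": [
    { "name": "Webhook",          "type": "n8n-nodes-base.webhook" },
    { "name": "Split In Batches", "type": "n8n-nodes-base.splitInBatches" },
    { "name": "Run Batch",        "type": "n8n-nodes-base.executeWorkflow" }
  ],
  "connections": {
    "Webhook":          { "main": [[{ "node": "Split In Batches", "type": "main", "index": 0 }]] },
    "Split In Batches": { "main": [[{ "node": "Run Batch", "type": "main", "index": 0 }]] },
    "Run Batch":        { "main": [[{ "node": "Split In Batches", "type": "main", "index": 0 }]] }
  }
}
```

The Execute Workflow node ("Run Batch") does the heavy lifting for one batch in the sub-workflow and then loops back into Split In Batches until no items are left, so the result data never piles up in the main execution.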
Also, there should be an option to save binary data to disk instead of keeping it in RAM. This should also help when working with binary data like those images.
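If I remember correctly this is controlled by an environment variable, N8N_DEFAULT_BINARY_DATA_MODE. A minimal docker-compose sketch (image and port details are just illustrative):

```yaml
# Sketch only – assumes a reasonably recent n8n version.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      # Write binary data (e.g. the downloaded images) to disk
      # instead of holding it in RAM for the whole execution.
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```

With plain docker run the equivalent would be -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem.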
I'm trying to prevent my sub-workflow from returning anything. I have attempted to use the <No Operation, do nothing> node at the end of each branch within the sub-workflow, but it still returns a value. What else should I do to resolve this issue?
It would have been better to start a new topic, but here is a quick reply.
You can use a Set node: set it to "Keep Only Set" and also enable the option to run it only once ("Execute Once") in the node settings.
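As a rough sketch, that node could look like this in an exported workflow JSON (parameter names are from the older Set node and may differ in newer n8n versions):

```json
{
  "name": "Return Nothing",
  "type": "n8n-nodes-base.set",
  "typeVersion": 1,
  "parameters": {
    "keepOnlySet": true,
    "values": {}
  },
  "executeOnce": true
}
```

With no fields defined and "Keep Only Set" enabled every item comes out empty, and "Execute Once" makes the node emit just a single empty item instead of one per input item, so the sub-workflow returns essentially nothing.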
Another option, if you use the SplitInBatches node in the main workflow, is to use the community-node version of it and set it to use sub-workflows and to clear the data before returning: