Odd Behavior - workflows stop before completing full batch without errors

I have a workflow split into 2 separate ones. Workflow 1 calls Workflow 2 after each batch (batch size is 1). Essentially it is fetching a URL from a Google Sheet one at a time (Workflow 1), then running some functions on the HTTP response and saving it to MongoDB (Workflow 2).

However, it only runs for 10-50 items and then randomly stops. My execution history does not show any incomplete or errored executions:

And from the execution history diagram of Workflow 1 it seems that the Split Into Batches node is not activating correctly (no green number icon to show it has executed):

Container Logs are not timestamped but show the following (the memory errors may be related to a previous issue before the workflows were separated):

The session "lvlkfvtx59" is not registred.

The session "lvlkfvtx59" is not registred.

<--- Last few GCs --->

[23:0x55e79d28df80]   248863 ms: Scavenge 464.3 (486.3) -> 463.8 (490.3) MB, 13.7 / 0.0 ms  (average mu = 0.566, current mu = 0.402) allocation failure 

[23:0x55e79d28df80]   248899 ms: Scavenge 467.7 (490.3) -> 467.8 (490.3) MB, 4.4 / 0.0 ms  (average mu = 0.566, current mu = 0.402) allocation failure 

[23:0x55e79d28df80]   248907 ms: Scavenge 467.8 (490.3) -> 467.3 (494.3) MB, 8.9 / 0.0 ms  (average mu = 0.566, current mu = 0.402) allocation failure 

<--- JS stacktrace --->

FATAL ERROR: MarkCompactCollector: young object promotion failed Allocation failed - JavaScript heap out of memory

The session "lvlkfvtx59" is not registred.

Looks like n8n is crashing because it runs out of memory.

So there are two things you can do:

  1. Increase the amount of RAM that is available to n8n
  2. Decrease the amount of RAM required, for example by making sure that the data of the sub-workflow does not spill into the main workflow. That can be done by adding a Set node at the very end of the sub-workflow, with the option “Keep Only Set” activated and no values set
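For option 1, one common approach (a sketch assuming a Docker-based install; the image name and the 4096 MB value are assumptions to adjust to your setup) is to raise the Node.js heap limit, which is what the “JavaScript heap out of memory” crash above is hitting:

```shell
# Sketch only: run n8n with a larger V8 heap (here 4 GB).
# --max-old-space-size is a standard Node.js flag passed via NODE_OPTIONS.
docker run -it --rm \
  -p 5678:5678 \
  -e NODE_OPTIONS="--max-old-space-size=4096" \
  docker.n8n.io/n8nio/n8n
```

If n8n runs outside Docker, exporting the same `NODE_OPTIONS` value in the environment that starts the n8n process should have the same effect.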

Thanks. Option 2 worked for me - I added a Set (keep only set option enabled) node to the end of Workflow 1.

Edit - spoke too soon. This enabled the workflow to run much longer and process more items, but it still crashed eventually. On the bright side, we went from processing ~25 items to processing ~150 between each crash.

Ideally I would like Workflow 1 to stop after each run. Then, when Workflow 2 (triggered internally by Workflow 1) completes, it again triggers a new execution of Workflow 1, but starting from the next item in the list.
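The hand-off described above could be sketched as the body of a Function node in Workflow 1 (all names here are assumptions for illustration, not n8n API: the idea is that each completed run passes back the index of the next row, so every execution of Workflow 1 only ever holds one row in memory):

```javascript
// Hypothetical helper (sketch only): given the full list of sheet rows
// and the index handed back by the previous run, emit exactly one row
// plus the index the *next* run should start from.
function pickRow(rows, index) {
  if (index >= rows.length) {
    return null; // list exhausted: stop triggering new runs
  }
  return { json: { ...rows[index], nextIndex: index + 1 } };
}

// Example usage with two sheet rows:
const rows = [{ url: "https://a.example" }, { url: "https://b.example" }];
const first = pickRow(rows, 0); // row 0, with nextIndex 1
const done = pickRow(rows, 2);  // null: nothing left to process
```

Inside n8n this would sit in a Function node, returning `[pickRow(...)]` (or an empty array when `null`) so that only a single item flows on to Workflow 2.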

For now I can run the workflow manually a few times to complete my work. I will try @harshil1712’s suggestion in the other thread when I have a larger number of items to work with.