I'm using a main workflow and a sub-workflow to avoid memory issues, but the Split In Batches node is not processing all the items it receives from the NocoDB node. It received 4,141 items from the NocoDB node and splits them into batches of 90, but it only executes 4 times before stopping.
The images show me testing with a different batch size; when I try with 20, it executes 10 times and then says "executed successfully".
Hi @Trash, glad you figured it out, thanks so much for confirming!
I have 8 GB of RAM allocated to my machine, and n8n crashes when stored memory surpasses 500 MB.
Can you confirm how exactly you're running n8n and how you measure memory consumption? You might need to adjust the `--max-old-space-size=SIZE` setting as suggested here.
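For reference, a common way to pass this flag to a Node.js application such as n8n is through the `NODE_OPTIONS` environment variable. This is only a sketch: the value `4096` is an illustrative choice for a machine with 8 GB of RAM, not a recommendation from the thread, and where you set the variable depends on how you run n8n (Cloudron's env.sh, a systemd unit, docker-compose, etc.):

```shell
# Raise the V8 heap limit to 4 GiB before starting n8n.
# 4096 is an example value; pick a size that leaves headroom
# for the OS and other processes on the machine.
export NODE_OPTIONS="--max-old-space-size=4096"
```

After setting this, restart n8n so the new limit takes effect.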
I've encountered this issue again. I'm running n8n with Cloudron and measuring memory consumption with the graphs in Settings.
Here are my env.sh file configs:
Here is the memory usage graph. n8n crashes when it surpasses 500 MB.
Can you make sure you're on the latest version of n8n and share your server logs, with debug logging enabled, covering the period from just before the crash until the crash itself?
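In case it helps, debug logging in n8n is controlled via environment variables. A minimal sketch, assuming a self-hosted setup where you can set environment variables before n8n starts (the log file path below is just an example):

```shell
# Turn on debug-level logging in n8n
export N8N_LOG_LEVEL=debug

# Optionally write logs to a file in addition to the console,
# so they survive a crash/restart (path is an example)
export N8N_LOG_OUTPUT=console,file
export N8N_LOG_FILE_LOCATION=/var/log/n8n/n8n.log
```

Restart n8n after setting these, reproduce the issue, and then collect the log output around the time of the crash.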
From a quick look, I can’t see n8n crashing (or starting up after a crash). However, I can see a “Maximum call stack size exceeded” error at 2023-11-09T17:55:27.748Z. Is this when your workflow stopped? If not, how exactly did you confirm that n8n itself crashed?