When running a workflow processing about 2 x 16k items, n8n tells me I have an out-of-memory error.
The pod on my Kubernetes cluster crashes and restarts.
I can't tell whether it is a real memory error or whether Kubernetes is restarting the service because it could not reach it.
I tried increasing the available memory with the following environment variables:
NODE_ENV: production
NODE_OPTIONS: "--use-openssl-ca --trace-warnings --max-old-space-size=14336"
and setting memory requests/limits for the pod:
resources:
  requests:
    memory: 15Gi
  limits:
    memory: 15Gi
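
For context, those two pieces sit in my Deployment spec roughly like this (the Deployment name, labels, and image tag below are placeholders; the key point is keeping the V8 heap cap below the container limit):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:1.17.1
          env:
            - name: NODE_ENV
              value: "production"
            # Keep the V8 heap cap (14336 MiB, ~14 GiB) below the container
            # limit (15Gi) so Node.js fails with a heap error before the
            # kernel OOM-killer takes down the whole pod.
            - name: NODE_OPTIONS
              value: "--use-openssl-ca --trace-warnings --max-old-space-size=14336"
          resources:
            requests:
              memory: 15Gi
            limits:
              memory: 15Gi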
What is the error message (if any)?
On kube: Liveness probe failed: Get "http://10.2.0.218:5678/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
On n8n: ERROR: Execution stopped at this node
n8n may have run out of memory while executing it. More context and tips on how to avoid this in the docs
Hi @Thibault_Granie, I am very sorry to hear you’re having trouble.
Processing larger datasets in n8n can indeed eat up a lot more memory than the dataset size itself. The exact memory usage would, however, depend not just on the data, but also on the type and number of nodes you use, for example.
As such it's very hard to say what exactly is killing your pod. If you suspect it could be k8s-specific logic, perhaps you want to simply test the problematic execution outside of your cluster using a test instance fired up with something like docker run -it --rm --name n8n -p 5678:5678 --memory="15g" n8nio/n8n:1.17.1?
It might be worth running docker stats in parallel to get an idea of the memory consumption during your workflow execution.
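
Roughly, that local test could look like this (same image tag as above; note that Docker's --memory flag needs a unit suffix, as a bare number is read as bytes):

# Start a throwaway n8n instance capped at 15 GiB of memory.
docker run -it --rm --name n8n -p 5678:5678 --memory="15g" n8nio/n8n:1.17.1

# In a second terminal, watch the container's live memory usage
# while the problematic workflow runs.
docker stats n8n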
I found a solution: it was basically a Kubernetes issue. Kube killed the pod because the pod was not responding to the liveness probe in time under the very high load, so it was the probe, not an actual out-of-memory error.
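
For anyone hitting the same symptom, the fix is to relax the liveness probe so a busy pod is not declared dead too eagerly. A sketch, assuming the stock /healthz endpoint on port 5678 (the timings below are illustrative, not the exact values I used):

livenessProbe:
  httpGet:
    path: /healthz
    port: 5678
  # The kubelet's default timeout is 1s, which a saturated Node.js
  # event loop can easily miss; allow more time per check and more
  # consecutive failures before the pod is restarted.
  timeoutSeconds: 10
  periodSeconds: 30
  failureThreshold: 5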