On-premise out of memory issue

Describe the problem/error/question

When running a workflow processing about 2 x 16k items, n8n tells me I have run out of memory.
The pod on my Kubernetes server crashes and restarts.

I can’t tell whether it is a real memory error or whether kube is restarting the service because it could not reach it.
I tried increasing the memory with the following environment variables:

NODE_ENV: production
NODE_OPTIONS: "--use-openssl-ca --trace-warnings --max-old-space-size=14336"

and setting the memory resources for the kube pod:
resources:
  requests:
    memory: 15Gi
  limits:
    memory: 15Gi
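
For reference, here is roughly how both of these look in my Deployment spec (the container name and surrounding structure are illustrative, your manifest may differ):

containers:
  - name: n8n
    image: n8nio/n8n:1.17.1
    env:
      - name: NODE_ENV
        value: "production"
      - name: NODE_OPTIONS
        value: "--use-openssl-ca --trace-warnings --max-old-space-size=14336"
    resources:
      requests:
        memory: 15Gi
      limits:
        memory: 15Gi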

What is the error message (if any)?

On kube: Liveness probe failed: Get “http://10.2.0.218:5678/healthz”: context deadline exceeded (Client.Timeout exceeded while awaiting headers)

On n8n: ERROR: Execution stopped at this node

n8n may have run out of memory while executing it. More context and tips on how to avoid this in the docs

Information on your n8n setup

  • n8n version: 1.17.1
  • Database (default: SQLite): PostgreSQL
  • Running n8n via Docker
  • Operating system: Linux

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Sorry, I thought it was provided, my bad.

Hi @Thibault_Granie, I am very sorry to hear you’re having trouble.

Processing larger datasets in n8n can indeed eat up a lot more memory than the dataset size itself. The exact memory usage would, however, depend not just on the data, but also on the type and number of nodes you have in use, for example.

As such it’s very hard to say what exactly is killing your pod. If you suspect it could be k8s-specific logic, perhaps you want to simply test the problematic execution outside of your cluster, using a test instance fired up with something like docker run -it --rm --name n8n -p 5678:5678 --memory="15360m" n8nio/n8n:1.17.1?

It might be worth opening docker stats in parallel to get an idea of the memory consumption during your workflow execution.
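
For example, in a second terminal (assuming the container is named n8n as in the command above):

docker stats n8n

This prints a live view of the container's memory and CPU usage while the workflow runs.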

Hi @MutedJam

Thanks for your reply.

I found a solution: basically it seems that it was a Kubernetes issue: kube killed the pod because the pod was not responding to the liveness probe in time due to the high load.

I changed the following parameter (a sketch of the full probe block is below):

  • livenessProbe:
    • timeoutSeconds: 60

And it works.
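
For reference, the probe section of my deployment now looks roughly like this (the path and port come from the error above; the other values are illustrative and may differ in your setup):

livenessProbe:
  httpGet:
    path: /healthz
    port: 5678
  periodSeconds: 10
  timeoutSeconds: 60
  failureThreshold: 3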


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.