Memory leak?

Describe the problem/error/question

A couple of days ago I found out that my n8n instance crashed and rebooted. Turns out it ran out of memory. I’m not sure how, though, because I only have ~5 fairly simple workflows running (DB reads/writes).

For context, the main workflow pings an Airtable DB every 60 seconds and updates individual records if necessary. A supporting workflow sends out email alerts when prompted by the main workflow.

To diagnose, I set up netdata to track RAM and CPU usage. See below.

What is the error message (if any)?

Here you can see the memory leak before I manually restarted the instance.


Please share your workflow

Let me know if the JSON would help; I’ll have to clean some of the sensitive info out of it first.

Share the output returned by the last node

N/A (let me know if there’s a log I can share if it’ll help)

Information on your n8n setup

  • n8n version: 1.38.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: Ubuntu 22.04.4 LTS

Hi @bean

I might have overlooked it, but how many (v)CPU cores do you have available?
With Docker there is some overhead, and garbage collection inside the container doesn’t work properly if you have 1 CPU or less.
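If you want to check what the container is actually getting, something like this should work (the container name n8n here is an assumption; adjust to your setup):

  # how many CPUs the container is limited to (0 means no limit, i.e. all host cores)
  docker inspect --format '{{.HostConfig.NanoCpus}}' n8n

  # live CPU and memory usage for the container
  docker stats n8n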

Hmm interesting, I wasn’t aware of that.

I have 1 vCPU. Do you think it’s worth testing whether the leak still happens if I install n8n natively on the server? Or is there a cron job I can run?

I’m fairly new to hosting, containers, etc., but willing to learn!

I would always recommend using Docker.
Not sure about any job you can run to fix it. Simply use more than 1 vCPU :slight_smile:
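Roughly like this, assuming your container is named n8n (docker update also works on a running container):

  # give the existing container 2 CPUs without recreating it
  docker update --cpus 2 n8n

  # or set the limit when (re)creating the container
  docker run -d --name n8n --cpus 2 -p 5678:5678 docker.n8n.io/n8nio/n8n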


Gotcha. Even though the 1 vCPU is nowhere near capacity? Is that expected behavior?

Discourse won’t let me upload an image for some reason, so here’s a link to a screenshot of CPU usage.

In any case, I’ll test with a 2 vCPU container in the coming weeks and report back.

@BramKn Reporting back as promised :saluting_face:

I set up an instance on a 2 vCPU server with more RAM and… still leaking.

Here’s a screenshot of the RAM usage. (it won’t let me upload an image for some reason)

Any ideas as to what’s going on? I read in this post that adding a Set node at the end of the flow might free up the memory after execution, so I’m testing that right now :crossed_fingers:

I am not seeing any leaking in my own workflows. Do you have anything else installed on the server?

What are your workflows actually doing? What you see as ~5 simple workflows may actually be heavy on usage.

Where did you originally see that it ran out of memory? Memory usage will grow until it hits a certain amount, then internally it will sort itself out.

Have you tried setting up monitoring using the /metrics endpoint to see what the internals are doing?
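In case you haven’t used it before, a rough sketch (the container name and port are assumptions; N8N_METRICS is the documented toggle):

  # start n8n with the metrics endpoint enabled
  docker run -d --name n8n -e N8N_METRICS=true -p 5678:5678 docker.n8n.io/n8nio/n8n

  # then check the Prometheus-style output
  curl http://localhost:5678/metrics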


I’m tracking the n8n container’s RAM usage using netdata, which is the only other “major” container on the server.

Here’s what the main workflow does:

  1. Fetch RSS feed
  2. Get the first item from the feed
  3. Compare the contents of this first item to a record in Supabase
  4. Perform a few text (formatting) replacements
  5. Pass feed content to another (worker) workflow and execute it
  6. Worker workflow (on the same server) makes an HTTP request with the info

This workflow runs every 60 seconds. The worker workflow runs maybe 2-4 times every hour. Everything else runs maybe 1-2 times a day.

I’m convinced the leak is happening in either the RSS fetch node or the Code node (return the first item of the RSS array). The container runs out of memory and reboots, and this repeats every ~20 hours.

Storage use is also going up, which makes me wonder if n8n is storing the RSS feed items in the DB somewhere. I’ve turned off logging for successful runs with the EXECUTIONS_DATA_SAVE_ON_SUCCESS=none env setting, so executions shouldn’t be slowly increasing storage use.

Is there any way to check why n8n is using up server storage? Would it be better to stop using SQLite and set up a Postgres server, since I have ~60 executions/hour?
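(If Postgres is the way to go, I’m guessing the env changes would look roughly like this based on the docs; the host and credentials are placeholders:)

  DB_TYPE=postgresdb
  DB_POSTGRESDB_HOST=postgres
  DB_POSTGRESDB_PORT=5432
  DB_POSTGRESDB_DATABASE=n8n
  DB_POSTGRESDB_USER=n8n
  DB_POSTGRESDB_PASSWORD=change-me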

I’m new to all this but very eager to learn—thank you for all the help! :slight_smile:

Bean :beans:

Hey @bean,

The RSS node should be OK; I have been using it for a while now, but without seeing some logs it would be hard to say. The Code node we know is heavy on usage, as it creates a sandbox on every run, so if you were passing in a bunch of items it could use more memory depending on the option being used.

On storage: the RSS trigger only stores the datetime of the first item in the feed when the node runs. This datetime is compared against the current datetime to see if the item is new and needs to be picked up in the workflow.

Hey @Jon,

Hmm, makes sense. Does the Code node sandbox memory get freed up after the workflow ends? If not, that might be what’s eating up the memory…

Here are the stats from the last 48 hours for the n8n container, if it helps:

Do I need to manually enable logging? And where can I access it once enabled?
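(From skimming the docs, I think logging is controlled by these env vars, but correct me if I’m wrong:)

  N8N_LOG_LEVEL=debug
  N8N_LOG_OUTPUT=console,file
  N8N_LOG_FILE_LOCATION=/home/node/.n8n/logs/n8n.log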


I might want to take a look at the SQLite db as well... do you know how I can go about doing that?
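(My guess, assuming the default data path in the official image; if the sqlite3 CLI isn’t inside the container I’d copy the file out first:)

  # copy the DB out of the container (container name is an assumption)
  docker cp n8n:/home/node/.n8n/database.sqlite ./database.sqlite

  # then inspect it locally
  sqlite3 ./database.sqlite
  .tables
  SELECT COUNT(*) FROM execution_entity;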

Thank you so much for your help btw—the n8n team is awesome :heart_hands:

Hey @bean,

The memory should always be freed up. In the last 48 hours, how many times did you restart n8n? Those memory graphs don’t look terrible, but they do instantly increase twice, which makes me think the instance is doing a lot.

I have done some digging through some of the cloud instances to look at the stats, and I am not seeing anything that looks like a memory leak at the moment; the Code node is one of the most used nodes.

I would recommend enabling the metrics endpoint and maybe monitoring the n8n application itself which could help.

Hey @Jon,

The instance is crashing and restarting every ~24 hours. To test, I duplicated the main workflow, and now the instance is restarting every ~12 hours.

Here’s what the instance is doing:

  1. Poll 3 RSS feeds every 60 seconds
  2. A Code node gets the item at index 0 from the RSS feed
  3. Check if there’s a new item by comparing it to a record in an external DB. If there’s no new item, do nothing
  4. If there is a new item, update the external DB and send an HTTP request or an email to myself

I’ll enable the metrics endpoint and monitoring on the app and report back.

Is there any way to tell the instance to free up memory to prevent crashing?
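(One thing I ran across in the docs is capping the Node.js heap so garbage collection kicks in before the container hits its limit; the 1024 MB value here is just an example:)

  # cap the V8 heap at ~1 GB (value is an example, not a recommendation)
  docker run -d --name n8n -e NODE_OPTIONS=--max-old-space-size=1024 -p 5678:5678 docker.n8n.io/n8nio/n8n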

Additionally…

I suspect the workflow is keeping all the RSS feed data (~30 records per run) in memory, and that’s what’s causing the memory bloat. Do you think so as well?

Should I try setting

N8N_DEFAULT_BINARY_DATA_MODE=filesystem

or

EXECUTIONS_DATA_PRUNE_MAX_COUNT=(something like 100)?

It’s unclear whether the pruning affects binary data stored in memory; could you please let me know? :slight_smile:
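(For reference, if I test this I think they’d be set together roughly like so; from what I can tell, EXECUTIONS_DATA_PRUNE has to be enabled for the max-count setting to do anything:)

  N8N_DEFAULT_BINARY_DATA_MODE=filesystem
  EXECUTIONS_DATA_PRUNE=true
  EXECUTIONS_DATA_PRUNE_MAX_COUNT=100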