Subworkflow does not release memory

Hi, I built a workflow to fetch all posts on a forum via an open API and insert them into a PostgreSQL database. The workflow quickly throws a JavaScript out-of-memory error.

  • The API does not require any credentials, in case you want to test it
  • The main workflow only uses the endpoint cursors as data
  • The subworkflow fetches / processes / stores all the data (and should clear memory for the next batch); a rough sketch follows this list
  • The subworkflow handles data in batches of 25
  • For reference, our server has 10 GB of RAM allocated; the workflow seems to crash after around 7,300 rows have been processed.
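
For context, here is a rough sketch of what the fetch step of such a sub-workflow could look like as an n8n Code node. This is only an illustration under assumptions: the API URL, the `cursor`/`limit` parameters, and the post field names are invented, not taken from the workflow screenshots below.

```javascript
// Hypothetical sketch of the sub-workflow's fetch step (Code node).
// The URL, query parameters, and field names are assumptions for
// illustration -- they are not taken from the actual workflow.
const cursor = $input.first().json.cursor;

// Fetch one batch of 25 posts for the cursor handed over by the main workflow.
const response = await this.helpers.httpRequest({
  method: 'GET',
  url: `https://forum.example.com/api/posts?cursor=${encodeURIComponent(cursor)}&limit=25`,
  json: true,
});

// Emit one item per post so a PostgreSQL node downstream can insert them row by row.
return response.posts.map((post) => ({
  json: {
    id: post.id,
    author: post.author,
    body: post.body,
  },
}));
```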

Note: This seems to be a similar issue to Release memory in subworkflow - n8n cloud

Am I doing something wrong here, or is there another way to process large amounts of data properly?

Main workflow

Subworkflow

Information on your n8n setup

  • n8n version: 0.221.2
  • Database: PostgreSQL
  • n8n EXECUTIONS_PROCESS setting: Queue
  • Running n8n via: Docker
  • Operating system: Ubuntu 18.04.5 LTS

Hi @Eliott_Audry, I am so sorry for this. I tried manually running your workflow and could indeed see my memory consumption creeping up continuously, more than expected, even when not using queue mode. I suspect this could be related to the Code node. Can you try adding a Set node at the end of your sub-workflow and verify whether this reduces the memory consumption for you?

Something like this:
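
A minimal sketch of such a Set node as exported workflow JSON — the node name and the `done` field are placeholders, and the exact JSON shape may vary between n8n versions:

```json
{
  "name": "Clear Data",
  "type": "n8n-nodes-base.set",
  "typeVersion": 1,
  "parameters": {
    "keepOnlySet": true,
    "values": {
      "boolean": [
        {
          "name": "done",
          "value": true
        }
      ]
    }
  }
}
```

With “Keep Only Set” enabled, the sub-workflow hands only this tiny item back to the parent execution instead of all the fetched posts, which presumably lets each batch's data be garbage-collected between iterations.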

I am currently on iteration 328, with memory usage staying fairly consistent at around 480 MB:


Thank you @MutedJam!
I don’t really understand why, but this does indeed seem to fix the issue : )
Very happy to finally be able to use this workflow.


To be honest, I don’t fully understand this one either.

Glad to see it’s a “not just me” situation, though. I shall add this possible memory leak to our bug tracker for a closer look and a fix by the engineering team :slight_smile:

