Release memory in subworkflow - n8n cloud

Hi,
I built a workflow on n8n cloud that transfers data from Smartsheet (using the HTTP Request node) to a database (I tried with both Postgres and Snowflake).

The workflow failed with a memory error. I can understand that in general, but here the amount of data is not that big (18k rows, 5 columns).

I created a sub-workflow, called from my main workflow, where I'm handling the pagination.
For example: 18 calls of 1,000 rows, or 36 calls of 500 rows, and so on.
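As a rough sketch of how the parent workflow could drive that pagination (the numbers, field names and endpoint details here are my assumptions, not the actual workflow), a Code node could emit one item per page and the sub-workflow could then be executed once per item:

```javascript
// Hypothetical Code node in the parent workflow: build one item per page so
// that each item drives one sub-workflow execution. Sizes are assumptions.
const totalRows = 18000;  // total rows in the Smartsheet sheet
const pageSize = 1000;    // rows fetched per sub-workflow call
const pages = [];

for (let page = 1; page <= Math.ceil(totalRows / pageSize); page++) {
  // The Smartsheet "Get Sheet" endpoint accepts page/pageSize query parameters,
  // which the HTTP Request node in the sub-workflow can read from this item.
  pages.push({ json: { page, pageSize } });
}

return pages; // 18 items -> 18 sub-workflow executions of 1,000 rows each
```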

The problem is that it's not solving the issue: the workflow stops after around 12k rows, so I guess the memory of each sub-workflow is not released?

I also added a “Wait” node between each sub-workflow execution, but it doesn't solve the issue.

Am I missing something?

The Wait node will not solve the problem; if anything, it will make it worse, so I would remove it.

A sub-workflow will always “release” the memory. But you have to make sure not to send all the data back from the sub-workflow into the parent workflow, otherwise it defeats the purpose of the whole sub-workflow construct.
Meaning you have to make sure that the last node in your sub-workflow returns an empty item, or items with very little data. That can be done with a Code node or the Set node.
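For example, as the last node of the sub-workflow, a Code node along these lines (a minimal sketch, not the exact node from this thread) would drop the fetched rows and hand back only a tiny confirmation item:

```javascript
// Final Code node of the sub-workflow: ignore the incoming page of rows and
// return a single small item, so the parent workflow never has to hold the
// full dataset in memory again.
return [{ json: { output: 'ok' } }];
```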

Hope that is helpful.


Hi Jan, thanks for the quick reply. My sub-workflow only returns one line, based on a Code node (I saw some old posts where you advise that). That's why I really don't understand the behavior here. My main workflow is very “light” in terms of memory consumption.

My main workflow looks like this:


and the sub-workflow is:

And one page of 1,000 records from the Smartsheet API is only 22 KB…

Hi @Jeremy_controlc.io, the response size you see in tools such as Postman does not translate directly to memory usage in n8n. n8n would, for example, keep multiple copies of the dataset in memory (one for each node in your workflow). You might want to check out the documentation page linked in the error message (Memory-related errors - n8n Documentation) for more information on what exactly contributes to memory consumption.

In order to better understand your specific problem it'd be great if you could share the JSON representations of the workflows with which the problem can be reproduced (rather than screenshots). You can simply select a workflow on your n8n canvas and then press Ctrl+C to copy it (and then use Ctrl+V on the forum to paste it).

Sure, here it is for the main flow:

And the sub-workflow is:

Also, I've upgraded the cloud environment and now we don't have that issue anymore (at least not for the moment).

I'm still having the out-of-memory issue.
My sub-workflow only returns a {“output”:“ok”} JSON and the parent workflow is very, very simple. So for me, the sub-workflow is not releasing its memory once finished.

By any chance, is there any difference in behaviour between a Webhook/HTTP Request combination and a Workflow Trigger/Execute Workflow setup?

A big advantage of the Workflow Trigger/Execute Workflow approach for me is the lower number of workflow executions counted and of active workflows.

And I forgot to mention that when monitoring memory usage on Docker, memory increases linearly.

Hi @Jeremy_controlc.io, I am very sorry to hear the sub-workflow didn’t yield the expected result. I can’t think of many other angles for improving memory consumption further here to be honest.

The Code node in particular is memory heavy (it's on the naughty list on the memory page in our documentation). So you could consider replacing Code nodes where possible (for example, in your sub-workflow, use a Set node executing only once to prepare the result). Where that's not possible, you could consider merging them (in your parent workflow, use one Code node instead of three).
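To illustrate the merging idea (the field names below are purely hypothetical, since the actual Code nodes weren't shared), three separate filter/reshape/rename Code nodes could be collapsed into a single pass like this:

```javascript
// Hypothetical single Code node replacing three: filter, reshape and rename
// in one pass, so the full item list is only re-materialised once instead of
// three times.
const out = [];

for (const item of $input.all()) {
  const row = item.json;

  // step 1: keep only rows with an id (previously its own Code node)
  if (!row.id) continue;

  // steps 2 + 3: rename and reshape the fields in the same pass
  out.push({
    json: {
      id: row.id,
      name: row.name,
      updatedAt: row.modifiedAt,
    },
  });
}

return out;
```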

I'd be surprised if replacing the Execute Workflow node with an HTTP Request/Webhook combination had a noticeable effect, but it's always worth testing.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.