Weird issue with Queue mode -> Huge Memory usage after workflow ended

We are having an issue when processing 10k records. The flow works fine, but once it is done it causes crashes because of memory usage. Without queue mode enabled it is fine.
Everything below is from the same server. I have updated the server to version 235, but the results are the same on 225 and 228.
There also seems to be a dip before it peaks, and the executions are actually completed successfully according to the executions tab.

Please let me know if you need any further info.

Queue Mode

1st run with limited data


2nd run with 2x the data


You can see that the normal usage is a bit higher, as expected, but now there is a huge spike after the workflow was done, as marked in the image.
I do understand that the graph doesn't tell the full story, but it is quite worrying.

3rd run with a bit more data (normal load for this flow, excluding the rest of the flow after the NoOp)


So another graph that shows a bit more data, as expected, and then another huge peak after the workflow was done. Execution logs show successful workflows for all of the above.

Regular mode

Run with the same data as the 3rd run above, same server, same everything, except queue mode disabled.


No issue at all; there is a jump, but nowhere near as extreme as the one in queue mode.

Server info:
AWS EC2
2 vCPU
4 GB

We had a similar issue with garbage collection on 1 vCPU or less, so this could be the issue again, but it seems very odd to me.
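For reference, that earlier GC problem we worked around by giving the Node.js process a bigger heap. This is only a sketch of that workaround, assuming the official n8n Docker image and a compose setup; the heap size is an example value sized for this 4 GB box, not a recommendation:

```yaml
# Sketch: raise the Node.js old-space heap limit so GC pressure is less likely
# to take the container down. Example value only (~3 GB of a 4 GB instance).
services:
  n8n:
    image: n8nio/n8n
    environment:
      - NODE_OPTIONS=--max-old-space-size=3072
```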

Information on your n8n setup

  • n8n version: 225 + 228 + 235
  • Database (default: SQLite): Postgres (Amazon RDS)
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: default n8n Docker image
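For context, the queue-mode setup looks roughly like this. A minimal sketch only, not our exact compose file; service names, hostnames and credentials are placeholders:

```yaml
# Sketch of the queue-mode topology: one main instance plus one worker,
# both pointing at the same Postgres (RDS in our case) and at a Redis
# instance used as the Bull queue. Hostnames and credentials are placeholders.
services:
  redis:
    image: redis:6-alpine

  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=my-rds-endpoint.example.com
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=change-me
    ports:
      - "5678:5678"

  n8n-worker:
    image: n8nio/n8n
    # Depending on the image version this may need to be "n8n worker".
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=my-rds-endpoint.example.com
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=change-me
```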

By accident I found the fix:
adding EXECUTIONS_PROCESS=main fixes the issue.
I always thought main was the default for queue mode; apparently we still need to set it explicitly to resolve this.
Hopefully the planned removal of this env variable and the default use of main will also fix this issue.
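For anyone else hitting this, the change was literally one environment line on the n8n container(s); a sketch, using the same service name as the compose example above:

```yaml
# Sketch: the single line that made the difference for us.
# EXECUTIONS_PROCESS=main runs executions in the main process instead of
# spawning a separate child process per execution ("own").
services:
  n8n-main:
    environment:
      - EXECUTIONS_PROCESS=main
```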

To be clear, it does still jump up in memory, but not so extreme that it crashes the server. :slight_smile:


Hi @BramKn :wave: I’m really sorry that you ran into this! While it seems like it’s resolved, I also wanted to give a quick ping to @krynble to see if this is expected behaviour :thinking:


Hi @EmeraldHerald

Yeah, it is not completely resolved, but at least we can continue with the flows.
The jump is still quite high compared to regular mode as well.

PS: I have also deleted the executions in the database to see if that works. Sadly it didn't.
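(Side note in case it helps someone: instead of deleting rows manually, n8n can prune old execution data itself. A sketch with example values; EXECUTIONS_DATA_MAX_AGE is in hours:)

```yaml
# Sketch: let n8n prune old execution data itself rather than deleting rows
# manually. Example values; 168 hours = 7 days.
services:
  n8n-main:
    environment:
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
```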


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.