Issue with PostgreSQL database

Good day!
I've run into an issue with my n8n/PostgreSQL setup.
Yesterday I started getting errors in one of my workflows:
ERROR: Execution stopped at this node

“n8n may have run out of memory while executing it. More context and tips on how to avoid this in the docs”

I also checked the PostgreSQL container logs and found this:
ERROR: duplicate key value violates unique constraint "PK_b21ace2e13596ccd87dc9bf4ea6"
2024-02-14 11:17:16.341 UTC [513009] DETAIL: Key ("webhookPath", method)=(b7698ece-08e0-4275-8636-b90adfec5f59, POST) already exists.
2024-02-14 11:17:16.341 UTC [513009] STATEMENT: INSERT INTO "public"."webhook_entity"("workflowId", "webhookPath", "method", "node", "webhookId", "pathLength") VALUES ($1, $2, $3, $4, DEFAULT, DEFAULT)

The same error repeats with many variations of the webhook path, not just b7698ece-08e0-4275-8636-b90adfec5f59.
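
If you want to see which workflow currently holds one of these webhook registrations, you can query the table from the log directly. A minimal sketch, assuming the Postgres container is named postgres and that both the database and the user are n8n (adjust to your compose file):

    docker exec -it postgres psql -U n8n -d n8n -c \
      "SELECT \"workflowId\", \"webhookPath\", \"method\" FROM webhook_entity WHERE \"webhookPath\" = 'b7698ece-08e0-4275-8636-b90adfec5f59';"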

Information on your n8n setup

  • n8n version: 1.18.2
  • EXECUTIONS_PROCESS=main
  • EXECUTIONS_PROCESS=queue
  • Postgres
  • Docker compose
  • Ubuntu


Hello @David_Semenenko,

The error is pretty self-explanatory: at some point n8n ran out of memory. That may have happened because of this workflow, or due to any other one. You should check all your workflows for large amounts of data stored in the workflow or its executions, and tune the execution saving settings for each of the big flows, or for the ones that run very often. The issue is usually more complex than just one flow.
Check this article for more details: Memory-related errors | n8n Docs

And note that this error does not mean that this workflow was the root cause. It just means that during this workflow's execution, n8n had no free memory to work with.
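
If the instance genuinely needs more headroom, the linked docs also describe raising the Node.js heap limit via an environment variable. A minimal sketch for a Docker Compose setup; the 4096 MB value is only an example, not a recommendation:

    # Add to the n8n service's environment and recreate the container.
    # The value is the V8 heap limit in MB.
    NODE_OPTIONS=--max-old-space-size=4096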

Hi @barn4k,
Thanks for the response.

I think the error happens with this particular workflow specifically, because it appears right after I run it, and it produces thousands of executions per hour. I am self-hosting n8n, and I have what I'd consider enough resources there, about 16 GB of RAM and 100+ GB of SSD. I believe that's enough, but still something is wrong.

Then try to reduce the amount of data you are receiving during the executions. As a first step, try disabling the “Save successful executions” setting.
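
If you want that as an instance-wide default rather than per workflow, the same behaviour can be set with environment variables. A sketch for a Docker Compose setup; these only change the defaults, and each workflow's own settings still override them:

    # In the n8n service's environment (or .env file):
    EXECUTIONS_DATA_SAVE_ON_SUCCESS=none    # drop data for successful runs
    EXECUTIONS_DATA_SAVE_ON_ERROR=all       # keep failed runs for debugging
    EXECUTIONS_DATA_SAVE_ON_PROGRESS=false  # don't save progress snapshots mid-run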

16 GB of RAM could be enough, but it depends on the amount of data stored in each execution. Check with a memory monitoring tool how memory usage changes during the executions.
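
With Docker Compose, docker stats is the quickest way to watch that. A minimal sketch; the container name n8n is an assumption, check yours with docker ps:

    docker stats n8n --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"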
