Performance Issues with PostgreSQL Node Inserts – Possible Refactoring?

Describe the problem/error/question

I’ve noticed that inserting data into a PostgreSQL database using the PostgreSQL node in n8n can be quite slow. For example, inserting 1,000 records takes around 4 seconds, regardless of which batch option I choose (single, independent, or transaction).

After reviewing the behavior, it seems the current implementation performs extensive validation and checks, which significantly increases processing time.

To test an alternative, I used a Code node to build a single query like:
INSERT INTO table (columns) VALUES (row1), (row2), ..., (rowN)

Then I executed this query through the PostgreSQL node. This approach reduced the insertion time from 4 seconds to about 500 ms. When testing with 200,000 records, the time dropped from roughly 10 minutes (even with batch splits of 1,000) to 30 seconds.
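For reference, here is a minimal sketch of what such a Code node can look like (run once for all items). The table name, column list, and escaping helper are illustrative assumptions, not the exact code from my workflow:

// n8n Code node (JavaScript, "Run Once for All Items") – illustrative sketch only.
// Builds one multi-row INSERT statement from all incoming items.
const columns = ['col_a', 'col_b']; // placeholder column names

// Very basic escaping for NULL/string values; adapt to your real data types.
const escape = (v) =>
  v === null || v === undefined ? 'NULL' : `'${String(v).replace(/'/g, "''")}'`;

const rows = $input.all().map(
  (item) => `(${columns.map((c) => escape(item.json[c])).join(', ')})`
);

const query = `INSERT INTO my_table (${columns.join(', ')}) VALUES ${rows.join(', ')}`;

// Hand the statement to a PostgreSQL node running the "Execute Query" operation.
return [{ json: { query } }];

The PostgreSQL node that follows can then reference the generated statement as an expression, e.g. {{ $json.query }}.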

However, this workaround has side effects:

  1. Increased CPU and RAM usage when running everything in a single workflow.

  2. Using sub-workflows or webhooks creates unnecessary workflow history entries for what is essentially one operation.

Question:
Would you consider refactoring the PostgreSQL insert logic to make it more efficient? Optimizing this could:

  • Reduce overhead from excessive validation.

  • Improve performance for large inserts.

  • Keep everything within a single workflow without hacks.

This would make PostgreSQL inserts in n8n much more practical for high-volume data ingestion.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.2.6 (but the issue is independent of the n8n version; it is specific to the PostgreSQL node)
  • Database (default: SQLite): postgresql
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): docker
  • Operating system: Red Hat

Hi @Nifemi_Emmanuel,

Thanks for the suggestions. If you look at the workflow I shared, that is exactly what I'm doing. But instead of having to rely on custom Code, the much better option would be to enhance the Postgres node so the workflow stays clean.

The custom-Code solution that builds one query is not always so straightforward, as the query has its length limitation, so usually I use Loop For Each → Aggregation (by 10000) → Code (query for 10000 items).
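As an alternative to the Loop/Aggregation combination, the chunking can also be done inside a single Code node that emits one query per batch; a rough sketch, with the chunk size, table, and columns as assumptions:

// Illustrative sketch: split incoming items into chunks and emit one INSERT per chunk,
// so no single statement exceeds the query length limit. Names and sizes are placeholders.
const CHUNK_SIZE = 10000;
const columns = ['col_a', 'col_b'];
const escape = (v) =>
  v === null || v === undefined ? 'NULL' : `'${String(v).replace(/'/g, "''")}'`;

const all = $input.all();
const out = [];
for (let i = 0; i < all.length; i += CHUNK_SIZE) {
  const rows = all
    .slice(i, i + CHUNK_SIZE)
    .map((item) => `(${columns.map((c) => escape(item.json[c])).join(', ')})`);
  out.push({
    json: { query: `INSERT INTO my_table (${columns.join(', ')}) VALUES ${rows.join(', ')}` },
  });
}

// One output item per chunk; a PostgreSQL "Execute Query" node then runs once per item.
return out;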


Thanks for the info. I was also trying another workaround for large SQL inserts: create a CSV file, copy it directly to the SQL server, and run COPY FROM CSV on the server. But this only works when you have full access to the SQL server, which in most cases I do not.
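For completeness, this is roughly what that server-side COPY looks like once the CSV sits somewhere the PostgreSQL server can read it (table, columns, and path are placeholders); it could be issued through the PostgreSQL node's Execute Query operation:

// Illustrative only: COPY FROM a file path is resolved on the *database server's* filesystem,
// which is why this approach needs full access to the SQL server host.
const copyQuery = `
  COPY my_table (col_a, col_b)
  FROM '/var/lib/postgresql/import/data.csv'
  WITH (FORMAT csv, HEADER true)
`;
return [{ json: { query: copyQuery } }];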

Is there any option or channel where I can submit this as a feature request?


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.