Describe the problem/error/question
I’ve noticed that inserting data into a PostgreSQL database using the PostgreSQL node in n8n can be quite slow. For example, inserting 1,000 records takes around 4 seconds, regardless of which batch option I choose (single, independent, or transaction).
After reviewing the behavior, it seems the current implementation performs extensive validation and checks, which significantly increases processing time.
To test an alternative, I used a Code node to build a single multi-row query like:
INSERT INTO table (columns) VALUES (value1), (value2), ..., (valueN)
Then I executed this query through the PostgreSQL node. This approach reduced the insertion time from 4 seconds to about 500 ms. When testing with 200,000 records, the time dropped from roughly 10 minutes (even with batch splits of 1,000) to 30 seconds.
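For illustration, here is a minimal sketch of the kind of Code node logic that builds such a query. The table name `my_table` and the column list are placeholders for my actual schema, and the quoting is deliberately naive (it assumes trusted input); a hardened version would use proper parameterization or the driver's escaping helpers:

```javascript
// Code node (mode: Run Once for All Items).
// Builds one multi-row INSERT from all incoming items.
const items = $input.all();

// Placeholder columns — replace with your actual schema.
const columns = ['id', 'name', 'created_at'];

// Naive SQL literal escaping: NULLs pass through, numbers are used
// as-is, strings get single quotes doubled. Assumes trusted input.
const escape = (v) => {
  if (v === null || v === undefined) return 'NULL';
  if (typeof v === 'number') return String(v);
  return `'${String(v).replace(/'/g, "''")}'`;
};

// One "(v1, v2, ...)" tuple per incoming item.
const rows = items.map(
  (item) => `(${columns.map((c) => escape(item.json[c])).join(', ')})`
);

const query = `INSERT INTO my_table (${columns.join(', ')}) VALUES ${rows.join(', ')};`;

// Pass the query downstream so the PostgreSQL node can run it.
return [{ json: { query } }];
```

The PostgreSQL node then runs the generated statement via its Execute Query operation, referencing the field with an expression such as `{{ $json.query }}`.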
However, this workaround has side effects:
- Increased CPU and RAM usage when running everything in a single workflow.
- Using sub-workflows or webhooks creates unnecessary workflow history entries for what is essentially one operation.
Question:
Would you consider refactoring the PostgreSQL insert logic to make it more efficient? Optimizing this could:
- Reduce overhead from excessive validation.
- Improve performance for large inserts.
- Keep everything within a single workflow without hacks.
This would make PostgreSQL inserts in n8n much more practical for high-volume data ingestion.
What is the error message (if any)?
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: 2.2.6 (though the issue is independent of the n8n version; it comes from the PostgreSQL node itself)
- Database (default: SQLite): PostgreSQL
- n8n EXECUTIONS_PROCESS setting (default: own, main): queue
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Red Hat