Describe the problem/error/question
I’ve been experiencing a sudden and persistent latency degradation in my Supabase nodes (n8n-nodes-base.supabase) with no changes to my workflow logic. I’m on the Starter plan and have had a support ticket open for 4 days with no response, so I’m hoping someone in the community has seen the same behavior.
What happened:
On March 3, 2026, between approximately 19:58 and 21:30 UTC-3, something changed. Executions before that window ran normally; executions after it are consistently ~50% slower on all Supabase nodes. A second degradation event happened by March 9, pushing the same nodes to ~3x their original duration.
Measured before/after (same workflow, same payload size):
| Node | Before | After (Mar 3) | After (Mar 9) |
|---|---|---|---|
| INSERT t_message (in) | ~8,100ms | ~12,300ms | ~23,700ms |
| INSERT t_message (out) | ~8,200ms | ~12,300ms | ~23,800ms |
| UPDATE t_customer | ~3,400ms | ~5,100ms | ~9,800ms |
| UPDATE session_data (JSONB) | ~19,700ms | ~29,900ms | ~58,100ms |
What I’ve already ruled out:
- No changes to nodes 5.10–5.13 between the two executions (confirmed via the workflowData snapshot stored inside each execution)
- Payload sizes are nearly identical across all executions (~337–755 chars)
- The degradation is proportional across ALL Supabase nodes simultaneously (~50% across the board on Mar 3), which points at infrastructure, not one specific query
- New nodes were added to a separate section of the workflow (8.11.x) around the inflection point, but they have no connection to the affected section
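The proportionality claim can be checked directly from the numbers in the table above. A quick sketch (plain JavaScript, runnable as-is in an n8n Code node) computes the per-node slowdown ratio for each event; a tight spread across nodes is what suggests a shared cause:

```javascript
// Per-node timings (ms) copied from the table above.
const timings = [
  { node: 'INSERT t_message (in)',  before: 8100,  mar3: 12300, mar9: 23700 },
  { node: 'INSERT t_message (out)', before: 8200,  mar3: 12300, mar9: 23800 },
  { node: 'UPDATE t_customer',      before: 3400,  mar3: 5100,  mar9: 9800  },
  { node: 'UPDATE session_data',    before: 19700, mar3: 29900, mar9: 58100 },
];

// Slowdown ratio per node for each degradation event.
const ratios = timings.map(t => ({
  node: t.node,
  mar3: +(t.mar3 / t.before).toFixed(2),
  mar9: +(t.mar9 / t.before).toFixed(2),
}));

// Spread = max ratio / min ratio across nodes; close to 1.0 means the
// slowdown is uniform, i.e. infrastructure-level rather than query-level.
const spread = key => {
  const vals = ratios.map(r => r[key]);
  return +(Math.max(...vals) / Math.min(...vals)).toFixed(3);
};

console.log(ratios);
console.log('spread mar3:', spread('mar3'), 'spread mar9:', spread('mar9'));
```

The ratios come out at ~1.5x for every node after Mar 3 and ~2.9x after Mar 9, with a spread under 3% in both cases.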
My setup:
- n8n cloud, Starter plan
- Supabase (PostgreSQL + pgvector)
- Workflow: WF_00_MAIN (~135 nodes)
- All Supabase operations are standard INSERT/UPDATE via the built-in Supabase node
My suspicion:
This looks like a Supabase-side event (autovacuum, dead tuple bloat, connection pool saturation, or a maintenance window) that wasn’t communicated and hasn’t recovered. But since I can’t access Supabase slow query logs or pg_stat on the Starter plan, I can’t confirm it.
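If the bloat theory is right, it should show up in the standard `pg_stat_user_tables` statistics view, which doesn't need dashboard access, only a direct Postgres connection (Supabase exposes the connection string in project settings, assuming that's reachable on my plan). A sketch of the check I have in mind, built as a query string I could feed to a one-off n8n Postgres node; the table names are the ones from my table above:

```javascript
// Builds a diagnostic query against pg_stat_user_tables (standard Postgres
// statistics view): live/dead tuple counts plus last (auto)vacuum/analyze
// timestamps for the given tables. Single quotes are doubled for safety.
function buildBloatCheckQuery(tables) {
  const list = tables.map(t => `'${t.replace(/'/g, "''")}'`).join(', ');
  return (
    'SELECT relname, n_live_tup, n_dead_tup, ' +
    'last_vacuum, last_autovacuum, last_autoanalyze ' +
    'FROM pg_stat_user_tables ' +
    `WHERE relname IN (${list}) ` +
    'ORDER BY n_dead_tup DESC;'
  );
}

const query = buildBloatCheckQuery(['t_message', 't_customer', 'session_data']);
console.log(query);
```

A high `n_dead_tup` relative to `n_live_tup`, or a `last_autovacuum` that predates March 3, would support the theory; a clean result would point back at pooling or network instead.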
Questions:
- Has anyone else on the Starter plan seen a sudden ~50% jump in Supabase node latency around this period?
- Is there a known issue with Supabase node performance degrading over time (table growth, missing VACUUM)?
- Any recommended way to diagnose this from within n8n without direct DB access?
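On the last question (diagnosing from inside n8n), the best I've come up with so far is building my own baseline: collecting recent duration samples per node and summarizing them on each run, so at least the drift is quantified. A minimal sketch, plain JS usable inside a Code node; the sample array here is illustrative:

```javascript
// Summarizes a list of node duration samples (ms) into count, median and
// p95, so successive runs can be compared for drift over time.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const pick = q =>
    sorted[Math.min(sorted.length - 1, Math.floor(q * sorted.length))];
  return { n: sorted.length, median: pick(0.5), p95: pick(0.95) };
}

// Illustrative: durations for one node across recent executions,
// straddling the Mar 3 inflection point.
console.log(summarize([8100, 8200, 8150, 12300, 12250, 12400]));
```

If there's a cleaner way to get per-node durations out of n8n cloud execution data, I'd love to hear it.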
Any help appreciated — the support ticket silence is frustrating for a production workflow.
What is the error message (if any)?
Please share your workflow
(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)
Share the output returned by the last node
Information on your n8n setup
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: