Supabase nodes taking 3x longer with no code changes — Starter plan, 4 days open ticket with no response


Describe the problem/error/question

I’ve been experiencing a sudden and persistent latency degradation in my Supabase nodes (n8n-nodes-base.supabase) with no changes to my workflow logic. I’m on the Starter plan and have had a support ticket open for 4 days with no response, so I’m hoping someone in the community has seen the same behavior.
What happened:
On March 3, 2026, between approximately 19:58 and 21:30 UTC-3, something changed. Executions before that window ran normally; executions after it are consistently ~50% slower on all Supabase nodes. A second degradation event happened by March 9, pushing the same nodes to ~3x their original duration.
Measured before/after (same workflow, same payload size):
| Node | Before | After (Mar 3) | After (Mar 9) |
| --- | --- | --- | --- |
| INSERT t_message (in) | ~8,100 ms | ~12,300 ms | ~23,700 ms |
| INSERT t_message (out) | ~8,200 ms | ~12,300 ms | ~23,800 ms |
| UPDATE t_customer | ~3,400 ms | ~5,100 ms | ~9,800 ms |
| UPDATE session_data (JSONB) | ~19,700 ms | ~29,900 ms | ~58,100 ms |
What I’ve already ruled out:
- No changes to nodes 5.10–5.13 between the two executions (confirmed via the workflowData snapshots stored inside each execution)
- Payload sizes are nearly identical across all executions (~337–755 chars)
- The degradation is proportional across ALL Supabase nodes simultaneously (~50% across the board), which points to infrastructure rather than a specific query
- The workflow did gain new nodes in a separate section (8.11.x) around the inflection point, but those nodes have no connection to the affected section
My setup:
- n8n cloud, Starter plan
- Supabase (PostgreSQL + pgvector)
- Workflow: WF_00_MAIN, ~135 nodes
- All Supabase operations are standard INSERT/UPDATE via the built-in Supabase node
My suspicion:
This looks like a Supabase-side event (autovacuum, dead tuple bloat, connection pool saturation, or a maintenance window) that wasn’t communicated and hasn’t recovered. But since I can’t access Supabase slow query logs or pg_stat on the Starter plan, I can’t confirm it.
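One thing I can still run from the Supabase SQL Editor even without pg_stat access is a raw table and index size check, to at least see whether table growth lines up with the slowdown. A minimal sketch, using the table names from my measurements (it assumes the tables are ordinary tables in the default search path):

```sql
-- Total on-disk footprint (heap + indexes + TOAST) for the tables
-- behind the slow nodes. Run it now and again in a few days to
-- see whether growth tracks the latency jumps.
select
  relname                                     as table_name,
  pg_size_pretty(pg_total_relation_size(oid)) as total_size,
  pg_size_pretty(pg_relation_size(oid))       as heap_size,
  pg_size_pretty(pg_indexes_size(oid))        as index_size
from pg_class
where relkind = 'r'
  and relname in ('t_message', 't_customer', 'session_data')
order by pg_total_relation_size(oid) desc;
```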
Questions:
1. Has anyone else on the Starter plan seen a sudden ~50% jump in Supabase node latency around this period?
2. Is there a known issue with Supabase node performance degrading over time (table growth, missing VACUUM)?
3. Is there a recommended way to diagnose this from within n8n, without direct DB access?
Any help appreciated; the support ticket silence is frustrating for a production workflow. 🙏

What is the error message (if any)?

None; executions complete successfully, just much slower.
Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node


Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system:

Hey, there was actually a confirmed Supabase incident on March 3 in us-east-2, so that lines up exactly with your first degradation. Also, you actually can query pg_stat_statements from the SQL Editor in the Supabase dashboard, even on Starter; it's enabled by default on all projects, so that should help you narrow down whether it's query-level or infra.
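For example, something along these lines in the SQL Editor should show whether the statements themselves got slower (column names assume Postgres 13+, where it's mean_exec_time; older versions call it mean_time):

```sql
-- Slowest statements touching the affected tables, by mean execution time.
select
  calls,
  round(mean_exec_time::numeric, 1)  as mean_ms,
  round(total_exec_time::numeric, 1) as total_ms,
  left(query, 120)                   as query_start
from pg_stat_statements
where query ilike any (array['%t_message%', '%t_customer%', '%session_data%'])
order by mean_exec_time desc
limit 20;
```

If mean_ms roughly tripled for those statements, the database itself got slower; if it barely moved, the extra time is being spent between n8n and the database (pooling, network, PostgREST).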

Agreed, this sounds like connection pooling or infrastructure, given the proportional degradation across all nodes at once. I've been running n8n locally (v2.2.4) and write to Supabase via HTTP Request nodes directly. It may have been mentioned already, but try swapping the native Supabase node for an HTTP Request node and compare timings; if the HTTP path is just as slow, it's not the node. Otherwise, yes, check your tables' dead tuple counts via pg_stat_user_tables in the SQL Editor.
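A rough sketch of both checks (dead tuples plus connection usage, since pool saturation is on the table; table names taken from your timings above):

```sql
-- 1) Dead tuple bloat and autovacuum recency for the affected tables.
select
  relname,
  n_live_tup,
  n_dead_tup,
  last_vacuum,
  last_autovacuum,
  last_autoanalyze
from pg_stat_user_tables
where relname in ('t_message', 't_customer', 'session_data')
order by n_dead_tup desc;

-- 2) Connection usage by state, for the pool-saturation theory.
select state, count(*) as connections
from pg_stat_activity
group by state
order by connections desc;
```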