Supabase Database Connection Issues in Queue Mode

I have n8n deployed on fly.io, with one main instance and one to three worker instances that scale up based on queue depth.

I use Postgres via Supabase for the n8n database.

n8n version is 1.105.2

As execution volume has increased lately, I have noticed a lot of connection issues: timeouts, or the web interface becoming unresponsive. All of these seem to stem from database connection problems.

My Supabase database is on the Pro plan, on a Micro instance (1 GB RAM, 2 vCPUs).

Within Supabase, Pool Size is set to 15, which is the default: “The maximum number of connections made to the underlying Postgres cluster, per user+db combination. Pool size has a default of 15 based on your compute size of Micro.” Pool Size can be adjusted in Supabase.

Max Client Connections in Supabase is 200 and cannot be adjusted.

For connecting to Supabase from n8n, I am using their transaction pooler (port 6543).

For fly.io, I have separate fly.toml files for the main instance and the worker instances.

Here are some of the key points from the main fly.toml:

```toml
# DATABASE CONFIGURATION - Common for all processes
DB_TYPE = 'postgresdb'
DB_POSTGRESDB_DATABASE = 'postgres'
DB_POSTGRESDB_HOST = '(the supabase db url)'
DB_POSTGRESDB_PORT = '6543'
DB_POSTGRESDB_SCHEMA = 'n8n'
DB_POSTGRESDB_USER = 'postgres'

# DATABASE CONNECTION POOL CONFIGURATION
DB_POSTGRESDB_POOL_SIZE = '4'
DB_POSTGRESDB_CONNECTION_TIMEOUT_MILLIS = '40000'
DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT = '60000'
```

For the worker instances, it is:

```toml
# DATABASE CONFIGURATION - Common for all processes
DB_TYPE = 'postgresdb'
DB_POSTGRESDB_DATABASE = 'postgres'
DB_POSTGRESDB_HOST = '(the supabase db url)'
DB_POSTGRESDB_PORT = '6543'
DB_POSTGRESDB_SCHEMA = 'n8n'
DB_POSTGRESDB_USER = 'postgres'

# DATABASE CONNECTION POOL CONFIGURATION
DB_POSTGRESDB_POOL_SIZE = '3'
DB_POSTGRESDB_CONNECTION_TIMEOUT_MILLIS = '40000'
DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT = '60000'
```

The only difference in DB settings between main and worker is the pool size: 4 for main, 3 for each worker.
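As a quick sanity check on the connection math, here is the worst case implied by the numbers above (main pool of 4, up to three workers at 3 each, against Supabase's pool size of 15 and client limit of 200). This is just arithmetic on this setup's figures, not a measurement:

```python
# Worst-case client connections the n8n fleet can open against the
# Supabase transaction pooler, using the pool sizes from this setup.
MAIN_POOL = 4      # DB_POSTGRESDB_POOL_SIZE on the main instance
WORKER_POOL = 3    # DB_POSTGRESDB_POOL_SIZE on each worker
MAX_WORKERS = 3    # fly.io scales between 1 and 3 workers

worst_case = MAIN_POOL + MAX_WORKERS * WORKER_POOL
print(worst_case)  # 13: under the pool size of 15 and far under 200 clients
```

Note how tight the headroom is: one more worker (4 + 4 × 3 = 16) would exceed the backend pool of 15, so if autoscaling is ever raised, transactions could start queuing or timing out at the pooler.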

Previously, I had connection issues without those pool sizes explicitly set.

In order to have the database connection perform more reliably, are there changes to Supabase or Fly.io that I should make?

Since I'm using port 6543, the transaction pooler opens and closes DB connections per transaction, so there shouldn't be stale connections lingering and eating into the pool size.

Change pool settings in Supabase?
Change env vars for n8n?

Are there ways to get better insight into what the actual bottlenecks are, and why there are often timeouts and the app becomes unresponsive?
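On the insight question: n8n exposes its ORM's query logging through environment variables, so one option (worth verifying against the n8n docs for your version, since these names are from my reading of them) is to add something like this to the shared env section and watch the fly.io logs for slow or failing queries:

```toml
# DATABASE LOGGING - surface slow/failing queries in the n8n logs
DB_LOGGING_ENABLED = 'true'
DB_LOGGING_OPTIONS = 'error'            # 'all' is noisier but useful while debugging
DB_LOGGING_MAX_EXECUTION_TIME = '2000'  # log queries taking longer than 2000 ms
```

On the Supabase side, querying `pg_stat_activity` (or using the dashboard's database reports) should show how many connections each role is actually holding and whether the pooler is the choke point.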

By the way, the execution load is currently about 20k executions per week, so this setup should be able to handle it.

Thanks.

Hi jkf,

I’m using a different approach to achieve the same goal: a PostgreSQL trigger node that connects directly to the database in my Supabase project. The tricky part was stripping everything from the connection string I pasted into the Host field, leaving just something like this: db.your-specific-address.supabase.co.

By the way, I recently migrated from Railway.com to Fly.io. Railway was getting a bit pricey for me, and since I participated in a Fly.io competition, they provided a generous amount of free credits, which made the switch even more appealing!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.