N8N PostgreSQL - No network connection

n8n version: 2.9.1

  • Database : PostgreSQL
  • n8n EXECUTIONS_PROCESS setting: default: own, main
  • Running n8n via: npm & pm2
  • Operating system: Windows Server 2019 Standard

Describe the problem/error/question

Hello everyone,

For context, I was using an old version of N8N (0.192.1) on MySQL with npm & PM2 on Windows Server.

I never had issues with that version.

I wanted to update N8N at the end of 2025, when I found out that V2 was coming soon and MySQL would no longer be supported. So I waited a bit longer and did a fresh N8N V2 install on PostgreSQL, still using npm & PM2 on Windows Server.

I use multiple databases in my workflows. At first I created all of them on PostgreSQL. Then I noticed that every time a Postgres node was executing, the web interface went offline; no workflow saves, no changing tabs, no testing new workflows… It's pretty hard to use in this state.

I searched for what was wrong but found nothing. N8N keeps executing in the background, there's no log even in verbose mode, and Postgres is OK; only the web UI is offline…

To narrow down the cause I switched my databases back to MySQL, so only N8N itself uses PostgreSQL. The issue happens a lot less often, but it still happens.

Since I'm testing the new version I have only enabled a few quick workflows (executions under 5 min, fewer than 100,000 entries). I'm scared of what will happen if I enable all of them.

I think the issue occurs when N8N is using/saving data in PostgreSQL: the web UI goes offline, and I haven't found a way to fix it. I didn't have this issue in the old version with MySQL.
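Since nothing shows up in the server logs, one way to pin down exactly when the backend stops responding is to probe it from outside while a workflow runs. Below is a minimal sketch using only the Node.js standard library; the URL matches the setup described here (self-signed certificates tolerated), so host, port, and path are assumptions you would adjust:

```javascript
// health-probe.js — poll the n8n backend every 5 s and log response latency,
// so the "offline" window can be matched against the workflow timeline.
// Assumption: n8n is reachable at https://Server_Name:5678/healthz.
const https = require("node:https");

const TARGET = "https://Server_Name:5678/healthz";

setInterval(() => {
	const start = Date.now();
	const req = https.get(
		TARGET,
		{ rejectUnauthorized: false, timeout: 10000 }, // tolerate self-signed certs
		(res) => {
			res.resume(); // drain the body so the socket is released
			console.log(`${new Date().toISOString()} HTTP ${res.statusCode} in ${Date.now() - start} ms`);
		},
	);
	req.on("timeout", () => req.destroy(new Error("timeout after 10 s")));
	req.on("error", (err) => {
		console.log(`${new Date().toISOString()} FAILED after ${Date.now() - start} ms: ${err.message}`);
	});
}, 5000);
```

Run it with `node health-probe.js` in a second terminal while triggering a Postgres-heavy workflow; a gap of failed or slow probes should line up with the UI going offline.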

I hope you can help me.

Environment file

module.exports = {
    apps : [{
        name   : "n8n",
        env: {	
			//files
			N8N_USER_FOLDER:"C:",
			
			// deployment
			N8N_HOST:"Server_Name",
			N8N_PROTOCOL:"https",	//http
			N8N_PORT:5678, //5678
			N8N_SSL_KEY:"SSL_Key_Path",
			N8N_SSL_CERT:"SSL_Perm_Path",
						
			// executions
			EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS:true,	//false
			EXECUTIONS_DATA_PRUNE:true, //false
			EXECUTIONS_DATA_MAX_AGE:224,	//336
			
			// DB		
			DB_TYPE:"postgresdb",
			DB_POSTGRESDB_DATABASE:"n8n",
			DB_POSTGRESDB_PORT:5432,	//5432
			DB_POSTGRESDB_USER:"Database_User",
			DB_POSTGRESDB_PASSWORD:"password",
			//DB_POSTGRESDB_POOL_SIZE:10, //2
			//DB_POSTGRESDB_CONNECTION_TIMEOUT:360000, //20000

			// nodes
			N8N_PYTHON_ENABLED : false, //true
			NODES_EXCLUDE: "[]", //default from V2 onward: "[\"n8n-nodes-base.executeCommand\", \"n8n-nodes-base.localFileTrigger\"]"

			N8N_RESTRICT_FILE_ACCESS_TO:"Some_Directories",

			// timezone
			GENERIC_TIMEZONE : "Europe/Paris"
        }
    }]
}

Logs/Errors:

And if I restart N8N while it's offline:


Hi @gabg Welcome!
Have you tried increasing the pool size of your database? Something like DB_POSTGRESDB_POOL_SIZE: 10 in your PM2 config.

Hello,
Yes I did try DB_POSTGRESDB_POOL_SIZE:10 & DB_POSTGRESDB_CONNECTION_TIMEOUT:360000.
That doesn’t fix the issue.


@gabg Welcome to the community, could you also share how you deployed Postgres?

@gabg Hmm, are you sure that N8N_HOST/N8N_PROTOCOL are aligned with how you access n8n, and that WebSocket headers/origin aren't blocked?

I use https://Server_Name:5678/ to access N8N.

I don’t have error/warning messages.

The console only shows me timeout errors.


Sorry, I'm not sure what information you want.

I used the PostgreSQL installer 18.2. I installed it with default settings.


@gabg Another thing to try is setting N8N_PUSH_BACKEND=sse in your PM2 env for the n8n app and restarting it; that might help with the instance going offline.

I added in the env:
N8N_PUSH_BACKEND: "sse", //websocket
DB_POSTGRESDB_POOL_SIZE:5, //2
DB_POSTGRESDB_CONNECTION_TIMEOUT:360000, //20000

I still have the issue.

I don't have the rights to add a video:
2m41s execution; 1m15s offline

By default, n8n sends requests to the /healthz endpoint every few seconds. If this endpoint is already reserved for another purpose (for example, Cloud Run reserves endpoints ending in z for the platform), the requests will fail. A related issue was reported on GitHub (#25958) and was resolved in version 2.10.0. Try upgrading to that version and setting the N8N_ENDPOINT_HEALTH environment variable to a value other than the default.
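In the PM2 ecosystem file shared earlier, that override might look like the fragment below. The path "health-check" is only an example, not a required value; any path not reserved by your platform should do:

```javascript
// PM2 env fragment (sketch): move n8n's health check off the default /healthz
module.exports = {
    apps: [{
        name: "n8n",
        env: {
            // example path — pick anything your platform does not reserve
            N8N_ENDPOINT_HEALTH: "health-check",
        }
    }]
}
```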

I updated to 2.10.4 and changed N8N_ENDPOINT_HEALTH. I still get some (not a lot of) net::ERR_TIMED_OUT errors in the console while the UI is offline.

Do you think it may be because I'm using PostgreSQL 18.2?

Here's the full log of an execution of my template workflow; I hope it helps. The web UI was offline while the MySQL node was executing:

16:00:05.339 debug Skipped browserId check on /types/nodes.json { "file": "auth.service.js", "function": "validateBrowserId" }
16:00:07.078 debug Skipped browserId check on /types/credentials.json { "file": "auth.service.js", "function": "validateBrowserId" }
2026-03-10T15:00:14.908Z [Rudder] debug: in flush
2026-03-10T15:00:14.908Z [Rudder] debug: cancelling existing flushTimer…
16:00:15.944 debug Skipped browserId check on /types/credentials.json { "file": "auth.service.js", "function": "validateBrowserId" }
16:00:16.373 debug Skipped browserId check on /rest/push { "file": "auth.service.js", "function": "validateBrowserId" }
16:00:16.374 debug Add editor-UI session { "pushRef": "qspjdgjz19", "file": "abstract.push.js", "function": "add" }
16:00:16.455 debug Received message from editor-UI { "pushRef": "qspjdgjz19", "msg": { "type": "workflowOpened", "workflowId": "V6Zxn9inoD2kk3Vu" }, "file": "abstract.push.js", "function": "onMessageReceived" }
16:00:16.654 debug Pushed to frontend: collaboratorsChanged { "dataType": "collaboratorsChanged", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.376 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.377 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.377 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.377 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.377 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:17.377 debug Pushed to frontend: executionRecovered { "dataType": "executionRecovered", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.372 debug Execution added { "executionId": "1268", "file": "active-executions.js", "function": "add" }
16:00:20.397 debug Execution for workflow My workflow 2 was assigned id 1268 { "executionId": "1268", "file": "workflow-runner.js", "function": "runMainProcess" }
16:00:20.419 debug Execution ID 1268 will run executing all nodes. { "executionId": "1268", "file": "manual-execution.service.js", "function": "runManually" }
16:00:20.420 debug Workflow execution started { "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:20.425 debug Executing hook (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:20.425 debug Pushed to frontend: executionStarted { "dataType": "executionStarted", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.427 debug Start executing node "Cron2" { "node": "Cron2", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:20.428 debug Executing hook on node "Cron2" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:20.428 debug Pushed to frontend: nodeExecuteBefore { "dataType": "nodeExecuteBefore", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.428 debug Running node "Cron2" started { "node": "Cron2", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:20.439 debug Registered cron for workflow { "scopes": ["cron"], "workflowId": "V6Zxn9inoD2kk3Vu", "cron": "12,30,45 1-23 * * *", "instanceRole": "leader", "file": "scheduled-task-manager.js", "function": "registerCron" }
16:00:20.443 debug Registered cron for workflow { "scopes": ["cron"], "workflowId": "V6Zxn9inoD2kk3Vu", "cron": "35,45 0 * * *", "instanceRole": "leader", "file": "scheduled-task-manager.js", "function": "registerCron" }
16:00:20.444 debug Running node "Cron2" finished successfully { "node": "Cron2", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:20.445 debug Executing hook on node "Cron2" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:20.446 debug Pushed to frontend: nodeExecuteAfter { "dataType": "nodeExecuteAfter", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.446 debug Pushed to frontend: nodeExecuteAfterData { "dataType": "nodeExecuteAfterData", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.446 debug Start executing node "Microsoft SQL" { "node": "Microsoft SQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:20.447 debug Executing hook on node "Microsoft SQL" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:20.447 debug Pushed to frontend: nodeExecuteBefore { "dataType": "nodeExecuteBefore", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:20.447 debug Running node "Microsoft SQL" started { "node": "Microsoft SQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:22.630 debug Running node "Microsoft SQL" finished successfully { "node": "Microsoft SQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:22.638 debug Executing hook on node "Microsoft SQL" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:22.638 debug Pushed to frontend: nodeExecuteAfter { "dataType": "nodeExecuteAfter", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:22.639 debug Pushed to frontend: nodeExecuteAfterData { "dataType": "nodeExecuteAfterData", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:22.829 debug Start executing node "Function" { "node": "Function", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:22.830 debug Executing hook on node "Function" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:22.830 debug Pushed to frontend: nodeExecuteBefore { "dataType": "nodeExecuteBefore", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:22.830 debug Running node "Function" started { "node": "Function", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:27.337 debug Running node "Function" finished successfully { "node": "Function", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:27.344 debug Executing hook on node "Function" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:27.344 debug Pushed to frontend: nodeExecuteAfter { "dataType": "nodeExecuteAfter", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:27.344 debug Pushed to frontend: nodeExecuteAfterData { "dataType": "nodeExecuteAfterData", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:27.559 debug Start executing node "MySQL" { "node": "MySQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:00:27.559 debug Executing hook on node "MySQL" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:00:27.560 debug Pushed to frontend: nodeExecuteBefore { "dataType": "nodeExecuteBefore", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:00:27.560 debug Running node "MySQL" started { "node": "MySQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:01:54.144 debug Querying database for waiting executions { "scopes": ["waiting-executions"], "file": "wait-tracker.js", "function": "getWaitingExecutions" }
16:01:54.151 debug Received message from editor-UI { "pushRef": "qspjdgjz19", "msg": { "type": "workflowClosed", "workflowId": "V6Zxn9inoD2kk3Vu" }, "file": "abstract.push.js", "function": "onMessageReceived" }
16:01:55.387 debug Received message from editor-UI { "pushRef": "qspjdgjz19", "msg": { "type": "workflowOpened", "workflowId": "V6Zxn9inoD2kk3Vu" }, "file": "abstract.push.js", "function": "onMessageReceived" }
16:01:55.493 debug Pushed to frontend: collaboratorsChanged { "dataType": "collaboratorsChanged", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:02:54.127 debug Querying database for waiting executions { "scopes": ["waiting-executions"], "file": "wait-tracker.js", "function": "getWaitingExecutions" }
16:03:29.421 debug Running node "MySQL" finished successfully { "node": "MySQL", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:03:29.426 debug Executing hook on node "MySQL" (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:03:29.426 debug Pushed to frontend: nodeExecuteAfter { "dataType": "nodeExecuteAfter", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:03:29.427 debug Pushed to frontend: nodeExecuteAfterData { "dataType": "nodeExecuteAfterData", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:03:29.447 debug Workflow execution finished successfully { "workflowId": "V6Zxn9inoD2kk3Vu", "file": "logger-proxy.js", "function": "exports.debug" }
16:03:29.451 debug Executing hook (hookFunctionsSave) { "executionId": "1268", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:03:29.452 debug Save execution data to database for execution ID 1268 { "executionId": "1268", "workflowId": "V6Zxn9inoD2kk3Vu", "finished": true, "stoppedAt": "2026-03-10T15:03:29.447Z", "file": "shared-hook-functions.js", "function": "updateExistingExecution" }
2026-03-10T15:03:29.946Z [Rudder] debug: no existing flush timer, creating new one
16:03:30.535 debug Executing hook (hookFunctionsPush) { "executionId": "1268", "pushRef": "qspjdgjz19", "workflowId": "V6Zxn9inoD2kk3Vu", "file": "execution-lifecycle-hooks.js" }
16:03:30.535 debug Pushed to frontend: executionFinished { "dataType": "executionFinished", "pushRefs": "qspjdgjz19", "file": "abstract.push.js", "function": "sendTo" }
16:03:30.536 debug Execution finalized { "executionId": "1268", "file": "active-executions.js", "function": "finalizeExecution" }
16:03:30.536 debug Execution removed { "executionId": "1268", "file": "active-executions.js" }
2026-03-10T15:03:39.961Z [Rudder] debug: in flush
2026-03-10T15:03:39.961Z [Rudder] debug: cancelling existing flushTimer…

I sometimes get the log "503 Database is not ready!".

I see many people have been getting error 503 recently, but I still haven't found a fix for my case in the various posts.
I'll keep searching and trying different things.

I enabled more of my workflows and N8N is almost unusable: it took up to 10 minutes to get access to the web UI, and the UI then stays available only for a few minutes.

After more research I determined it's not a hardware issue, not (directly) a PostgreSQL issue, and not (directly) an N8N issue.
I think it's how Node.js/PM2 talks to PostgreSQL.

I'm starting to think queue mode is my only way to fix the UI disconnection.
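For reference, queue mode needs a Redis instance and at least one worker process alongside the main instance. A minimal PM2 sketch under those assumptions (the Redis host/port values are examples, and this has not been tested on the setup described in this thread):

```javascript
// PM2 ecosystem sketch for n8n queue mode (assumes a reachable Redis server)
const queueEnv = {
    EXECUTIONS_MODE: "queue",          // hand executions off to workers via Redis
    QUEUE_BULL_REDIS_HOST: "localhost", // example value — point at your Redis
    QUEUE_BULL_REDIS_PORT: 6379,        // Redis default port
};

module.exports = {
    apps: [
        {
            name: "n8n-main",
            script: "n8n",      // serves the web UI and enqueues executions
            env: queueEnv,
        },
        {
            name: "n8n-worker",
            script: "n8n",
            args: "worker",     // workers pull executions from the Redis queue
            env: queueEnv,
        },
    ],
};
```

Because workflow execution then happens in separate worker processes, heavy database work should no longer block the main process that serves the editor UI.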


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.