Impossible to restart n8n after restarting VPS

Hi @Elpatii
Still getting this error:

root@vps767290:~# docker-compose stop
root@vps767290:~# docker-compose rm
Going to remove root_n8n_1, root_traefik_1
Are you sure? [yN] y
Removing root_n8n_1     ... done
Removing root_traefik_1 ... done
root@vps767290:~# sudo docker-compose up -d
Creating root_traefik_1 ...
Creating root_traefik_1 ... error
WARNING: Host is already in use by another container

ERROR: for root_traefik_1  Cannot start service traefik: driver failed programming external connectivity on endpoint root_traefik_1 (a7a9a87827a62b058dad463e6d4c2bcdd1bc542bb26efe369540639ac390d59d): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Creating root_n8n_1     ... error

ERROR: for root_n8n_1  Cannot start service n8n: driver failed programming external connectivity on endpoint root_n8n_1 (948804ce267c0f547a44b42fc96175957a21537877a258a99c3e261663b8735e): Error starting userland proxy: listen tcp 127.0.0.1:5678: bind: address already in use

ERROR: for traefik  Cannot start service traefik: driver failed programming external connectivity on endpoint root_traefik_1 (a7a9a87827a62b058dad463e6d4c2bcdd1bc542bb26efe369540639ac390d59d): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use

ERROR: for n8n  Cannot start service n8n: driver failed programming external connectivity on endpoint root_n8n_1 (948804ce267c0f547a44b42fc96175957a21537877a258a99c3e261663b8735e): Error starting userland proxy: listen tcp 127.0.0.1:5678: bind: address already in use
ERROR: Encountered errors while bringing up the project.

Then sadly I have no idea what is going on here. If you stop and delete everything via docker-compose, and afterwards really no other containers get listed when you run docker ps (as in one of the previous posts above), I have no idea why these ports are still blocked (unless you have some kind of webserver, and additionally n8n, running outside of Docker).

Maybe restart your machine after doing the docker-compose rm, and only after the restart run docker-compose up.

You can also try to run sudo lsof -i -P -n | grep LISTEN. That should show you all the ports that are in use and by what process. You should run that after the rm and before the up to see if all the ports are really free.
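If lsof is not installed on your VPS, ss should give a similar picture (this assumes the usual iproute2 ss that ships with Ubuntu; -t is TCP, -l listening sockets, -p the owning process, -n numeric ports):

sudo ss -tlpn | grep -E ':443|:5678'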

Hi @jan

Sorry for the late reply.

root@vps767290:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3da5381f524d traefik "/entrypoint.sh --ap…" 10 days ago Created root_traefik_1
70bff1987a58 n8nio/n8n "tini -- /docker-ent…" 10 days ago Created root_n8n_1

root@vps767290:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

root@vps767290:~# sudo lsof -i -P -n | grep LISTEN
systemd-r 830 systemd-resolve 13u IPv4 17005 0t0 TCP 127.0.0.53:53 (LISTEN)
java 968 tomcat 61u IPv4 23749 0t0 TCP *:8080 (LISTEN)
java 968 tomcat 66u IPv4 23791 0t0 TCP *:8009 (LISTEN)
java 968 tomcat 87u IPv4 35539 0t0 TCP 127.0.0.1:8005 (LISTEN)
sshd 1093 root 3u IPv4 19871 0t0 TCP *:22500 (LISTEN)
sshd 1093 root 4u IPv6 19873 0t0 TCP *:22500 (LISTEN)
mysqld 1193 mysql 17u IPv4 21344 0t0 TCP 127.0.0.1:3306 (LISTEN)
postgres 1194 postgres 5u IPv4 19984 0t0 TCP 127.0.0.1:5432 (LISTEN)
postgres 1200 postgres 7u IPv4 19995 0t0 TCP 127.0.0.1:5433 (LISTEN)
docker-pr 29121 root 4u IPv4 178125 0t0 TCP *:443 (LISTEN)
docker-pr 29129 root 4u IPv6 178805 0t0 TCP *:443 (LISTEN)
docker-pr 29139 root 4u IPv4 178154 0t0 TCP 127.0.0.1:5678 (LISTEN)

Is it normal that the docker ps command returns nothing but docker ps -a returns some results?

No, the docker ps command will output the list of containers which are up and running.

Here is how I installed n8n in Docker with nginx as a reverse proxy.

So, use Portainer to manage the Docker containers visually. Updating n8n is much easier with a click of a button.
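For reference, this is the typical command to start Portainer CE (image name and ports as in Portainer's docs at the time of writing; adjust the published port if 9000 is already taken on your VPS):

docker run -d --name portainer --restart=always \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce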


Strange, I was pretty sure that stopped Docker containers do not block the ports anymore.

You can try to delete all stopped containers with: docker rm $(docker ps -aq) and then check again if the ports are still blocked.


I tried to delete all containers, but I still get the same error:

root@vps767290:~# docker rm $(docker ps -aq)
3da5381f524d
70bff1987a58
root@vps767290:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

root@vps767290:~# docker-compose up -d
Creating root_n8n_1     ...
Creating root_traefik_1 ...
Creating root_traefik_1 ... error
Creating root_n8n_1     ... error
WARNING: Host is already in use by another container

ERROR: for root_traefik_1 Cannot start service traefik: driver failed programming external connectivity on endpoint root_traefik_1 (f26f1d2e3ee67cee9dcc73209f5b089ba2e5f1f9ee851f7d9ce1c29b543c758f): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use

ERROR: for root_n8n_1 Cannot start service n8n: driver failed programming external connectivity on endpoint root_n8n_1 (b8b391a80284c130cca3cb29db07bd1911fd4ff5bf7b722a1d6911c2d7630e4c): Error starting userland proxy: listen tcp 127.0.0.1:5678: bind: address already in use

ERROR: for traefik Cannot start service traefik: driver failed programming external connectivity on endpoint root_traefik_1 (f26f1d2e3ee67cee9dcc73209f5b089ba2e5f1f9ee851f7d9ce1c29b543c758f): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use

ERROR: for n8n Cannot start service n8n: driver failed programming external connectivity on endpoint root_n8n_1 (b8b391a80284c130cca3cb29db07bd1911fd4ff5bf7b722a1d6911c2d7630e4c): Error starting userland proxy: listen tcp 127.0.0.1:5678: bind: address already in use
ERROR: Encountered errors while bringing up the project.

root@vps767290:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Is it normal that it tries to create root_traefik_1 twice?

Hi, it’s normal that docker ps and docker ps -a do not return the same thing:

The docker ps command only shows running containers by default. To see all containers, use the -a (or --all) flag.

Docs: docker ps | Docker Docs

If no containers are returned by the command docker ps -a, then there is a local program on your server that is using the ports you need.
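To see exactly which process is holding each port, you can query lsof per port (same flags as before; -P and -n just keep ports and addresses numeric):

sudo lsof -i :443 -P -n
sudo lsof -i :5678 -P -n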

Hi @Elpatii

These are the ports in use on my server:

root@vps767290:~# sudo lsof -i -P -n
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd-n 803 systemd-network 15u IPv4 1376006 0t0 UDP 51.178.31.26:68
systemd-r 830 systemd-resolve 12u IPv4 17004 0t0 UDP 127.0.0.53:53
systemd-r 830 systemd-resolve 13u IPv4 17005 0t0 TCP 127.0.0.53:53 (LISTEN)
java 968 tomcat 61u IPv4 23749 0t0 TCP *:8080 (LISTEN)
java 968 tomcat 66u IPv4 23791 0t0 TCP *:8009 (LISTEN)
java 968 tomcat 79u IPv4 1382250 0t0 TCP 127.0.0.1:52584->127.0.0.1:5432 (ESTABLISHED)
java 968 tomcat 86u IPv4 1383465 0t0 TCP 127.0.0.1:52592->127.0.0.1:5432 (ESTABLISHED)
java 968 tomcat 87u IPv4 35539 0t0 TCP 127.0.0.1:8005 (LISTEN)
java 968 tomcat 88u IPv4 1382303 0t0 TCP 127.0.0.1:52586->127.0.0.1:5432 (ESTABLISHED)
java 968 tomcat 89u IPv4 1382344 0t0 TCP 127.0.0.1:52588->127.0.0.1:5432 (ESTABLISHED)
java 968 tomcat 90u IPv4 1383002 0t0 TCP 127.0.0.1:52590->127.0.0.1:5432 (ESTABLISHED)
java 968 tomcat 97u IPv4 897347 0t0 TCP 51.178.31.26:8009->125.64.94.144:39444 (ESTABLISHED)
java 968 tomcat 590u IPv4 1383162 0t0 TCP 127.0.0.1:52594->127.0.0.1:5432 (ESTABLISHED)
sshd 1093 root 3u IPv4 19871 0t0 TCP *:22500 (LISTEN)
sshd 1093 root 4u IPv6 19873 0t0 TCP *:22500 (LISTEN)
mysqld 1193 mysql 17u IPv4 21344 0t0 TCP 127.0.0.1:3306 (LISTEN)
postgres 1194 postgres 5u IPv4 19984 0t0 TCP 127.0.0.1:5432 (LISTEN)
postgres 1194 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1200 postgres 7u IPv4 19995 0t0 TCP 127.0.0.1:5433 (LISTEN)
postgres 1200 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1230 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1231 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1232 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1233 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1234 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1235 postgres 9u IPv4 21271 0t0 UDP 127.0.0.1:59026->127.0.0.1:59026
postgres 1247 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1248 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1249 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1250 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1251 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 1252 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22725 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22725 postgres 10u IPv4 1382929 0t0 TCP 127.0.0.1:5432->127.0.0.1:52584 (ESTABLISHED)
postgres 22750 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22750 postgres 10u IPv4 1382954 0t0 TCP 127.0.0.1:5432->127.0.0.1:52586 (ESTABLISHED)
postgres 22774 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22774 postgres 10u IPv4 1382998 0t0 TCP 127.0.0.1:5432->127.0.0.1:52588 (ESTABLISHED)
postgres 22775 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22775 postgres 10u IPv4 1383003 0t0 TCP 127.0.0.1:5432->127.0.0.1:52590 (ESTABLISHED)
postgres 22833 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22833 postgres 10u IPv4 1383466 0t0 TCP 127.0.0.1:5432->127.0.0.1:52592 (ESTABLISHED)
postgres 22843 postgres 9u IPv4 21279 0t0 UDP 127.0.0.1:36916->127.0.0.1:36916
postgres 22843 postgres 10u IPv4 1383468 0t0 TCP 127.0.0.1:5432->127.0.0.1:52594 (ESTABLISHED)
sshd 22850 root 3u IPv4 1383486 0t0 TCP 51.178.31.26:22500->176.145.90.26:17459 (ESTABLISHED)
docker-pr 29121 root 4u IPv4 178125 0t0 TCP *:443 (LISTEN)
docker-pr 29129 root 4u IPv6 178805 0t0 TCP *:443 (LISTEN)
docker-pr 29139 root 4u IPv4 178154 0t0 TCP 127.0.0.1:5678 (LISTEN)

Hi @frankl1,

So ports 443 and 5678 are already listening according to your output:

This is why your containers won’t start.
I found a post on Stack Overflow where the problem is the same as yours: Docker, host-OS restart and busy ports - Stack Overflow
Maybe this post can help you resolve your issue.
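For what it’s worth, the docker-pr lines in your lsof output are Docker’s own docker-proxy processes, so it looks like the daemon kept stale port proxies around after the reboot rather than another application grabbing the ports. Assuming your VPS uses systemd to manage Docker, a common suggestion in cases like this is to restart the daemon so those stale proxies get cleaned up, then bring the stack up again:

# restart the Docker daemon; stale docker-proxy processes should go away with it
sudo systemctl restart docker

# check that 443 and 5678 are free now, then start the stack
sudo lsof -i :443 -i :5678 -P -n
docker-compose up -d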