Questions about the POSTGRES non-root user

I haven’t used my self-hosted n8n instance for a while and it seems one of the recent updates broke something.

I compared my docker compose with the current one and the changes I found were:

  • the command: section had changed
  • there seem to be new parameters, and n8n now seems to use a non-root Postgres account.

Unfortunately, simply adding these env variables to my existing Postgres DB does not magically create the user:

    POSTGRES_NON_ROOT_USER: non-root
    POSTGRES_NON_ROOT_PASSWORD: secretpw
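
From skimming the current compose file, my understanding is that these variables are consumed by an init script mounted into /docker-entrypoint-initdb.d/, which the Postgres image only runs when the data directory is initialised for the very first time. Roughly something like this (my reading, not the exact file):

    #!/bin/bash
    # Sketch of the init wiring (assumption, not the exact script):
    # anything in /docker-entrypoint-initdb.d/ runs only on a *fresh*
    # data directory, which is why these env vars do nothing on an
    # existing database.
    set -e
    psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" \
        -c "CREATE USER ${POSTGRES_NON_ROOT_USER} WITH PASSWORD '${POSTGRES_NON_ROOT_PASSWORD}';" \
        -c "GRANT ALL PRIVILEGES ON DATABASE ${POSTGRES_DB} TO ${POSTGRES_NON_ROOT_USER};"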

I could of course continue using n8n with the Postgres root credentials, but I’d prefer to switch to a non-root user. Any hints or tips on how to do that?

Hi @ovizii, you could run a query like the one below to create a new Postgres role:

CREATE USER non-root WITH PASSWORD 'secretpw';
GRANT ALL PRIVILEGES ON DATABASE n8ndb TO non-root;

Just replace the values above with your actual username, password and database name. Once you’ve created your user you can update the credentials used by n8n to match the ones you’ve just created.
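
In a typical docker-compose setup that means pointing the n8n service’s database environment variables at the new role, e.g. (values are examples, match what you created above):

    DB_TYPE: postgresdb
    DB_POSTGRESDB_DATABASE: n8ndb
    DB_POSTGRESDB_USER: non-root
    DB_POSTGRESDB_PASSWORD: secretpw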

Thanks, that worked, BUT in case anyone finds this thread later on:

One needs to connect to the database Docker container first:

docker exec -ti "db-container-name" bash

then connect to the DB

psql -d "db-name" -U "postgres-root-user"

then execute the commands given above, but know that “-” is apparently not allowed in an unquoted username, so either double-quote it or use an underscore instead :wink:
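
For anyone who prefers to do it in one go without an interactive shell, something along these lines should also work (container, database and user names are placeholders; the underscore sidesteps the hyphen problem):

    docker exec -ti "db-container-name" psql -d "db-name" -U "postgres-root-user" \
        -c "CREATE USER n8n_user WITH PASSWORD 'secretpw';" \
        -c "GRANT ALL PRIVILEGES ON DATABASE \"db-name\" TO n8n_user;"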

Still, apparently something is wrong here. I will try to search separately for this issue; just wondering, is there an upgrade guide somewhere which I missed?

n8n-db  | 2023-07-31 12:59:03.221 CEST [1] LOG:  starting PostgreSQL 14.8 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
n8n-db  | 2023-07-31 12:59:03.222 CEST [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
n8n-db  | 2023-07-31 12:59:03.222 CEST [1] LOG:  listening on IPv6 address "::", port 5432
n8n-db  | 2023-07-31 12:59:03.238 CEST [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
n8n-db  | 2023-07-31 12:59:03.747 CEST [22] LOG:  database system was shut down at 2023-07-31 12:57:51 CEST
n8n-db  | 2023-07-31 12:59:03.961 CEST [1] LOG:  database system is ready to accept connections
n8n     | Last session crashed
n8n     | n8n ready on 0.0.0.0, port 5678
n8n-db  | 2023-07-31 12:59:28.103 CEST [44] ERROR:  relation "migrations" already exists
n8n-db  | 2023-07-31 12:59:28.103 CEST [44] STATEMENT:  CREATE TABLE "migrations" ("id" SERIAL NOT NULL, "timestamp" bigint NOT NULL, "name" character varying NOT NULL, CONSTRAINT "PK_8c82d7f526340ab734260ea46be" PRIMARY KEY ("id"))
n8n     | Error: There was an error running database migrations
n8n     | QueryFailedError: relation "migrations" already exists
n8n exited with code 1

Now this one is really odd; it seems like n8n is trying to treat your database as a completely fresh one. Can you confirm which n8n version you upgraded from and which version you are using now?
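
In the meantime it might also be worth checking what the migrations table in your database actually contains, for example:

    -- run inside the database n8n connects to; if this lists your old
    -- migrations, the schema isn't fresh and n8n is most likely not
    -- connecting with the database/user it used before
    SELECT name, timestamp FROM migrations ORDER BY timestamp DESC LIMIT 5;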

I know it seems odd. Unfortunately, I let Watchtower automatically update n8n, so I don’t know when it broke.

All I can give you is the current n8n version. My instance is using this image (I used docker inspect to figure it out):

    "Image": "sha256:aef07a89a347a9c6f9a5a5fafa2d7918287db6d0ec3216de417f70590ae60479",

    "Env": [
        "N8N_VERSION=1.0.5"

If you can’t spot the issue, I may just start from scratch. I wasn’t running any automations yet, but I had connected my n8n instance to plenty of APIs in preparation. No big deal though.

Hi @ovizii, I am so sorry :frowning:

I suspect in this case it might be much simpler to start over with a genuinely new database rather than dig further into what exactly might have happened here with an unknown migration path.

You can easily transfer your previous data though: roll back to a backup from a working state before your upgrade and export your credentials via the CLI. Then start your new instance and import the previously exported credentials.
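
Roughly like this (container name and paths are placeholders, adjust to your setup):

    # on the old (restored) instance: export all credentials, decrypted
    docker exec -u node -ti n8n-container n8n export:credentials --all --decrypted --output=/home/node/credentials.json

    # on the new instance: import them again
    docker exec -u node -ti n8n-container n8n import:credentials --input=/home/node/credentials.json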

Going forward, you might want to disable Watchtower for the n8n container. We occasionally have breaking changes between individual versions, and these are easy to miss with automated upgrades.
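
If you’d like to keep Watchtower for your other containers, one option is to exclude just the n8n service via a label, for example:

    labels:
      - "com.centurylinklabs.watchtower.enable=false"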
