ERROR: duplicate key value violates unique constraint

Self-hosted instance of n8n, version 1.31.2, just set up. Created one workflow with one webhook and one node. Saved. Tested. All works.

Problem:
Every time I restart the n8n and PostgreSQL stack, I see the following errors in the docker compose logs.

Not sure what to make of this; I saw multiple people posting about the same error, but none of the answers fit my case.

n8n-postgresql  | 2024-03-08 18:48:16.785 CET [1] LOG:  database system is ready to accept connections
n8n             | User settings loaded from: /home/node/.n8n/config
n8n             | Initializing n8n process
n8n             | n8n ready on 0.0.0.0, port 5678
n8n             | Version: 1.31.2
n8n             |  ================================
n8n             |    Start Active Workflows:
n8n             |  ================================
n8n             |    - "outline2apprise2ntfyk20" (ID: 1Hqj2qu4iPjSD0os)
n8n-postgresql  | 2024-03-08 18:48:31.464 CET [37] ERROR:  duplicate key value violates unique constraint "PK_b21ace2e13596ccd87dc9bf4ea6"
n8n-postgresql  | 2024-03-08 18:48:31.464 CET [37] DETAIL:  Key ("webhookPath", method)=(f47eb632-fd01-4a9f-9b71-661b364cb92b, POST) already exists.
n8n-postgresql  | 2024-03-08 18:48:31.464 CET [37] STATEMENT:  INSERT INTO "public"."webhook_entity"("workflowId", "webhookPath", "method", "node", "webhookId", "pathLength") VALUES ($1, $2, $3, $4, DEFAULT, DEFAULT)
n8n             |      => Started
n8n             |
n8n             | Editor is now accessible via:

n8n: 1.31.2
db: postgresql-14-alpine
n8n running via Docker
OS: Debian Bookworm
EXECUTIONS_PROCESS: no idea what that is. I searched, and it seems to have been deprecated since [email protected] (2023-05-03).
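For context, the failing statement in the log inserts into the webhook_entity table, and the DETAIL line shows that the violated constraint "PK_b21ace2e13596ccd87dc9bf4ea6" covers ("webhookPath", method). In other words, the schema allows each path/method pair only once. A sketch of what that constraint amounts to (reconstructed from the error message, not taken from n8n's actual migrations):

```sql
-- Reconstructed from the error's DETAIL line, not from n8n's migrations:
-- the primary key spans ("webhookPath", "method"), so registering the same
-- webhook path with the same HTTP method a second time violates it.
ALTER TABLE "webhook_entity"
  ADD CONSTRAINT "PK_b21ace2e13596ccd87dc9bf4ea6"
  PRIMARY KEY ("webhookPath", "method");
```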

hello @ovizii

Can you provide more details of the setup?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

According to the error, it seems that your webhook is causing this.

Sorry about that :frowning:

n8n: 1.31.2
db: postgresql-14-alpine
EXECUTIONS_PROCESS - no idea what that is
n8n running via Docker
OS: Debian Bookworm

BTW, I have exported my one workflow and completely reset this instance. While doing that, I updated init-data.sh from here: n8n/docker/compose/withPostgres at master · n8n-io/n8n · GitHub, and switched to the newer postgres:16-alpine.

Basically: I started a completely fresh instance, imported that one workflow, saved, restarted the docker compose stack, and upon restart I again see the error above.

Does the issue persist if you delete the Webhook Node and create a new one, then save the workflow?

That didn’t help. The error message now complains in exactly the same way as above, just about a different webhook. I created a second workflow, also containing a webhook, and now I get error messages for both.

Open for ideas, and happy to remove the DB and try again, as these are my first two workflows. I can remove all n8n and PG content and have a clean slate in 2 minutes.

Maybe something is wrong with the n8n+postgres container. I’m personally not a big fan of multi-service containers, as they break the Docker principle of one service per container.

Try this setup. One instance for n8n and another one for postgres.

Postgres setup:

docker volume create postgres_data

docker run -it -d \
 --name postgres_test \
 -p 5432:5432 \
 -e POSTGRES_PASSWORD=123456789 \
 -e POSTGRES_USER=n8n_user \
 -e POSTGRES_DB=n8n \
 -v postgres_data:/var/lib/postgresql/data \
 postgres

n8n setup:

docker volume create n8n_data

docker run -it -d \
 --name n8n_test \
 -p 5678:5678 \
 -e DB_TYPE=postgresdb \
 -e DB_POSTGRESDB_DATABASE=n8n \
 -e DB_POSTGRESDB_HOST=172.17.0.3 \
 -e DB_POSTGRESDB_PORT=5432 \
 -e DB_POSTGRESDB_USER=n8n_user \
 -e DB_POSTGRESDB_SCHEMA=public \
 -e DB_POSTGRESDB_PASSWORD=123456789 \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n

where DB_POSTGRESDB_HOST is the IP of the Postgres DB, which you can find with the command:

docker inspect postgres_test | grep "IPAddress"

I’m not sure what you mean; I do have one docker-compose.yml file which creates two containers, one for n8n and one for PostgreSQL. There is no difference whether I use docker compose or docker run to create these two containers; they are still two separate containers either way.

I’m sure the issue lies with n8n or with me, definitely not with the compose file. The compose file simply starts these two containers with the parameters I give it, which I have taken from the n8n docs.
Especially since I have deleted all n8n and PostgreSQL data and started with fresh containers twice already.

I found several issues in this forum when searching for:
ERROR: duplicate key value violates unique constraint webhook
but unfortunately none had an applicable solution for me.

Do you happen to also have a workflow with a webhook by any chance? Do you see any of these errors when restarting your PostgreSQL container?

No, I’ve tried some options but didn’t run into such an issue. My workflow with a webhook was created and loaded later with no errors.

Thanks for checking. I am all out of ideas. All I can do is post my docker-compose.yml here and hope somebody spots an error. This is running behind Traefik as a reverse proxy.

# https://docs.n8n.io/

services:

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    hostname: n8n
    restart: "no"
    networks:
      n8n:
        ipv4_address: 192.168.192.126
      n8n_internal:
      traefik_n8n:
    depends_on:
      n8n-postgresql:
        condition: service_healthy
    cpus: 1
    mem_limit: 1G
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik_n8n"
# https://doc.traefik.io/traefik/v2.0/routing/routers/#priority (higher equals higher priority)
# router #2 for secured access through goauthentik
      - "traefik.http.routers.auto.tls=true"
      - "traefik.http.routers.auto.entrypoints=websecure"
      - "traefik.http.routers.auto.rule=Host(`auto.domain.tld`)"
      - "traefik.http.routers.auto.middlewares=secHeaders@file,authentik@docker"
      - "traefik.http.routers.auto.service=auto"
      - "traefik.http.routers.auto.priority=1"
      - "traefik.http.services.auto.loadbalancer.server.port=5678"
# router #1 for unsecured access to webhooks
      - "traefik.http.routers.auto-webhook.tls=true"
      - "traefik.http.routers.auto-webhook.entrypoints=websecure"
      - "traefik.http.routers.auto-webhook.rule=Host(`auto.domain.tld`) && PathPrefix(`/webhook/`,`/webhook-test/`)"
      - "traefik.http.routers.auto-webhook.middlewares=secHeaders@file"
      - "traefik.http.routers.auto-webhook.service=auto-webhook"
      - "traefik.http.routers.auto-webhook.priority=2"
      - "traefik.http.services.auto-webhook.loadbalancer.server.port=5678"
# find all env variables here: https://docs.n8n.io/hosting/configuration/environment-variables/#deployment
    environment:
      - N8N_HOST=auto.domain.tld
      - N8N_PROTOCOL=https
      - N8N_PORT=5678
      - WEBHOOK_URL=https://auto.domain.tld/
      - NODE_ENV=production
      - GENERIC_TIMEZONE=Europe/Berlin
      - TZ=Europe/Berlin
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
      - EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=14
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=99
      - N8N_EMAIL_MODE=smtp
      - N8N_SMTP_HOST=192.168.192.2
      - N8N_SMTP_PORT=25
      - N8N_SMTP_USER=
      - N8N_SMTP_PASS=
      - N8N_SMTP_SSL=false
      - N8N_SMTP_SENDER=n8n
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=n8n-postgresql
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
      - N8N_DIAGNOSTICS_ENABLED=false
      - N8N_DIAGNOSTICS_CONFIG_FRONTEND=
      - N8N_DIAGNOSTICS_CONFIG_BACKEND=
    volumes:
      - ./.n8n:/home/node/.n8n
      - ./local-files:/files
#    command: start --tunnel
    healthcheck:
      test: wget --spider http://localhost:5678/healthz > /dev/null 2>&1 || exit 1
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 1m00s



  n8n-postgresql:
      image: postgres:16-alpine
      container_name: n8n-postgresql
      hostname: n8n-postgresql
      restart: "no"
      networks:
        - n8n_internal
      cpus: 1
      mem_limit: 1G
      environment:
        - TZ=Europe/Berlin
        - POSTGRES_USER
        - POSTGRES_PASSWORD
        - POSTGRES_DB
        - POSTGRES_NON_ROOT_USER
        - POSTGRES_NON_ROOT_PASSWORD
      volumes:
        - ./db:/var/lib/postgresql/data
        - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
      healthcheck:
        test: pg_isready -U n8n -d n8n
        interval: 10s
        timeout: 2s
        retries: 10

networks:

  traefik_n8n:
    name: traefik_n8n
    external: true
    internal: true

  n8n_internal:
    name: n8n_internal
    external: false
    internal: true

  n8n:
    name: n8n
    driver: macvlan
    ipam:
      config:
        - subnet: 192.168.192.124/30
          gateway: 192.168.192.125
    driver_opts:
      parent: vmbr1.1031
    external: false
    internal: false

At first glance, everything is correct.

Can you check with pgAdmin what is happening in the DB after the webhook has been created and saved? It should show up in the db_name >> workflows view.

I can provide the query later

I’m not familiar with pgAdmin, but given a command I can of course enter the PostgreSQL container and execute it.
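In case a concrete command helps: assuming the container name n8n-postgresql from the compose file and the user/database names from your .env, something like this should show what is stored (table and column names are taken from the error log; the placeholders need substituting):

```sql
-- Open a session first with:
--   docker exec -it n8n-postgresql psql -U <POSTGRES_USER> -d <POSTGRES_DB>
-- then, inside psql:
\d webhook_entity
SELECT "workflowId", "webhookPath", "method" FROM webhook_entity;
```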

Hey guys, I’m running into this same problem, but I’m using n8n inside a Google Cloud Run service container talking to a completely external Postgres DB.

I think the issue happens whenever the n8n container “restarts”, but I’m not 100% sure.

We have a dev ticket somewhere to fix this. From what I remember, nothing is actually broken; it has to do with registering webhooks on startup to make sure they are unique.
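To make the mechanism concrete, here is a minimal Python sketch of the idea (purely illustrative, not n8n's implementation; the path value is just the webhook ID from the log above). On startup, every active workflow re-registers its webhooks; if that registration is not idempotent, a restart re-inserts an existing ("webhookPath", method) pair and Postgres rejects it, which matches the harmless error in the log. In SQL terms, an idempotent version would use INSERT ... ON CONFLICT DO NOTHING.

```python
# Illustrative sketch: webhook registration that tolerates restarts.
# A set of (path, method) tuples stands in for the database's unique
# constraint on ("webhookPath", method).
registry: set[tuple[str, str]] = set()

def register_webhook(path: str, method: str) -> bool:
    """Register a webhook; return False instead of failing on duplicates."""
    key = (path, method)
    if key in registry:
        return False  # already registered, e.g. after a restart
    registry.add(key)
    return True

# First startup registers the webhook; a restart re-registers the same one.
first = register_webhook("f47eb632-fd01-4a9f-9b71-661b364cb92b", "POST")
again = register_webhook("f47eb632-fd01-4a9f-9b71-661b364cb92b", "POST")
print(first, again)  # True False
```

A non-idempotent version would raise on the second call, which is exactly what the bare INSERT in the Postgres log does.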

Not sure how to handle this, but I keep getting nagging emails from this forum urging me to mark the right answer and mark this issue as solved.

I usually do not do that unless there really has been an answer resolving the issue posted. How do you handle that in this forum?

You can leave it as is. The topic will be closed automatically in 3 months if there are no more messages