Duplicate key value violates unique constraint "workflow_statistics_pkey"

I see the same here. Workflows running a long time, not working reliably.

image: n8nio/n8n:0.214.3 on docker

It happens when I try to access the workflow history to track down an error.
The loader appears, the app freezes, and if I try to access other items (like the workflow list or credentials) the app is very slow.

We upgraded to 0.214.2; I have to check whether the problem is solved with this version.

Thx for your support

It seems to be related to using Postgres in my case.
I compared the behavior of the same workflow in a local non-DB docker-compose stack.

The remote stack uses Postgres and runs behind HAProxy, while the local stack uses no DB container and runs behind a Traefik reverse proxy (I noticed the long, failing executions and assumed that maybe the reverse proxy also plays a role here).

The postgres container logs:

n8n-postgres-1          | 2023-02-13 09:52:05.647 UTC [1] LOG:  database system is ready to accept connections
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] ERROR:  permission denied to create extension "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] HINT:  Must be superuser to create this extension.
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] ERROR:  permission denied to create extension "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] HINT:  Must be superuser to create this extension.
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] DETAIL:  Key ("workflowId", name)=(26, data_loaded) already exists.
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
n8n-postgres-1          | 2023-02-13 09:52:48.975 UTC [72] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1          | 2023-02-13 09:52:48.975 UTC [72] DETAIL:  Key ("workflowId", name)=(26, data_loaded) already exists.

Hey @sgw,

Are you seeing n8n fail to load, or just the error lines in the log file? Looking at the history of this thread, it was confirmed to be Postgres-related and came down to a try/catch being used.

@jon executions run long and don’t finish very often.

A workaround is to (try to) stop them, click “Workflows” on the left to exit the workflow editor, go back in, then re-execute. This is rather problematic as it slows down the development process.

I think in my case I could easily live without Postgres, as I only have a handful of rather small workflows so far. But I don’t know if there is an easy way to “downgrade”. Sure, I could copy-paste the workflows. It would be nice to have a working how-to, or a fixed stack with PostgreSQL :wink: Thanks

You could use the CLI to export the workflows, then update the env options and import them again, but that is not ideal. If it is workflows running long though, do you have a lot of data in the execution_entity table?

Nothing running long here. No execution history at all in the GUI right now.

There was one test workflow with a webhook activated. Disabled that one now. Will look closer later, video call now.

Same problem here:

n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] DETAIL:  Key ("workflowId", name)=(28, data_loaded) already exists.
n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] DETAIL:  Key ("workflowId", name)=(28, data_loaded) already exists.
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

It started after I upgraded n8n from version 0.209.2 to 0.215.2 (yesterday).

**n8n version:** 0.215.2
**Database you’re using:** postgres:14.5
**Running n8n via:** Docker

So far I haven’t noticed any performance change (40 active workflows).

Should I downgrade the n8n version?

Is n8n running slowly or not working at all? Looking at the post from @MutedJam, this sort of thing is somewhat expected at the moment.

Same issue here. n8n v.221.2 & postgres-ha.

My n8n has hundreds of thousands of executions per day. I believe the cause is what @MutedJam said: n8n logs the ‘error’ whenever an execution occurs and writes to an existing row, instead of doing something like ‘increment if exists’.
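For illustration, here’s a minimal sketch of that insert-and-catch pattern (using SQLite in place of Postgres; the table and key names are borrowed from the log lines above, not from n8n’s actual code): the first INSERT for a given ("workflowId", name) pair succeeds, and every later one raises a duplicate-key error that the application swallows, while the database still records it in its log.

```python
import sqlite3

# Mimic the table from the log: ("workflowId", name) is the primary key.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE workflow_statistics (
        count INTEGER,
        latestEvent TEXT,
        name TEXT,
        workflowId INTEGER,
        PRIMARY KEY (workflowId, name)
    )
""")

def record_data_loaded(workflow_id: int) -> bool:
    """Insert-and-catch: the first execution creates the row,
    every later execution hits the unique constraint."""
    try:
        db.execute(
            "INSERT INTO workflow_statistics (count, latestEvent, name, workflowId) "
            "VALUES (1, datetime('now'), 'data_loaded', ?)",
            (workflow_id,),
        )
        return True
    except sqlite3.IntegrityError:
        # Postgres logs this as 'duplicate key value violates unique
        # constraint' even though the application swallows it here.
        return False

record_data_loaded(26)   # first execution: row created
record_data_loaded(26)   # every later execution: duplicate-key error, caught
```

So the workflow keeps running fine, but the database log grows with every single execution.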

I can’t tell if my n8n’s performance is affected, but the log file is getting quite crazy lol. It’d be great if you guys could find a solution to this. Thank you for the hard work!

Hi @Jiuhk, welcome to the community!

I’m with you here, this upsets me every time I see it. Up to the point where I now have one n8n-specific Postgres instance for which I can simply throw away most logs.

Unfortunately I haven’t seen any roadmap items that would suggest this is being fixed anytime soon.

That said, n8n’s performance shouldn’t be affected, as these errors are logged by Postgres rather than n8n. You might want to look at just the n8n logs for the time being and consider clearing the Postgres logs more frequently.


I am using 0.222.1 and Postgres 13 and this error still exists. I don’t believe this is ever gonna be fixed hehehe

2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(10, data_loaded) already exists.
2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

and I do have executions that never stop unless I restart the process manually. By the way, my n8n is in queue mode.

Hi @egorky, welcome to the community! Could you try upgrading to [email protected]? This should contain a fix for the problem.

thanks a lot @MutedJam , I have upgraded it and it does not happen anymore.

Sorry, I was watching the wrong log file. This is still happening. I am using n8n version 0.225.1 and PostgreSQL 13.10.1.

2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

Hi @egorky, I am sorry for this. I’ve added this to our bug tracker once again for a fix :frowning:


I have the same error, is there a way to fix it?
My docker-compose file:

version: "3.8"
services:
  n8n-master:
    image: n8nio/n8n:latest
    depends_on:
      - redis
      - db
    ports:
      - 5678:5678
    environment:
      - DB_POSTGRESDB_DATABASE=postgres
      - DB_POSTGRESDB_USER=postgres
      - DB_POSTGRESDB_PASSWORD=pass
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_TYPE=postgresdb
      - N8N_BASIC_AUTH_PASSWORD=123
      - N8N_BASIC_AUTH_USER=admin
      - EXECUTIONS_MODE=queue
      - EXECUTIONS_DATA_MAX_AGE=72
      - EXECUTIONS_DATA_PRUNE="true"
      - GENERIC_TIMEZONE=America/Sao_Paulo
      - N8N_BASIC_AUTH_ACTIVE="true"
      - N8N_ENCRYPTION_KEY=ae1e030c-2bb1-4175-9028-20c2d87276c5
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true
      - N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true

  n8n-worker-1:
    image: n8nio/n8n:latest
    depends_on:
      - redis
      - db
      - n8n-master
    environment:
      - N8N_ENCRYPTION_KEY=ae1e030c-2bb1-4175-9028-20c2d87276c5
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_POSTGRESDB_DATABASE=postgres
      - DB_POSTGRESDB_USER=postgres
      - DB_POSTGRESDB_PASSWORD=pass
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_TYPE=postgresdb
    command: ["n8n", "worker"]

  n8n-worker-2:
    image: n8nio/n8n:latest
    depends_on:
      - redis
      - db
      - n8n-master
    environment:
      - N8N_ENCRYPTION_KEY=ae1e030c-2bb1-4175-9028-20c2d87276c5
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_POSTGRESDB_DATABASE=postgres
      - DB_POSTGRESDB_USER=postgres
      - DB_POSTGRESDB_PASSWORD=pass
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_TYPE=postgresdb
    command: ["n8n", "worker", "--concurrency=5"]

  n8n-webhook-1:
    image: n8nio/n8n:latest
    depends_on:
      - redis
      - db
      - n8n-master
    ports:
      - 5679:5678
    environment:
      - N8N_ENCRYPTION_KEY=ae1e030c-2bb1-4175-9028-20c2d87276c5
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - DB_POSTGRESDB_DATABASE=postgres
      - DB_POSTGRESDB_USER=postgres
      - DB_POSTGRESDB_PASSWORD=pass
      - DB_POSTGRESDB_HOST=db
      - DB_POSTGRESDB_PORT=5432
      - DB_TYPE=postgresdb
    command: ["n8n", "webhook"]

  redis:
    image: redis:latest
    volumes:
      - redis-data:/data

  db:
    image: postgres:latest
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pass
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  redis-data:
  db-data:

The system starts, but Postgres gives this error when starting the process, and nothing goes into the queue.

2023-04-30 13:01:11.084 UTC [25] LOG:  checkpoint complete: wrote 18 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.709 s, sync=0.002 s, total=1.715 s; sync files=18, longest=0.001 s, average=0.001 s; distance=21 kB, estimate=21 kB
2023-04-30 13:01:25.881 UTC [228] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-30 13:01:25.881 UTC [228] DETAIL:  Key ("workflowId", name)=(2, data_loaded) already exists.
2023-04-30 13:01:25.881 UTC [228] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

Hi @roughriver74, this issue is currently being prioritized by @romain-n8n from our product team. I don’t have an ETA yet, but perhaps he already has more details to share here.

Solution: replace the insert function with a prepared upsert-style query in n8n/packages/cli/src/events/WorkflowStatistics.ts.
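A sketch of what such a fix amounts to (again using SQLite for illustration; Postgres supports the same `INSERT ... ON CONFLICT ... DO UPDATE` clause; this is not the actual n8n patch): a single atomic statement that creates the row on first sight and increments it afterwards, so no duplicate-key error is ever raised or logged.

```python
import sqlite3

# Same illustrative table as before: ("workflowId", name) is the primary key.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE workflow_statistics (
        count INTEGER,
        latestEvent TEXT,
        name TEXT,
        workflowId INTEGER,
        PRIMARY KEY (workflowId, name)
    )
""")

def upsert_statistic(workflow_id: int, name: str) -> int:
    # One atomic statement: insert on first sight, increment afterwards.
    db.execute(
        "INSERT INTO workflow_statistics (count, latestEvent, name, workflowId) "
        "VALUES (1, datetime('now'), ?, ?) "
        "ON CONFLICT (workflowId, name) "
        "DO UPDATE SET count = count + 1, latestEvent = datetime('now')",
        (name, workflow_id),
    )
    row = db.execute(
        "SELECT count FROM workflow_statistics WHERE workflowId = ? AND name = ?",
        (workflow_id, name),
    ).fetchone()
    return row[0]

upsert_statistic(12, "data_loaded")  # creates the row, count = 1
upsert_statistic(12, "data_loaded")  # increments in place, count = 2
```

With this shape of query the database never sees a constraint violation, so the Postgres log stays clean no matter how many executions run.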


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.