Duplicate key value violates unique constraint "workflow_statistics_pkey"

Hi community & n8n team,
Thanks for this amazing product.
Since Friday we have been having a problem: n8n is very slow, we can't access workflow execution details, and we see these lines in the logs:

2023-01-30 08:46:40.121 UTC [811] ERROR: duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-01-30 08:46:40.121 UTC [811] DETAIL: Key ("workflowId", name)=(8, data_loaded) already exists.
2023-01-30 08:46:40.121 UTC [811] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

We updated to the latest version this morning, with no effect.

Information on your n8n setup

  • **n8n version:** 0.213.0
  • **Database you're using (default: SQLite):** PostgreSQL 14
  • **Running n8n via [Docker, npm, n8n.cloud, desktop app]:** Docker

Hi @MatthieuParis, I am sorry to hear you’re having trouble.

I actually saw a bunch of similar log lines on my own instance as well but haven't paid much attention to them, to be honest. I reckon the workflow_statistics table is related to the paid plans we have recently launched (which I am not using, so I have simply ignored these log entries until now).

That said, I know it can be irritating to have ERRORs in any log. It seems to me this is the PR introducing the table; I'll check internally if anyone knows what might be causing this problem and how to resolve it.

Hi @MatthieuParis, it seems this was originally done intentionally because the TypeORM library does not support the "on conflict increment" logic used here. So instead, n8n uses try → catch logic, meaning the respective query error is expected.

The team is currently discussing alternatives though, so this might change in the future.
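For context, the "on conflict increment" logic mentioned above can be expressed directly in SQL as an upsert. Below is a minimal sketch of the idea, not n8n's actual code: it uses Python's built-in `sqlite3` (SQLite supports the same `ON CONFLICT ... DO UPDATE` clause as PostgreSQL), with table and column names mirroring the log lines in this thread:

```python
import sqlite3

# In-memory stand-in for the workflow_statistics table from the logs above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE workflow_statistics (
        "workflowId"  INTEGER,
        name          TEXT,
        count         INTEGER,
        "latestEvent" TEXT,
        PRIMARY KEY ("workflowId", name)
    )
""")

def record_event(workflow_id, name, event_time):
    # Upsert: insert a new counter row, or increment the existing one.
    # This avoids the duplicate-key ERROR that the try/catch approach produces.
    conn.execute(
        """
        INSERT INTO workflow_statistics ("workflowId", name, count, "latestEvent")
        VALUES (?, ?, 1, ?)
        ON CONFLICT ("workflowId", name)
        DO UPDATE SET count = count + 1, "latestEvent" = excluded."latestEvent"
        """,
        (workflow_id, name, event_time),
    )

record_event(8, "data_loaded", "2023-01-30T08:46:40Z")
record_event(8, "data_loaded", "2023-01-30T08:46:41Z")
row = conn.execute('SELECT count, "latestEvent" FROM workflow_statistics').fetchone()
print(row)  # (2, '2023-01-30T08:46:41Z')
```

Since TypeORM does not expose this clause directly, n8n fell back to the try/catch approach, which is why Postgres logs the error even though n8n handles it.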

I am surprised this leads to general slowness though and was not able to reproduce this on my end. Which operation exactly is slow for you?

Hi~
I have had the same error recently, and n8n became unstable; some triggered workflows execute forever until I stop them manually.

2023-02-08 09:17:20.541 GMT [6465] DETAIL: Key ("workflowId", name)=(3, data_loaded) already exists.
2023-02-08 09:17:20.541 GMT [6465] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-02-08 09:17:20.541: [unknown] pid 158: LOG: Error message from backend: DB node id: 1 message: "duplicate key value violates unique constraint "workflow_statistics_pkey""
2023-02-08 09:17:20.803 GMT [6465] ERROR: duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-02-08 09:17:20.803: [unknown] pid 158: LOG: Error message from backend: DB node id: 1 message: "duplicate key value violates unique constraint "workflow_statistics_pkey""
2023-02-08 09:17:20.803 GMT [6465] DETAIL: Key ("workflowId", name)=(3, data_loaded) already exists.
2023-02-08 09:17:20.803 GMT [6465] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-02-08 09:17:21.224 GMT [6465] ERROR: duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-02-08 09:17:21.224 GMT [6465] DETAIL: Key ("workflowId", name)=(3, data_loaded) already exists.

Information on your n8n setup

My original version was 0.182-debian, without any problems.
The issue appeared after we upgraded to 0.213 (we also migrated all code in Function nodes to Code nodes).
We have since updated to later versions, 0.214 and then 0.214.2, but the issue persists.
Thanks.

I see the same here. Workflows running a long time, not working reliably.

image: n8nio/n8n:0.214.3 on docker

It happens when I try to access the workflow execution history to track down an error.
The loader appears, the app freezes, and if I try to access other items (like the workflow list or credentials) the app is very slow.

We upgraded to 0.214.2; I have to check whether the problem is solved with this version.

Thanks for your support

It seems to be related to using Postgres in my case.
I compared the behavior of the same workflow in a local docker-compose stack without a database container.

The remote stack uses Postgres and runs behind HAProxy, while the local stack uses no DB container and runs behind a Traefik reverse proxy (I noticed the long and failing executions and assumed that maybe the reverse proxy also plays a role here).

The postgres container logs:

n8n-postgres-1          | 2023-02-13 09:52:05.647 UTC [1] LOG:  database system is ready to accept connections
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] ERROR:  permission denied to create extension "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] HINT:  Must be superuser to create this extension.
n8n-postgres-1          | 2023-02-13 09:52:25.146 UTC [62] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] ERROR:  permission denied to create extension "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] HINT:  Must be superuser to create this extension.
n8n-postgres-1          | 2023-02-13 09:52:48.152 UTC [111] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] DETAIL:  Key ("workflowId", name)=(26, data_loaded) already exists.
n8n-postgres-1          | 2023-02-13 09:52:48.690 UTC [72] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
n8n-postgres-1          | 2023-02-13 09:52:48.975 UTC [72] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1          | 2023-02-13 09:52:48.975 UTC [72] DETAIL:  Key ("workflowId", name)=(26, data_loaded) already exists.

Hey @sgw,

Are you seeing failures in n8n itself, or just the error lines in the log file? Looking at the history of this thread, it was confirmed to come from Postgres and to be down to the try/catch logic being used.

@jon executions run long and often don't finish.

A workaround is to (try to) stop them, click "Workflows" on the left to exit the workflow editor, go back in, then re-execute. This is rather problematic as it slows down the development process.

I think in my case I could easily live without pgsql; I only have a handful of rather small workflows so far. But I don't know if there is an easy way to "downgrade". Sure, I could maybe copy-paste the workflows. Would be nice to have a working howto. Or a fixed stack with postgresql :wink: thanks

You could use the CLI to export your workflows, then update the env options and import them again, but that is not ideal. If it is workflows running long, though: do you have a lot of data in the execution_entity table?
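For anyone who wants to try the export/import route described above, a rough sketch follows. The file names are placeholders and the exact flags may differ between n8n versions, so check `n8n export:workflow --help` on your instance before relying on this:

```shell
# On the Postgres-backed instance: export workflows and credentials.
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --decrypted --output=credentials.json

# Repoint n8n at SQLite by changing the DB env vars for the container,
# e.g. DB_TYPE=sqlite (and removing the DB_POSTGRESDB_* variables).

# On the SQLite-backed instance: import everything again.
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json
```

Note that `--decrypted` writes credentials in plain text, so delete the export files once the migration is done.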

Nothing running long here. No execution history at all in the GUI right now.

There was one test workflow with a webhook activated. Disabled that one now. Will look closer later, video call now.

Same problem here:

n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] DETAIL:  Key ("workflowId", name)=(28, data_loaded) already exists.
n8n-postgressdb | 2023-02-15 15:37:09.169 UTC [89404] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] DETAIL:  Key ("workflowId", name)=(28, data_loaded) already exists.
n8n-postgressdb | 2023-02-15 15:37:13.978 UTC [89404] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

It started after I upgraded n8n from version 0.209.2 to 0.215.2 (yesterday).

**n8n version:** 0.215.2
**Database you're using:** postgres:14.5
**Running n8n via:** Docker

So far I don't notice any performance change (40 active workflows).

Should I downgrade my n8n version?

Is n8n running slowly or not working? Looking at the post from @MutedJam this sort of thing is somewhat expected at the moment.

Same issue here. n8n v0.221.2 & postgres-ha.

My n8n has hundreds of thousands of executions per day. I believe the cause is what @MutedJam said: n8n logs the 'error' whenever an execution occurs and writes to an existing row, instead of doing something like 'increment if exists'.

I can’t tell if my n8n’s performance is affected, but the log file is getting quite crazy lol. It’d be great if you guys could find a solution to this. Thank you for the hard work!

Hi @Jiuhk, welcome to the community!

I’m with you here, this upsets me every time I see it. Up to the point where I now have one n8n-specific Postgres instance for which I can simply throw away most logs.

Unfortunately I haven’t seen any roadmap items that would suggest this is being fixed anytime soon.

That said, the n8n performance shouldn’t be affected as these errors are logged by Postgres rather than n8n. Perhaps you might want to look at the logs of just n8n for the time being and consider clearing the Postgres logs more frequently?


I am using 0.222.1 and Postgres 13 and this error still exists. I don't believe this is ever going to be fixed hehehe

2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(10, data_loaded) already exists.
2023-04-07 00:51:07.929 UTC [125183] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

and I do have executions that never stop unless I restart the process manually. By the way, my n8n is in queue mode.

Hi @egorky, welcome to the community! Could you try upgrading to [email protected]? This should contain a fix for the problem.

thanks a lot @MutedJam , I have upgraded it and it does not happen anymore.

Sorry, I was watching the wrong log file. This is still happening. I am using n8n version 0.225.1 and PostgreSQL 13.10.1.

2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:30.149 -05 [749501] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:42.210 -05 [749527] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:22:52.177 -05 [749538] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 ERROR:  duplicate key value violates unique constraint "workflow_statistics_pkey"
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 DETAIL:  Key ("workflowId", name)=(12, data_loaded) already exists.
2023-04-24 00:30:40.648 -05 [749764] n8n01@n8n01 STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)

Hi @egorky, I am sorry for this. I’ve added this to our bug tracker once again for a fix :frowning:
