I actually saw a bunch of similar log lines on my own instance as well but haven’t paid much attention to them, tbh. I reckon the workflow_statistics table is related to the paid plans we have recently launched (which I am not using, so I have simply ignored these log entries until now).
That said, I know it can be irritating to have ERRORs in any logs. It seems to me that this is the PR introducing the table; I’ll check internally if anyone knows what might be causing this problem and how to resolve it.
Hi @MatthieuParis, it seems this was done intentionally because the TypeORM library does not support the “on conflict increment” logic needed here. So instead, n8n uses a try → catch approach, meaning the respective query error is expected.
The team is currently discussing alternatives though, so this might change in the future.
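To make this a bit more concrete, here is a rough sketch of the difference (illustrative only, not n8n’s actual code; the column names and example values are taken from the Postgres errors discussed in this thread):

-- Sketch of the current try → catch approach: the plain INSERT is attempted first...
INSERT INTO "workflow_statistics" ("count", "latestEvent", "name", "workflowId")
VALUES (1, NOW(), 'data_loaded', 26);
-- ...and only when it fails with "duplicate key value violates unique constraint"
-- does n8n fall back to updating the existing row:
UPDATE "workflow_statistics"
SET "count" = "count" + 1, "latestEvent" = NOW()
WHERE "name" = 'data_loaded' AND "workflowId" = 26;

-- A Postgres-native "increment on conflict" would avoid the logged error entirely,
-- but, as mentioned above, is not something TypeORM currently supports directly:
INSERT INTO "workflow_statistics" ("count", "latestEvent", "name", "workflowId")
VALUES (1, NOW(), 'data_loaded', 26)
ON CONFLICT ("workflowId", "name")
DO UPDATE SET "count" = "workflow_statistics"."count" + 1, "latestEvent" = NOW();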
I am surprised this leads to general slowness though and was not able to reproduce this on my end. Which operation exactly is slow for you?
My original version was 0.182-debian, without any problems.
The issue happened after we upgraded to 0.213 (we also updated all code in Function nodes to Code nodes).
We then updated to the latest versions, 0.214 and 0.214.2.
It happens when I try to access a workflow’s execution history to track down an error.
The loader appears, the app freezes, and if I try to access other items (like the workflow list or credentials) the app is very slow.
We upgraded to 0.214.2; I still have to check whether the problem is solved in this version.
It seems to be related to using Postgres in my case.
I compared the behavior of the same workflow in a local docker-compose stack without a database container.
The remote stack uses Postgres and runs behind HAProxy, while the local stack uses no DB container and runs behind a Traefik reverse proxy (I noticed the long and failing executions and assumed the reverse proxy might also play a role here).
The Postgres container logs:
n8n-postgres-1 | 2023-02-13 09:52:05.647 UTC  LOG: database system is ready to accept connections
n8n-postgres-1 | 2023-02-13 09:52:25.146 UTC  ERROR: permission denied to create extension "uuid-ossp"
n8n-postgres-1 | 2023-02-13 09:52:25.146 UTC  HINT: Must be superuser to create this extension.
n8n-postgres-1 | 2023-02-13 09:52:25.146 UTC  STATEMENT: CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1 | 2023-02-13 09:52:48.152 UTC  ERROR: permission denied to create extension "uuid-ossp"
n8n-postgres-1 | 2023-02-13 09:52:48.152 UTC  HINT: Must be superuser to create this extension.
n8n-postgres-1 | 2023-02-13 09:52:48.152 UTC  STATEMENT: CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
n8n-postgres-1 | 2023-02-13 09:52:48.690 UTC  ERROR: duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1 | 2023-02-13 09:52:48.690 UTC  DETAIL: Key ("workflowId", name)=(26, data_loaded) already exists.
n8n-postgres-1 | 2023-02-13 09:52:48.690 UTC  STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
n8n-postgres-1 | 2023-02-13 09:52:48.975 UTC  ERROR: duplicate key value violates unique constraint "workflow_statistics_pkey"
n8n-postgres-1 | 2023-02-13 09:52:48.975 UTC  DETAIL: Key ("workflowId", name)=(26, data_loaded) already exists.
@jon executions run long and very often don’t finish.
A workaround is to (try to) stop them, click “Workflows” on the left to exit the workflow editor, go back in, then re-execute. This is rather problematic as it slows down the development process.
I think in my case I could easily live without PostgreSQL, as I only have a handful of rather small workflows so far. But I don’t know if there is an easy way to “downgrade”. Sure, I could maybe copy-paste the workflows. It would be nice to have a working how-to for that, or a fixed stack with PostgreSQL. Thanks!
My n8n has hundreds of thousands of executions per day. I believe the cause is what @MutedJam said: n8n logs the ‘error’ whenever an execution writes to an existing row, instead of doing something like ‘increment if exists’.
I can’t tell if my n8n’s performance is affected, but the log file is getting quite crazy lol. It’d be great if you guys could find a solution to this. Thank you for the hard work!
I’m with you here; this upsets me every time I see it, to the point where I now run a dedicated n8n-specific Postgres instance whose logs I can simply throw away.
Unfortunately I haven’t seen any roadmap items that would suggest this is being fixed anytime soon.
That said, n8n’s performance shouldn’t be affected, as these errors are logged by Postgres rather than by n8n itself. Perhaps you could look at just the n8n logs for the time being and consider clearing the Postgres logs more frequently?
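For example (just a sketch, assuming the built-in logging_collector is what writes these files; the settings below are standard Postgres parameters, applied as a superuser), you could have Postgres keep only one log file per weekday and truncate it on each rotation:

-- Rotate the server log daily, reusing one file per weekday, so the duplicate-key
-- noise never accumulates beyond a week's worth of logs:
ALTER SYSTEM SET log_filename = 'postgresql-%a.log';
ALTER SYSTEM SET log_rotation_age = '1d';
ALTER SYSTEM SET log_truncate_on_rotation = on;
SELECT pg_reload_conf();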