Performance problems with the 1.105.x and 1.106.0 releases

Hello,

I would like to report a performance issue I’ve encountered with n8n version 1.105.2: during a demanding workflow, this release seems to “saturate” the instance. I set up a monitoring tool and observed numerous timeouts: [n8n] [:red_circle: Down] timeout of 48000ms exceeded.

To confirm that the issue was related to this release, I downgraded to version 1.104.2 while keeping everything else identical on my server, and ran the same workflow. I was able to confirm that the problem was indeed caused by version 1.105.2. The same issue occurs with version 1.105.1.
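For anyone who wants to A/B-test releases the same way, pinning the image tag in docker-compose is enough to switch versions (a minimal sketch assuming the stock n8n image; if you build from your own Dockerfile, the tag goes in the FROM line instead). Note that n8n applies database schema migrations on upgrade, so rolling the image back does not roll the database back.

```yaml
services:
  n8n:
    # Pin an explicit tag instead of "latest" so upgrades are deliberate
    # and a rollback is a one-line change (illustrative sketch).
    image: docker.n8n.io/n8nio/n8n:1.104.2
```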

I thought it would be useful to report this for those who help maintain this fantastic tool :wink:

Best regards to everyone

Information on your n8n setup

  • n8n version: 1.105.2 and 1.104.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu

Hello, I’ve just tested the 1.106.0 release and I can confirm it has the same problem as the 1.105.x versions: timeouts and, it seems, difficulty writing to disk.

In the end, the same workflow takes 6m 24.547s on 1.106.0 versus 2m 39.617s on 1.104.2!
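For scale, the two timings above work out to roughly a 2.4x slowdown; a quick check (durations copied from the post):

```shell
# Sanity check of the reported slowdown.
# 1.106.0: 6m 24.547s, 1.104.2: 2m 39.617s
awk 'BEGIN {
  v1106 = 6*60 + 24.547   # seconds on 1.106.0
  v1104 = 2*60 + 39.617   # seconds on 1.104.2
  printf "%.2f\n", v1106 / v1104
}'
# prints 2.41
```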

I don’t know which parameter changed, but I assure you it makes a mess!
I can try to give some feedback to the dev team if it helps (even if I am not an expert like you are :wink:)

Here is the n8n part of my docker-compose:

n8n:
  build:
    context: ./n8n
    dockerfile: Dockerfile
  container_name: n8n
  restart: always
  environment:
    - N8N_HOST=xxxxxxx
    - N8N_OAUTH_CALLBACK_URL=xxxxxx
    - N8N_PORT=5678
    - N8N_PROTOCOL=https
    - NODE_ENV=production
    - WEBHOOK_URL=xxxxx
    - GENERIC_TIMEZONE=Europe/Paris
    - N8N_RUNNERS_ENABLED=true
    - N8N_METRICS=true
    - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
    - N8N_ENCRYPTION_KEY=xxxxxx
    - NODE_OPTIONS=--max-old-space-size=8192
    - N8N_PAYLOAD_SIZE_MAX=524288000
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_PORT=5432
    - DB_POSTGRESDB_DATABASE=n8n
    - DB_POSTGRESDB_USER=xxxxxx
    - DB_POSTGRESDB_PASSWORD=xxxxxx
  volumes:
    - n8n_data:/home/node/.n8n
    - ./local-files:/files
  networks:
    - main_network
  depends_on:
    postgres:
      condition: service_healthy

It works like a charm in versions up to 1.104…

Best regards

Since no one has replied, I take it I’m the only one affected :wink:
But if I could show the problem to a dev team member, that would help me find a solution.
Thanks a lot

Hello @zlebandit.
I am facing the same issue.

Same here, and I can’t roll back because of DB schema changes :confused:

(rollback does work, but the editor won’t work anymore)

Hello @Jannispkz, did you find a way to fix that?

For now, I can only work with the 1.104.x releases. Everything above it generates crashes; the Filter and Edit Fields nodes seem to take over all the system’s resources.

One more detail: it seems to concern nodes where “Always Output Data” is checked, or nodes that have to reference a node that is not immediately upstream.

If a dev has a little bit of time, I can give access to my resources so the issue can be reproduced.

Best regards

I switched back to 1.104.2, but since the editor broke, I booted up another Docker container running 1.105.3 against the same Postgres DB, so I can access the editor through that until this is fixed.
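For reference, the side-by-side setup described here can be sketched as a second service pointing at the same Postgres (service name and port mapping are illustrative; running two n8n versions against one schema is not officially supported, so treat this as a stopgap):

```yaml
  n8n-editor:
    image: docker.n8n.io/n8nio/n8n:1.105.3
    ports:
      - "5679:5678"   # editor reachable on a second port
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
```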

Thank you for bringing this to our attention. Could you share some more details about the workflows you have? Ideally share it (with sensitive data removed) or at least tell which nodes you are using.

Hello, I got this message: “An error occurred: Body is limited to 32000 characters; you entered 113190.”

The JSON is a bit too big :wink:

How can I send it to you ?

You could unpin any data you have in the workflow first, or manually remove the pinned data before sharing it.

I checked, everything is unpinned, but it still doesn’t fit :wink:

Sorry

I made a WeTransfer: My Workflow link

I’m not 100% sure I pointed to the right nodes.
Also, the database in Baserow has 120 items. But I tested with just 4 items, even with just one, and it’s the same problem: the server seems to be saturated.
I also tested without the loop, with just one item, to see whether the loop was the problem. It is not.
With one or 4 items, the workflow will finish. With 120 items, the “Scap url” node, which calls a subworkflow, stops because of a timeout.

If necessary, I can give you access to my n8n instance. Just tell me: I will install the latest release and give you my ID and password.

Best regards

Hi all, can somebody experiencing this issue provide a minimal workflow to reproduce it? It does not need to contain data; I’m mostly interested in the flow and the nodes included.

Hi @zlebandit,

Is this workflow slow in a production run or when testing in the UI? Do you happen to have a screenshot of the execution log?

Hi @Jon, it happens in both: production runs and UI testing. In production mode multiple timeouts occur, just as in UI mode. Here is a screenshot of my Telegram account, which receives messages from Uptime Kuma.

One more detail: in the UI, clicking to stop the workflow freezes everything. You have to close n8n, and when you relaunch it, the workflow has not stopped.

Hi @mk_n8n, I tried, but I can’t reproduce it with a small workflow.


I encountered the same issue. Image 1 shows the time spent running the switch node in version 1.105.2, while image 2 shows the time spent after rolling back to version 1.104.2. Clearly, the older version takes less time.


Hi all, just to say that 1.107.0 unfortunately does not solve the problem at all ;(

I will downgrade my instance to 1.104.2. Strange, very strange. I hope a solution will be found.

Best regards

Hello, same here.

After upgrading last Friday to [email protected], everything is a problem: I cannot access the Workflows console, and many workflows are not working.

Every newer version has the same problem.

I never had problems before.

Sadly, because I am paying for Cloud Pro 50k, I cannot select an older version (if I were self-hosted with Docker, I could install any older version), and since Friday I have had problems in my corporate n8n cloud, with no solution.

Hi everyone,
I run n8n in queue mode inside a Docker container, with PostgreSQL in a separate Docker container.

There are around 80 active workflows running in n8n.

Last night I upgraded from 1.104.1 to 1.106.3. My main workload starts around 9 AM, and I immediately noticed a huge drop in performance, so much so that n8n completely stopped connecting to PostgreSQL.

I rolled back to 1.104.1 and then found this thread.

Here’s an example of the errors I was getting:

error Error: timeout exceeded when trying to connect
at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected][email protected]/node_modules/pg-pool/index.js:45:11
at runNextTicks (node:internal/process/task_queues:65:5)
at listOnTimeout (node:internal/timers:549:9)
at processTimers (node:internal/timers:523:7)
at PostgresDriver.obtainMasterConnection (…)
at PostgresQueryRunner.query (…)
at DataSource.query (…)
at WorkflowStatisticsRepository.upsertWorkflowStatistics (…)
at WorkflowStatisticsService.workflowExecutionCompleted (…)
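For what it’s worth, that “timeout exceeded when trying to connect” comes from pg-pool waiting on a free Postgres connection, and n8n exposes environment variables for tuning the pool. A hedged sketch (variable names are from the n8n docs; the values here are illustrative for debugging, not a recommended fix for the regression itself):

```yaml
environment:
  # Larger pool: exhaustion under load surfaces as pg-pool connect timeouts.
  - DB_POSTGRESDB_POOL_SIZE=10
  # Milliseconds to wait for a connection before giving up.
  - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
```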