Do you have issues with n8n 0.236.0 since the database update?

Describe the problem/error/question

Since we upgraded to v0.236.0 we often have issues with the n8n main instance (the UI part), which returns 504 errors. We’re not sure, but it seems the issues come from the database; we use PostgreSQL. I wanted to downgrade, but the database is now incompatible with older versions.

Are you also seeing issues or a slow interface since upgrading to 0.236.0? For information, we run plenty of executions (around 500) simultaneously, but that was the same before the update and we didn’t notice any issues then.

Information on your n8n setup

  • n8n version: 0.236.0
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker (Debian-based image)
  • Operating system: RedHat

Hi @Kent1 , sorry to hear that you’re running into this!

Quick question for you - can you let me know your upgrade path to 0.236.0? Do you know what version you upgraded from? That might help us out in diagnosing this issue.

Hi @EmeraldHerald ,

Thank you for your reply.
I use this Docker image:

Then we just do these three things in a Dockerfile:

```dockerfile
RUN apt-get update && apt-get install -y ldap-utils krb5-user sasl2-bin libsasl2-2 libsasl2-modules libsasl2-modules-gssapi-mit less
RUN npm install -g n8n@0.236.0
RUN cd /usr/local/lib/node_modules/n8n && npm install mailparser stream fs jsonata pdf-table-extractor
```

Lastly, in the database I have some playbooks that have been in status “new” for one or two hours.

Thanks @Kent1 - and which version were you using before the upgrade? Just want to check :slight_smile:

We were on 0.231.1.

I often get this error message on my main instance:

Once again this was not the case before the update

Thanks for sharing that screenshot, @Kent1 - do you have any other logs you might be able to share, or your debug log? With that information I can get the engineering team’s eyes on this for you.

Currently I don’t see any other interesting information, except that CPU usage is very high (though I have to admit I don’t know whether it was the same before; I didn’t check earlier because I had no issues then).

I see this message that comes up (very) often, and my n8n main container (the worker seems to work fine) crashes after a few minutes:

Thanks so much for that information! I’m going to flag this with our engineers in order to take a deeper look into this, and I’ll be back when I have any updates :+1:

Just before the crash I get this error:

Do you think it’s possible to downgrade? Would I have to recreate the database?

Unfortunately, 0.234 is an irreversible migration - but maybe @krynble can help troubleshoot and bring some ideas to the table in the meantime, especially if this is related to queue mode? :bowing_man:

One thing that might help @krynble would be to check the general Postgres status using SHOW max_connections; (showing how many connections are currently allowed) and SELECT * FROM pg_stat_activity; (showing which connections are currently in use), and posting them here. This should look something like this:
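If the full `pg_stat_activity` dump turns out to be too large to post, a per-state summary is usually enough. This is just a query sketch against standard Postgres views, nothing n8n-specific:

```sql
-- How many connections the server allows
SHOW max_connections;

-- Summarise current connections by state rather than dumping every row
SELECT state, count(*) AS connections
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;
```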

Sorry, the output of the second command is too large to post here.

No worries, and thanks for sharing that! You might also want to try this while waiting for @krynble : postgresql - How to increase the max connections in postgres? - Stack Overflow

Of course, make a backup of your database before you change anything :bowing_man:
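For reference, the setting from that Stack Overflow thread can be raised with `ALTER SYSTEM`. The value below is only illustrative, `max_connections` only takes effect after a server restart, and every extra connection costs memory, so review `shared_buffers`/`work_mem` alongside it:

```sql
-- Requires superuser; the change is persisted to postgresql.auto.conf
ALTER SYSTEM SET max_connections = 1000;
-- Then restart Postgres and verify with:
-- SHOW max_connections;
```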



Once again, thank you very much for your help. This morning I increased max_connections (set to 1000) and shared_buffers too. I’ll let you know whether things improve.

Last question: I sometimes see this message, should I update something in the n8n config to avoid it?

Hi @EmeraldHerald ,

Just to let you know that I increased max_connections in Postgres and it’s a bit better. I still have some issues when there’s a spike of executions, but they don’t seem to come from Postgres anymore.

The error seems to be a memory leak issue.


Good to know @Kent1 - I’ve passed these updates along to our engineering team, and I’ll be back when I have news from them :slight_smile: Sorry for the trouble in the meantime!

Since the upgrade our CPU usage is very high. We have 16 CPUs and 32 GB of RAM, and all CPUs are at 100%:

Hi @Kent1 - I don’t have any answers yet, but you might want to reduce database load by not storing all of your execution data.

You’ll need to enable data pruning manually, but you can find some information on how to do that here:

That might help in the meantime! I’m not sure it’ll be a fix, but could definitely help :slight_smile:
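For anyone finding this later: pruning is controlled by environment variables on the n8n main instance. A minimal sketch for a Docker setup (the image tag and retention value are illustrative; check the docs linked above for your version):

```shell
# EXECUTIONS_DATA_PRUNE turns on automatic deletion of old execution data;
# EXECUTIONS_DATA_MAX_AGE sets the retention window in hours (336 h = 14 days).
docker run -d \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=336 \
  n8nio/n8n:0.236.0
```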
