Nodes take up to 3-4 seconds to open

I built a Telegram bot, which involves several tens of nodes.
A node might take 3-4 seconds to open, and I have never had such issues before. Does it mean I have to change the VPS configuration?

The same happens when inserting JSON data and even when closing windows.

My config is Ubuntu 20.04, 1 CPU, 1 GB of RAM.

Not sure if that is really the cause, but 1 CPU and 1 GB RAM isn’t a lot, so yes, this might be it.
Do you happen to use the default database (SQLite)? This might also be the cause. I noticed a nice improvement when switching to Postgres.

Thank you. Yeah, I use SQLite. Where can I read about moving to Postgres?

Hi @artildo, it looks like you are using the Notion node quite a bit in your workflow. This is a node that relies heavily on expressions internally which might be what slows things down here.

This is on our internal bug tracker as N8N-2628, but I don’t have an ETA for a resolution right now I am afraid. We’ll make sure to keep you updated.


The docs for alternate databases: Docker - n8n Documentation
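For anyone landing here later: the switch itself is done through environment variables. A minimal sketch assuming Docker; the host name, database name, user, and password below are placeholders, not values from this thread:

```shell
# Hypothetical example: run n8n against an existing Postgres instance.
# All hostnames, names, and passwords below are placeholders.
docker run -it --rm \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST=postgres.example.internal \
  -e DB_POSTGRESDB_PORT=5432 \
  -e DB_POSTGRESDB_DATABASE=n8n \
  -e DB_POSTGRESDB_USER=n8n \
  -e DB_POSTGRESDB_PASSWORD=changeme \
  -p 5678:5678 \
  n8nio/n8n
```

The same variables work for docker-compose or a bare-metal install; see the docs page linked above for the full list.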

You will need to export the database or the workflows and import them into the new database so you don’t lose your work.
I did this using GitHub, moving all my workflows there and then moving them back into the new environment.

@MutedJam Changing the database won’t help with this then. Would an increase in specs help @artildo out for now?

Tbh, I don’t think so. But this might be one for @mutdmour to confirm.

@MutedJam Yeah, it might be the case with Notion. Thank you. Hope it gets fixed soon.
@BramKn Thank you for the help


Unfortunately, it’s a UI issue… So increasing specs won’t help.


Would be good to see a guide on the steps you took. I tried it once and it wasn’t as clear as I had hoped, so I went back to the default. I’d much prefer a more robust database, as I think the default can’t handle the amount of stuff coming in and out of it any more.

@RedPacketSec Assuming Docker, this is what I did earlier this week to move from SQLite to Postgres. There may be better ways to do it, but this worked for me. The only catch was that I had to create my user again for user management, so I suspect that if you were using multiple users you might run into an issue.

# Access the container
cd /home/node/.n8n
su node
mkdir temp && cd temp
n8n export:workflow --output=wf.json
n8n export:credentials --output=cred.json
# Stop the container

# Update env variables to use new database
# Start the container

# Access Container again
cd /home/node/.n8n/temp
su node
n8n import:workflow --input=wf.json
n8n import:credentials --input=cred.json
cd ../ && rm -rf temp
# Restart container

Nice, will give that a go.

n8n export:workflow --all --output=wf.json

Same with credentials… Missed that when typing it up :slight_smile:
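Putting the corrected commands together, the whole migration can also be driven from the host with `docker exec`. A rough sketch; the container name `n8n` and the paths are assumptions, not values from this thread:

```shell
# Sketch only: assumes the n8n container is named "n8n" and uses default paths.
# 1) Export everything while the container still points at SQLite:
docker exec -u node n8n n8n export:workflow --all --output=/home/node/.n8n/wf.json
docker exec -u node n8n n8n export:credentials --all --output=/home/node/.n8n/cred.json
# 2) Stop the container and recreate it with the Postgres env variables set,
#    keeping the same volume so the export files survive:
docker stop n8n
# ...recreate the container with DB_TYPE=postgresdb etc...
# 3) Import into the new database, then clean up the export files:
docker exec -u node n8n n8n import:workflow --input=/home/node/.n8n/wf.json
docker exec -u node n8n n8n import:credentials --input=/home/node/.n8n/cred.json
```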

Credentials could not be decrypted. The likely reason is that a different “encryptionKey” was used to encrypt the data.

Other than this, the data did copy over.
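For reference, that decryption error usually means the new instance generated a fresh encryption key instead of reusing the old one. A sketch of carrying the key over, assuming default paths:

```shell
# On the OLD instance: the key is kept in the n8n config file (default path shown).
cat /home/node/.n8n/config            # note the "encryptionKey" value
# On the NEW instance: set the same key via env var BEFORE importing credentials.
# (Replace the placeholder with the key copied from the old instance.)
export N8N_ENCRYPTION_KEY='<key-from-old-instance>'
```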

EDIT: folder mapping was buggered for some reason, so I fixed that and now it’s all good.