[UPDATE] Crashing/freezing after updating to v1

Describe the problem/error/question

  1. The n8n database was taking up about 70 GB of space from execution data.
  2. I decided to update the docker-compose file with added env variables to prune executions after X days etc. (roughly as sketched below).
  3. Before relaunching I did a docker pull (whoops) and it updated to the very latest version of n8n, which from what I read requires a database migration. Now my 70 GB of useless data is filling up my entire server drive while it tries to run the migration queries (50 GB free space).
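
For reference, the kind of pruning settings I mean look roughly like this in the compose environment section (a minimal sketch; the values are just examples):

    environment:
      - EXECUTIONS_DATA_PRUNE=true        # enable automatic pruning of old execution data
      - EXECUTIONS_DATA_MAX_AGE=168       # keep executions for 168 hours (7 days)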

What is the error message (if any)?

execution time: 1032

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 57550, 10

execution time: 1632

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 57580, 10

execution time: 1665

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 57590, 10

execution time: 1087

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 58270, 10

execution time: 1480

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 58790, 10

execution time: 1260

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 58800, 10

execution time: 1716

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 58810, 10

execution time: 1149

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 59610, 10

execution time: 2548

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 59720, 10

execution time: 2438

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 59730, 10

execution time: 2769

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 59740, 10

execution time: 1510

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 59790, 10

execution time: 1058

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 60210, 10

execution time: 1374

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 60220, 10

execution time: 2143

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 60300, 10

execution time: 1420

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 60510, 10

execution time: 1172

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 61410, 10

execution time: 1289

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 61420, 10

execution time: 1210

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 61560, 10

execution time: 2571

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 61570, 10

execution time: 2404

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 61580, 10

execution time: 1305

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 62300, 10

execution time: 2392

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 62410, 10

execution time: 2298

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 62420, 10

execution time: 2082

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 62430, 10

execution time: 1732

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 63320, 10

execution time: 1010

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 63330, 10

execution time: 1054

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 63350, 10

execution time: 1144

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 63410, 10

execution time: 1187

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64720, 10

execution time: 1731

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64730, 10

execution time: 1097

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64750, 10

execution time: 1991

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64760, 10

execution time: 2087

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64770, 10

execution time: 2023

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 64900, 10

execution time: 1408

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 65280, 10

execution time: 1581

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 66130, 10

execution time: 1886

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 66150, 10

execution time: 1004

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 66230, 10

execution time: 1200

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 66660, 10

execution time: 1365

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 67170, 10

execution time: 2356

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 67180, 10

execution time: 1520

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 67860, 10

execution time: 1295

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 68030, 10

execution time: 2060

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 68040, 10

execution time: 1817

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 68050, 10

execution time: 2339

Last session crashed

n8n ready on 0.0.0.0, port 5678

query is slow: SELECT * FROM "sqlite_master" WHERE "type" = 'table' AND "name" = 'migrations'

execution time: 1615

Migrations in progress, please do NOT stop the process.

query is slow: INSERT INTO "TMP_execution_entity"  SELECT * FROM "execution_entity" LIMIT 12060, 10

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

So my question is: how can I access the SQLite DB file and drop the execution data table without losing all my workflows/credentials/users etc.?

I can access the container through an sh terminal, but I'm not familiar with what I need to install into the container to open the database with SQLite.

I have read the other thread where someone solved the same problem, but it didn't get into the finer details; maybe he wasn't a beginner.

Also, I can't move the DB file out and work with it elsewhere, because the VM just doesn't have that 70 GB to play with.

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I'm not sure if you are aware, but the n8n node now allows you to query executions and delete executions. You could create a workflow that queries your old executions and then sends them to be deleted automatically. I may do something like that soon.
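
If you would rather script it than build a workflow, a rough equivalent using the public REST API (this assumes the API is enabled on your instance and you have created an API key; it is not the n8n node itself) would look something like:

    # list execution ids (requires jq for the id extraction)
    curl -s -H "X-N8N-API-KEY: $N8N_API_KEY" \
      "http://localhost:5678/api/v1/executions?limit=250" | jq -r '.data[].id'

    # then, for each id returned:
    curl -s -X DELETE -H "X-N8N-API-KEY: $N8N_API_KEY" \
      "http://localhost:5678/api/v1/executions/<id>"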


So when I try to get into n8n it just returns a string like "n8n is starting up, please wait", so I can't actually get into n8n right now. Which is why I need a way to get to the database and run a delete query on the whole table and teach it a lesson in tough love.

Okay, made some progress but now I'm stuck:

  1. used Portainer to connect with /bin/sh as root
  2. apk add sqlite
  3. moved to the database directory
  4. ran sqlite3, then .open database.sqlite
  5. ran .tables to list the tables
  6. tried to run .backup to back up the DB, but it also says locked (roughly sketched as commands just below)
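
Roughly, what those steps look like as commands (paths assume the default n8n Docker image):

    apk add sqlite                     # install the sqlite3 CLI in the Alpine-based container
    cd /home/node/.n8n                 # default data directory inside the container
    sqlite3 database.sqlite            # open the database
    sqlite> .tables                    # list tables
    sqlite> .backup backup.sqlite      # attempt a backup (this is where it reports the lock)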

When I get to this point it just says that the database is locked.

How can I stop the migration process? Otherwise I'll have to try to pull the database out from the container and bring it back? (100 GB now.)
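
For reference, pulling the file out of a stopped container is normally just a docker cp (the container name and target path here are assumptions; adjust to your setup):

    docker stop n8n                    # stop n8n first so the file isn't being written to
    docker cp n8n:/home/node/.n8n/database.sqlite /mnt/bigdisk/database.sqlite
    # /mnt/bigdisk is a placeholder for wherever there is enough free space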

@Jon help please!

Ouch, sorry to hear that! Wish I could be more help with that!


So it's always locked. I've copied the VM as a backup, and also so I can move the VM to a machine with more HDD space. I plan to do it like this, which is in my opinion the unsophisticated caveman method:

  1. download the OVF and VMDK files from the server
  2. put them on another machine with more HDD space (180 GB to 300 GB)
  3. spin up the VM and boot up the Docker container
  4. copy the SQLite database out of the container onto the local machine, edit the database there, then replace the database and hope that the migration flies through with ease

Otherwise, if that doesn't work, I'll just spin up the VM, give it more space and let it complete the migration overnight, then boot up, clean the execution logs, export the container as a tarball and bring it back to the old machine again.

There must be a simple way to just stop the migration without stopping the container, so that I can delete the contents of the execution table.

@jan what do you think?

Hey @Josh-Ghazi,

You don't need to tag us, we see the messages 🙂

So with SQLite, if n8n is running you shouldn't try to connect to the database as well; only one thing should access it at a time to prevent any issues.

There is no way to stop the migration, and to be honest I am not sure what would happen if you did just stop n8n while it is upgrading, so I would leave it to finish rather than risk doing any damage.

If you have a backup from before you started, though, you could try stopping the container, using sqlite from the host OS to connect to the database in the volume, doing a clean-up, and then starting it up again to see what happens.
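
As a minimal sketch of that flow (assuming the volume is mounted at ~/.n8n on the host and the table layout shown in your logs):

    docker-compose down                          # stop n8n so nothing else has the file open
    sqlite3 ~/.n8n/database.sqlite \
      "DELETE FROM execution_entity; VACUUM;"    # clear old execution data, then reclaim the space
    docker-compose up -d                         # start n8n again

    # note: VACUUM needs roughly as much free space as the database itself while it runs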

Hi Jon, thank you so much for your response, and I'm sorry for bothering you guys.

Now, the folder that gets mounted is called .n8n and there is a database in there, but it's basically empty, only about 164 KB. When I run the docker-compose file it mounts the folder as /home/node/.n8n, but there are other files in there, including the 100 GB SQLite database. I'm trying to see if there is any way of taking this database out.

Right now I've given the new machine an extra 150 GB for this VM; I hope that will be enough to complete the migration. Fingers crossed!!

Hey @Josh-Ghazi,

Can you show me the 100 GB file or how you have the mount configured in Portainer? It sounds like your mount is not correct and n8n isn't using the sqlite database you found in that .n8n directory.


This is the docker-compose file that I work off. I did try to make one in Portainer, but that would not load all the saved database data.

So for some reason, when I run it from docker-compose it loads up with my fat workflow database.

This is in the container:

This shows Docker not taking up that much space (looks normal):

How about downgrading to the most recent version that does not require the migration, letting that run and clearing off the execution logs, and then after that upgrading to the latest version?

Will that work, or will the DB get corrupted and play up?

If you think it might work, could you please let me know which Docker image version would be the ideal one for me to run?
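
Also noting for next time: pinning the image tag in the compose file would stop a bare docker pull from jumping me to the latest version (the tag below is a placeholder, not a recommendation for a specific version):

    image: n8nio/n8n:<pinned-version>   # placeholder; pin the exact version you intend to run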

I suspect the home directory referenced in your compose file isn't the one for the user you are using for your desktop session. I would check under /home/ubantu-dev/.n8n or /root/.n8n to see if there is anything there.

If you downgrade, the migration still won't finish, so when you upgrade again it may throw an error. As you are using ESXi, though, did you take a snapshot before the upgrade / accidental change?
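
If it helps, a quick way to confirm which file n8n is actually using (the container name is an assumption, adjust to yours):

    docker inspect --format '{{ json .Mounts }}' n8n            # shows exactly what is mounted where
    sudo find / -name database.sqlite -size +1G 2>/dev/null     # locate any very large sqlite files on the host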


UPDATE

seems like the migration worked fine!


Looks like it's also starting to delete the execution logs as specified in the env variables.

Now let's see if it can shrink to a nice size where I can just export the Docker container and send it back to where it came from.


Thank you, that makes proper sense. I will have to see if I can just paste the new database in place of that one, and hopefully everything will be back to normal?

In the docker-compose file it's set to ~/.n8n, which when run as another user might be an entirely different place? I have found another .n8n folder, but it seems way too small, the DB file is only like 200 KB.

Okay, so it's the next day now, and it looks like the database is still huge (140 GB).

Is there any way I can cut this down?? I have already put in the env variables for purging the execution logs and it seems to not be doing anything??

OK, so I implemented another env variable found here,

which is DB_SQLITE_VACUUM_ON_STARTUP=true

and it seems to have worked: the number of executions dropped to 20,000 and the disk usage went down to roughly a quarter of what it was.

Hmm, but it didn't work with docker-compose restart; you really need to 'down' it and then 'up' it for it to pick up the new variables?

Now I'm shrinking the whole server down with GParted. Hopefully I can get the VM small enough to send back to the server machine and then resize it again to as large as possible.


Hey @Josh-Ghazi,

Sometimes a container will need a stop and start rather than just a restart for things to kick in, but it sounds like you have everything in place now. It might even be worth moving to something like Postgres in the future.
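
To expand on that: restart keeps the existing container (and its environment), so compose env changes generally need the container to be recreated, something like:

    docker-compose down      # removes the container but keeps the named volume / bind-mounted data
    docker-compose up -d     # recreates it with the updated environment variables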

    • Okay, so I've reduced the image HDD size down to 60 GB
    • reduced the VM hard drive size with the text editor down to 80 GB
    • exported the VM and uploaded it to the origin server machine
    • resized the hard drive to 150 GB and the OS drive to take up 150 GB as well

Everything runs and the editor can now be accessed!

But now I'm running into a few problems which did not happen before the upgrade. While running my PDF-downloading workflow, about halfway through the whole instance would freeze and the editor could not be accessed at all unless I restarted the container.

From the logs it looks like it froze doing HTTP requests to download a PDF file, with an "econnect" error, which is usually an IP block from the website. I get this often, and changing the VPN server usually fixes it, but it has never caused n8n to freeze before?

This is quite strange behaviour. If it's not the workflow causing it, how can I check what the possible problem could be?
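
In case it helps anyone else debugging the same thing, the first things I'm checking are the container logs and resource usage while the workflow runs (container name assumed to be n8n):

    docker logs -f n8n       # follow n8n's own log output while the workflow runs
    docker stats n8n         # watch CPU / memory usage, in case the container is hitting a limit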
