Getting "Unexpected end of JSON input" on list of workflows and executions after update

Describe the issue/error/question

Hello, today after updating to the latest version 0.212.0, I am not able to use my n8n instance. Actually, the process runs correctly, but I am not able to view any workflows or executions; I get this error: Unexpected end of JSON input (screenshot here: Dropbox Capture).
No workflow is running.

Information on your n8n setup

Running on my own DigitalOcean instance with PM2. There is no error in the console either; here is the output: Dropbox Capture

Also, I guess I should mention that during the upgrade, while migrations were running, the disk ran out of space. I am not sure what the migrations did, but they made my SQLite DB huge (from 8 GB to over 15 GB before running out of space), so I had to increase the disk size :confused: Maybe the migrations have not finished? Can I run them manually?

I am worried I lost all my workflows, which I have used for basically everything and spent countless hours setting up :smiley: :frowning:

Hopefully not,
Thanks a lot in advance

I’m getting a similar issue with a POST request to Real Debris.

It does what it’s supposed to do but also triggers the error workflow.

Hi @jsifalda, welcome to the community! I am sorry to hear you’re having trouble.

I am not super familiar with PM2 but SIGINT (shown on your screenshot) would usually be the signal sent when pressing Ctrl+C. Is it possible PM2 is shutting down the app for some reason?

Perhaps you can try another deployment method, just to be sure. You can, for example, point Docker to the directory your PM2 instance of n8n is using; our documentation has an example command: Docker - n8n Documentation.

You can also test specific versions of n8n rather easily by specifying them as part of the command, for example `docker run -it --rm --name n8n -p 5678:5678 -v ~/.n8n:/home/node/.n8n n8nio/n8n:0.213.0`

This command uses the `-v ~/.n8n:/home/node/.n8n` flag, which makes your local ~/.n8n directory available to the container. This should be the default data directory for n8n and would also include the default SQLite database.

@dovahkiin93 while you might be getting the same error message as @jsifalda, it seems your issue is rather different and related to a specific HTTP Request. Perhaps you can open a new thread sharing a workflow with which your problem can be reproduced?

@jsifalda Can you please try setting the `DB_SQLITE_VACUUM_ON_STARTUP` env variable as defined here.
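For reference, this is roughly how that variable could be set for a PM2-managed instance (a sketch; the process name `n8n` is an assumption, adjust to your setup):

```shell
# Tell n8n to VACUUM the SQLite database on startup,
# which reclaims disk space freed by deleted rows
export DB_SQLITE_VACUUM_ON_STARTUP=true

# Restart the process so PM2 picks up the new environment
pm2 restart n8n --update-env
```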

The issue here is that some kinds of schema changes in SQLite require copying an entire table. It's likely that the executions_entity table on your instance is really big, and a migration in 0.212 had to copy that entire table to make schema changes. After the migration is done, the disk space is supposed to be released, but since the DB isn't vacuumed, the space isn't given back to the OS.
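If you'd rather reclaim the space once manually instead of on every startup, a one-off VACUUM with the `sqlite3` CLI should do the same thing (a sketch; stop n8n first so the DB isn't locked, and note that `~/.n8n/database.sqlite` is the default path, which may differ on your setup):

```shell
# Stop n8n so nothing holds the database open
pm2 stop n8n

# Rewrite the database file compactly, releasing unused pages to the OS.
# Note: VACUUM temporarily needs extra free disk space for the rebuilt copy.
sqlite3 ~/.n8n/database.sqlite "VACUUM;"

pm2 start n8n
```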

Because of this SQLite limitation, I highly recommend switching over to PostgreSQL if you can, to avoid issues like this in the future.
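For anyone considering that switch, n8n's Postgres connection is configured through environment variables along these lines (the values below are placeholders, not a working setup; see the docs for the full list):

```shell
# Point n8n at a PostgreSQL database instead of the default SQLite file
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=localhost
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=changeme
```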

@MutedJam & @netroy thanks guys for the help. However, the issue was the migration itself: starting from an 8 GB SQLite file, it wanted to create roughly twice that, and the disk ran out of space. I was not able to continue the migration after resizing the disk, and the SQLite file was corrupted. I managed to “fix” the file with an external tool (enough for a DB reader, not for n8n) and export the workflows one by one (a very long and boring task, honestly :D). Once I had done that, I was able to run a completely new n8n instance and manually import the workflows one by one again. It made a mess of the workflow IDs, but most of the stuff seems to be working. Vacuuming and cleaning executions is good config for next time, but it was no help in my case with the interrupted migration.
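For future readers facing a similar rebuild: n8n ships CLI commands for bulk workflow export and import, which is far less painful than moving them one by one through the UI (a sketch, assuming you can run the `n8n` binary on the instance itself; paths are examples):

```shell
# Export every workflow, one JSON file each, into a backup directory
n8n export:workflow --all --separate --output=./workflow-backup/

# ...later, on a fresh instance, import them all back in one go
n8n import:workflow --separate --input=./workflow-backup/
```

Keeping a periodic export like this around also makes recovering from a corrupted database much easier.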