I upgraded to the latest version of n8n last night and all my workflows disappeared. I have it hosted in a Docker container with persistent storage. I was able to pull the .sqlite file over to my personal PC and my workflows are there, but I don’t know how to get them out of there and/or reconnect that database.
Hi @si1en7, welcome to the community, and I’m sorry this happened to you.
Can you take a look at this doc: CLI commands | n8n Docs and see if you’re able to import the workflows from your old database into your upgraded n8n instance? That might help you fix this up!
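For reference, the export/import commands from that page look roughly like this (run them inside the container; the paths are just placeholders, so adjust them to your setup):

# On an instance that can still see the old database, export everything
n8n export:workflow --backup --output=/home/node/backups/
# Then, on the upgraded instance, import the exported files
n8n import:workflow --separate --input=/home/node/backups/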
Are you running a custom Docker image, by any chance?
Also, can you please tell us what version you were on before and what version you upgraded to?
I am running the standard deploy from the docs, sans Traefik. I cannot remember the exact revision, but I was on 1.2.2 or 1.3.0. I updated to 1.4.0, then to 1.4.1 with no change. database.sqlite is ignored.
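For reference, the volume section of my compose file follows the docs, roughly like this (from memory, so the details may be off):

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:

As far as I can tell the n8n_data volume is still mounted at /home/node/.n8n, which is where database.sqlite lives, so I don’t understand why the upgraded container ignores it.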
The same exact thing is happening to me today. Before upgrading, I set up user management and created an owner account (since I was previously using basic auth). Then I did the upgrade. Once the upgrade was complete, it asked me to set up the user account again for some reason… and when it logged me in, no workflows could be found (and no saved credentials either).
I gave up; I don’t have time to fiddle around with the current deployment. I did write a script to open the database.sqlite file and create a JSON dump. Here is the Python code to do so.
The code opens the SQLite DB, gathers all the workflows from the workflow_entity table, creates a dictionary for each, then dumps each dictionary as a JSON string to a file named [the name of your workflow].json.
From there you can import each file directly into the new n8n instance.
I went ahead and re-deployed n8n with Postgres and will do regular backups to ensure this doesn’t happen again.
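For the backups, something as simple as a scheduled pg_dump should do; a sketch (the container, user, and database names here are just examples from my setup):

# Nightly dump of the n8n database to a dated SQL file
docker exec n8n-postgres pg_dump -U n8n n8n > n8n_backup_$(date +%F).sql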
import json
import sqlite3

# Connect to the copied database
conn = sqlite3.connect("database.sqlite")

# Create a cursor
c = conn.cursor()

# Query all workflows
c.execute("SELECT * FROM workflow_entity")

for single in c.fetchall():
    # Build a dict in the shape n8n expects for an imported workflow.
    # The indexes below match the column order in my database
    # (0 = id, 1 = name, 3 = nodes, 4 = connections, 9 = pinData,
    # 10 = versionId); if yours differs, check it with
    # "PRAGMA table_info(workflow_entity)" first.
    workflow = {
        "name": single[1],
        "nodes": json.loads(single[3]),
        "pinData": json.loads(single[9]),
        "connections": json.loads(single[4]),
        "active": False,
        "settings": {},
        "versionId": f"{single[10]}",
        "id": f"{single[0]}",
        "tags": [],
    }

    # Convert the dict to a JSON string
    workflow_json = json.dumps(workflow)

    # Write the JSON string to [the name of your workflow].json
    with open(f"{single[1]}.json", "w") as f:
        f.write(workflow_json)

conn.close()
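To get the workflows back in, run the script in the same directory as the copied database.sqlite (I saved it as dump_workflows.py, but the name is up to you), then import each .json through the editor UI, or in bulk with the CLI, something like:

python dump_workflows.py
n8n import:workflow --separate --input=/path/to/the/json/files/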
This looks very similar to my issue (Problem when upgrading the base image (with Python)). In my case, I am using a custom Docker image to add Python to my n8n instance, but every time I build a new image, I end up with a completely “fresh” n8n instance. Everything is wiped out.
@Jon - what kind of details would you like to know? It is a basic install on a VPS using Docker. We followed the docker-compose install guide from the n8n docs (and enabled non-root user access, as the guide suggests).
@autom8 The exact config would be a handy starting point, but it would also help to know if you are using something like Portainer for your compose file, which is known to do some odd things.