HOW TO: Migrate from SQLite to Postgres

Thought I might share this for whoever needs it, as there is a major lack of info on how to achieve this without having to set everything back up again (everything other than workflows and credentials, that is).

This is especially relevant if you have more than 1 user, so you can keep the workflows and credentials that the users had

This is just going to be a quick guide.

  1. Follow the CLI commands to export your workflows and credentials (example after this list)
    CLI commands | n8n Docs

  2. Get yourself an SQLite browser like DB Browser for SQLite.
    Open your SQLite file with it.

  3. Export the following tables to CSV:
    - credentials_entity
    - folder
    - project
    - project_relation
    - shared_credentials
    - shared_workflow
    - user
    - webhook_entity
    - workflow_entity
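For reference, the export in step 1 looks roughly like this (the file names are just examples, and flags can differ between versions, so check the linked docs):

# step 1: export workflows and credentials to JSON (file names are examples)
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --output=credentials.json

And if you prefer the command line over DB Browser for step 3, a single table can be dumped to CSV with sqlite3 directly (a sketch; repeat per table, or see the loop shared later in this thread):

# step 3: one-table CSV export
sqlite3 -header -csv ./database.sqlite "SELECT * FROM credentials_entity;" > credentials_entity.csv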

You can also export your workflow histories etc. if you like.

  4. Reconfigure your n8n to use Postgres (Supabase is a great option here; for config details click Connect at the top of the database, select Connection string, and select “PSQL” as the type to get the specifics to use with n8n.)
    DB_TYPE=postgresdb
    DB_POSTGRESDB_DATABASE=postgres
    DB_POSTGRESDB_HOST=aws-0-eu-centr.supabase.com
    DB_POSTGRESDB_PORT=6543
    DB_POSTGRESDB_USER=postgres.ydw
    DB_POSTGRESDB_PASSWORD=Z7a#W

  5. Start up n8n and let it create all the database tables

  6. Import the workflows and credentials via the CLI commands (see the sketch after this list)

  7. Import the CSV files into the relevant tables in Postgres. You need to import project before folder (folders reference projects)

  8. NOTE: You will need to delete the shared_workflow and shared_credentials rows that the CLI import created

  9. Delete the default user that n8n created in the user table

  10. In the n8n settings table, set userManagement.isInstanceOwnerSetUp to true

  11. Restart the Docker container and you should be able to log in with your original user/pass and have all your workflows

  12. Enter your license key
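To make steps 6 and 10 concrete, here is a rough sketch (the JSON file names and the $DATABASE_URL connection string are placeholders, and the SQL is the same one shared further down this thread):

# step 6: import the JSON exports from step 1 into the new Postgres-backed instance
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json

# step 10: mark the instance owner as already set up ($DATABASE_URL is a placeholder)
psql "$DATABASE_URL" -c "UPDATE settings SET value = 'true' WHERE key = 'userManagement.isInstanceOwnerSetUp';"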

Hope this helps someone and saves them the few hours I lost today after having my SQLite database crash


Hi, you forgot to remove your DB credentials

@Shadoxity

Reg,
J.

Thanks for the note, those are just sample placeholders though, as I figured the Supabase user format may be a bit misleading to someone who isn't familiar with it


Thanks for the hints.

I wasn’t able to export to csv and import easily because I kept getting weird errors with the software I was using.

I found one called DBeaver.

I simply connected to the SQLite database and the Postgres one, exported from one into the other, and it worked.

I had to figure out order though because of the relationships and stuff.

But it seems to work.

Thanks.

Ok. This is great.

Now.. how do you persist community nodes?

What do you mean? It sounds like you are not using a volume for your Docker container, so when it restarts it doesn't keep anything you have installed, like community nodes.

So fix that up and you should be good
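For the Docker setup, that roughly means mounting a named volume at /home/node/.n8n (a minimal sketch using the official image; adapt to your compose file):

docker volume create n8n_data
# the named volume keeps /home/node/.n8n (community nodes, encryption key, etc.) across restarts
docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n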


Thanks for this. I’m facing the same relationship challenges you speak of. Would you be able to post the order in which tables must be migrated to work around this? Thanks!

I don’t remember as I didn’t note it down.

But I believe if you export/import and it gives you an error, you just go to the next and then come back.

But I don't have notes on that since it's been so long.


I’ll be rocking this process within the next week or so and can let you know what I come up with, if you’re still wondering.

That said, if you got the order figured out already and are willing to share I’d be elated to hear

Much light your way,

Auggy


Just jumping in with my own experience migrating from SQLite to Postgres: ultimately, the easiest method I found was to spin up a new Docker stack with n8n + Postgres, then connect to both the old SQLite database and the new Postgres database using DBeaver (you'll have to figure that part out on your own depending on whether you're using Supabase, a Postgres Docker container, etc. - for me it was as simple as temporarily opening the Postgres ports up on my host and assigning the Postgres container ports in my docker-compose).
With both databases open, you can use DBeaver to export directly from SQLite to Postgres. Trying to use the CLI was nothing but a headache. You can migrate everything over right in DBeaver and once you figure out the right order it’s actually relatively painless. Here’s the order that worked for me:

  1. credentials_entity
  2. workflow_entity
  3. folder
  4. user
  5. project
  6. project_relation
  7. tag_entity
  8. workflows_tags
  9. webhook_entity
  10. shared_credentials
  11. shared_workflow

You can specify the export order in DBeaver so you can theoretically just migrate everything in one go. Once the core tables are migrated, delete the default user as mentioned in the original post here, then log in to n8n and make sure everything is as expected. Then you can go back to DBeaver and migrate over whatever else you decide you want (workflow history, statistics, etc).

If you're using data tables, you'll need to export the data_table and data_table_column tables as well as the data tables themselves (data_table_user_xxx). I found the easiest thing to do is to log in to the new n8n instance and create a few dummy data tables, then edit those in DBeaver to match your needed table names/structure so that you can export straight to those tables. If you use DBeaver to create those tables, you'll need to be careful with permissions otherwise the tables could end up being unusable.
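If you do end up creating those data tables by hand, a rough way to sort out the permissions is to make sure the n8n database user owns them (the table and role names below are made-up examples, and $DATABASE_URL is a placeholder):

# hypothetical table/role names - substitute your own
psql "$DATABASE_URL" -c 'ALTER TABLE "data_table_user_abc123" OWNER TO n8n_user;'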

Don’t forget the last step mentioned above - re-enter your license key! I spent 20 minutes banging my head against the wall wondering why my folders/projects weren’t showing up only to realize I hadn’t put in my license key so the feature wasn’t activated :person_facepalming:

Hope this helps someone!


I ran into the same challenge and ended up writing a script that fully migrates SQLite → Postgres, including credentials and the new Data Tables. Workflow execution history isn’t included by default, but it can easily be added by extending the table list.

Here it is: Migrate n8n from SQLite to PostgreSQL · GitHub


There's a new n8n CLI command, n8n export:entities, that should remove the need for brittle DB-specific solutions!

# n8n export:entities --help
USAGE
  $ n8n export:entities

FLAGS
  -h, --help                            Show CLI help
  --outputDir                           Output directory path
  --includeExecutionHistoryDataTables   Include execution history data tables; these are excluded by default as they can be very large [default: false]
  --keyFile                             Optional path to a file containing a custom encryption key

DESCRIPTION
  Export database entities to JSON files

EXAMPLES
  $ n8n export:entities
  $ n8n export:entities --outputDir=./exports
  $ n8n export:entities --outputDir=/path/to/backup
  $ n8n export:entities --includeExecutionHistoryDataTables=true
  $ n8n export:entities --keyFile=/path/to/key.txt
  $ n8n export:entities --outputDir=./exports --keyFile=/path/to/key.txt
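On the new Postgres-backed instance the matching restore is n8n import:entities (mentioned further down in this thread). The flag name below is only assumed by symmetry with the export command, so run --help first:

n8n import:entities --help
# --inputDir assumed by symmetry with --outputDir; verify against --help on your version
n8n import:entities --inputDir=./exports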


Hello everyone, I get the following error when running docker compose again after modifying my setup to work with Postgres. I followed the same order for exporting the tables as @Josh did:

n8n-1 | Migration “AddWebhookId1611144599516” failed, error: column “webhookId” of relation “webhook_entity” already exists
n8n-1 | There was an error running database migrations
n8n-1 | column “webhookId” of relation “webhook_entity” already exists
n8n-1 exited with code 1

Do you have any clue what can be wrong?

Thanks for your support!

Sounds like the migration table has a line wanting to add webhookId to the table.

If you started n8n before moving all the db stuff over, it probably put the standard setup stuff in there to create the relevant tables.

If you don't have any webhooks you can delete the webhookId column from that table, or you can look in the migrations table and remove that specific migration so it doesn't try to add webhookId
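For the first option, a rough sketch against the new Postgres database (the $DATABASE_URL connection string is a placeholder; only do this if you really have no webhook data to keep):

# drop the half-migrated column so the AddWebhookId1611144599516 migration can recreate it on the next start
psql "$DATABASE_URL" -c 'ALTER TABLE webhook_entity DROP COLUMN "webhookId";'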


I think I’ve been able to solve the issue on migration. Now I’m able to start the n8n instance and as soon as it has to render data the log shows lots of this error:

ValidationError: The ‘X-Forwarded-For’ header is set but the Express ‘trust proxy’ setting is false (default). This could indicate a misconfiguration which would prevent express-rate-limit from accurately identifying users. See Page Redirection for more information.

Any help on this will be very welcome :wink:

@David_A_T.C This is a separate topic. You can read about it here.

TL;DR: Just set `N8N_PROXY_HOPS=1` and you should be fine.

I’m sharing my 10+ hours of experience here for anyone else who stumbles upon this migration challenge:

Option 1: Using the official export/import CLI commands

The official documentation points to export/import CLI commands.

Unfortunately, these were not present in my older version of n8n. If you run the command and get export:entities is not a valid command, your n8n version needs to be updated.

:warning: WARNING: By updating n8n itself and stopping & redeploying containers, you risk losing data that has not yet been written to disk. SQLite uses journaling with write-ahead logging (WAL), so after spinning the container back up you may find recent data is gone (speaking from experience here).
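One way to reduce that risk is to stop n8n first and then checkpoint the WAL into the main database file before copying or exporting anything (a sketch, assuming the default database file name):

# flush the WAL contents into database.sqlite before copying it anywhere
sqlite3 ./database.sqlite "PRAGMA wal_checkpoint(TRUNCATE);"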

On top of that, my SQLite schema was corrupted. When importing with n8n import:entities, I ran into a duplicate key value violates unique constraint error. It seems the n8n import script was either importing data incorrectly or had an improperly configured compound index.

Option 2: Migrating with Sequel

While searching, I found this article on using Sequel and gave it a shot. I spun up a new instance and tried the command:

sequel -C sqlite://<sqlite_filename> postgresql://<user>:<pass>@<host>:<port>/<dbname>

This failed with a schema creation error: Sequel::DatabaseError: PG::DatatypeMismatch: foreign key constraint cannot be implemented.

This seemed specific to my setup, but after inspecting the database manually with sqlite3, I discovered a real data type mismatch.
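For anyone following along, inspecting a table's schema from the shell is just the .schema dot-command (a sketch, using the same database file name as below):

sqlite3 ./database.sqlite ".schema workflows_tags"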

CREATE TABLE "workflows_tags" (
	"workflowId"	varchar (36) NOT NULL,
	"tagId"    integer NOT NULL, -- <--- Note: This is an integer
	PRIMARY KEY("workflowId","tagId"),
	CONSTRAINT "FK_workflows_tags_tag_entity" FOREIGN KEY("tagId") REFERENCES "tag_entity"("id") ON DELETE CASCADE ON UPDATE NO ACTION,
	CONSTRAINT "FK_workflows_tags_workflow_entity" FOREIGN KEY("workflowId") REFERENCES "workflow_entity"("id") ON DELETE CASCADE ON UPDATE NO ACTION
);

but the tag_entity table schema:

CREATE TABLE "tag_entity" (
	"id"    varchar (36) NOT NULL, -- <--- But this is a varchar
	"name"	varchar (24) NOT NULL,
	"createdAt"	datetime(3) NOT NULL DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
	"updatedAt"	datetime(3) NOT NULL DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
	PRIMARY KEY("id")
);

So, the database schema was fundamentally incorrect. I’m still not sure how this happened.

My next approach was to let n8n create the schema first. I configured n8n to connect to a fresh PostgreSQL database by setting the DB_TYPE, DB_POSTGRESDB_DATABASE, etc., environment variables. On startup, n8n correctly ran its migrations and created all the necessary tables.
Unfortunately, the sequel -C command always tries to create the schema itself and cannot be used just to import data. So, this was another dead end.

Option 3: Using @hagen's script

I tried using this script, but unfortunately it is probably not future-proof. There are hardcoded table names, and these tend to change over time, so I ran into table and column name mismatches. I played with the shell script, adding/removing table names (e.g. data_store vs. data_table, workflow_dependency), but didn't manage to succeed.

Option 4: Manual Export & Import (The Tedious Last Resort)

This was the last resort: exporting each SQLite table to CSV and then importing them one by one into PostgreSQL.

After stumbling on this piece of code, I tried exporting the tables to CSV:

# list every user table in the SQLite file
TS=$(sqlite3 ./database.sqlite "SELECT tbl_name FROM sqlite_master WHERE type='table' AND tbl_name NOT LIKE 'sqlite_%';")

# exports each table to its own csv file
for T in $TS; do

sqlite3 ./database.sqlite <<!
.headers on
.mode csv
.output $T.csv
select * from $T;
!

done

This left me with 40+ .csv files.

In the meantime, I booted the n8n instance connected to the empty PostgreSQL server (as in Option 2) and let it run its migrations to create the correct schema.

Then came the tedious part. For each table, I used psql’s \copy command:

\copy <table_name> FROM '<table_name.csv>' DELIMITER ',' CSV HEADER

For example:

\copy "user" FROM 'user.csv' DELIMITER ',' CSV HEADER

(Note the double quotes - user is a reserved word in Postgres, so the table name needs quoting here.)

This didn’t always work smoothly. In some tables, the column order in the SQLite export did not match the PostgreSQL schema. This caused type mismatch errors, like trying to import a string into a boolean column.

To fix this, I had to inspect the CSV header (head -1 <file>.csv) and manually specify the column order in the \copy command:

\copy execution_entity(id,"workflowId","finished","mode","retryOf","retrySuccessId","startedAt","stoppedAt","waitTill","status","deletedAt","createdAt") FROM 'execution_entity.csv' DELIMITER ',' CSV HEADER

By repeating this process for every table, I was finally able to import all my data into the new database.
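If most of your tables happen to line up, a rough loop saves some typing (the $DATABASE_URL connection string is a placeholder; tables with a different column order still need the explicit column list shown above, and you still have to run things in an order that satisfies the foreign keys):

# import every exported CSV into the table of the same name
for f in *.csv; do
  t="${f%.csv}"
  psql "$DATABASE_URL" -c "\copy \"$t\" FROM '$f' DELIMITER ',' CSV HEADER"
done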

Sidenotes

Disable instance owner setup

After the import, when you start your n8n instance, you will be prompted to create a new owner. To bypass this and log in with your migrated users, run the following SQL command, as mentioned by @Shadoxity:

UPDATE settings SET value = 'true' WHERE key = 'userManagement.isInstanceOwnerSetUp';

After the next reboot, you should be able to log in with your original user credentials.

Truncate the database after every failed import

Each time an import attempt failed, I had to wipe the target database to start fresh. This ensures you don’t have partial or corrupted data.

\connect <N8N_DB_NAME>;  
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO <N8N_DB_USER>;

Conclusion

I’m aware that this is not the most ideal approach, and I hope for most users, a simple export/import will work. For those unlucky ones, I wanted to show that there is a way to retrieve your data without having to set everything up again from scratch.

I hope someone can benefit from this.

Congrats, you made it to the end! :tada:

Edit: I had to edit this text multiple times because Cloudflare wouldn't let me post it here (it thought the SQL queries were injection attempts), and I had to bisect the text to find which part was causing it. Apparently it was the varchar keyword.