How to migrate from SQLite to PostgreSQL?

Good morning everyone,

After getting my first custom node up and running, n8n (which I originally set up just to play around a bit) has been an unexpected success in my team and has already replaced many cron jobs. However, with that success came the problems described in conversations like this one.

So I want to migrate to a PostgreSQL database as recommended in that thread. Now I wonder if there is a clean migration path from SQLite to PostgreSQL. Right now, I am running both n8n and PostgreSQL in their own docker containers (in which our users can store/retrieve data from their workflows should they wish to) and will probably just add a second PostgreSQL service to the docker-compose configuration.
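For illustration, a minimal sketch of what that additional docker-compose service might look like (the service name, credentials, and volume name here are placeholders, not taken from my actual setup):

```yaml
services:
  n8n-db:
    image: postgres:13
    restart: always
    environment:
      POSTGRES_USER: n8nuser
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: n8ndatabase
    volumes:
      - n8n_db_data:/var/lib/postgresql/data

volumes:
  n8n_db_data:
```

The n8n container would then be pointed at this service through the DB_TYPE=postgresdb and DB_POSTGRESDB_* environment variables; check the n8n docs for the exact variable names supported by your version.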

Now my question is, is a migration possible without asking everyone to download their workflow and import it again once the new PostgreSQL database is up and running (and connected to n8n)? Which tables would I need to export from the SQLite database and import into PostgreSQL to make this happen (and do I need to convert data along the way)? I don’t care so much about the past execution data, getting workflows over alone is fine.



Really great to hear that you and your team enjoy n8n!

Sadly, we currently have nothing in place for an easy migration. However, I just had a short look online and found this:

It looks like it should be possible to migrate the data without too much work.


Thanks so much Jan! I ran into a couple of conflicts when trying this on Friday (both when reading the SQLite dump file through psql and when using pgloader), but maybe my migration path wasn’t the ideal one. In order to keep service interruption to a minimum, I had set up a separate instance of n8n connected to my PostgreSQL database (using a copy of the config file with the encryption key) and this appears to have already written to the database. My thought is to simply replace the hostnames in the nginx reverse proxy config once the new (PostgreSQL-) n8n instance is filled with all workflows and credentials and then shut down the old (SQLite-) n8n instance.

I’ll give copying the relevant tables separately a try on Monday. In case this works and everything is usable I’ll make sure to share a brief description of what exactly I did in case anyone encounters a similar situation.


Thanks a lot! That is very appreciated!

If we find some time in the future, we really have to add some options to the CLI that export and import all data, to make a database migration easier.


So turns out it wasn’t too hard in the end. Here’s what I did.

Scenario: You have two instances of n8n running, an old one using SQLite with a couple of active workflows and a new one using PostgreSQL without any data yet. You want to move over existing workflows and credentials without having your users re-create them or copy them over manually.

  1. Copy the config file with the encryptionKey from the .n8n directory of the old instance to the new one
  2. In the .n8n directory of the old instance, open database.sqlite with sqlite3: sqlite3 database.sqlite
  3. Export the tables credentials_entity and workflow_entity to CSV by running the following commands (Note that there is also webhook_entity which sounds relevant but was empty in my case. You might want to include this as well should you have data in there):
.headers on
.mode csv
.output credentials_entity.csv
SELECT * FROM credentials_entity;
.output workflow_entity.csv
SELECT * FROM workflow_entity;
  4. Now connect to the PostgreSQL database connected to your new n8n instance through psql: psql -h localhost -p 5432 -U n8nuser n8ndatabase (replace localhost with your actual hostname, 5432 with the database port, n8nuser with the database user, and n8ndatabase with the database name)
  5. Import the CSV files created in the previous step like so, then reset the respective sequences for these tables so new credentials and workflows get proper IDs (again, I have skipped webhook_entity here):
\copy credentials_entity from '/path/to/old/n8n/data/credentials_entity.csv' delimiter ',' csv header;
\copy workflow_entity from '/path/to/old/n8n/data/workflow_entity.csv' delimiter ',' csv header;
SELECT setval(pg_get_serial_sequence('credentials_entity', 'id'), MAX(id)) FROM credentials_entity;
SELECT setval(pg_get_serial_sequence('workflow_entity', 'id'), MAX(id)) FROM workflow_entity;
  6. Restart the new n8n instance to load the newly added workflows and credentials (I am not sure this step is strictly necessary, but it is what I did)
  7. In the last step, redirect requests going to the old instance to the new one. In my case this was pretty straightforward, since I had set up nginx as a reverse proxy: I just needed to replace the port the old n8n instance was listening on with the port the new n8n instance was listening on in the respective server block (e.g. in my case replacing proxy_pass http://localhost:5678/; with proxy_pass http://localhost:6789/; inside the n8n file in /etc/nginx/sites-available). The exact steps here will depend on your setup, so this will vary. If you are using Docker, you could also just stop the old n8n instance and configure the new one to listen on the old port going forward.
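One caveat about the setval() calls above: if a table happens to be empty, MAX(id) returns NULL and setval() errors out. A slightly more defensive variant of the same idea, using PostgreSQL's three-argument setval(), would be:

```sql
-- Set the sequence so the next generated ID is MAX(id) + 1,
-- falling back to 1 when the table is empty.
SELECT setval(pg_get_serial_sequence('credentials_entity', 'id'),
              COALESCE(MAX(id), 0) + 1, false) FROM credentials_entity;
SELECT setval(pg_get_serial_sequence('workflow_entity', 'id'),
              COALESCE(MAX(id), 0) + 1, false) FROM workflow_entity;
```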

That’s it. After running through the above steps, n8n is now storing data in PostgreSQL instead of SQLite.
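For anyone who wants to script the SQLite export rather than typing the dot-commands interactively, here is a minimal sketch using Python's built-in sqlite3 and csv modules. It mirrors the .headers on / .mode csv export from step 3; the table list matches the steps above, and the database path is just an example:

```python
import csv
import sqlite3

# Tables to move over; add webhook_entity here if it holds data in your setup.
TABLES = ["credentials_entity", "workflow_entity"]

def export_tables(db_path, tables=TABLES):
    """Dump each table to <table>.csv with a header row, mirroring
    sqlite3's `.headers on` / `.mode csv` output."""
    conn = sqlite3.connect(db_path)
    try:
        for table in tables:
            cur = conn.execute(f"SELECT * FROM {table}")
            with open(f"{table}.csv", "w", newline="") as f:
                writer = csv.writer(f)
                # First the column names, then all rows.
                writer.writerow([col[0] for col in cur.description])
                writer.writerows(cur)
    finally:
        conn.close()
```

The resulting CSV files can then be fed to the \copy commands from step 5 unchanged.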


Really great @MutedJam! Thanks a lot for the write-up. I am sure it will be very helpful for many users in the future!


Is this still correct? When I export these files, they are all empty.

You’d probably want to use the CLI these days: CLI commands - n8n Documentation

These commands are database-agnostic, so should make things a lot simpler.
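For reference, at the time of writing the export/import commands look roughly like this (flags may differ between n8n versions, so double-check the linked documentation):

```shell
# On the old (SQLite) instance: export workflows and credentials to JSON
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --output=credentials.json

# On the new (PostgreSQL) instance: import them again
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json
```

As long as both instances share the same encryption key, credentials exported this way should be usable on the new instance.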

1 Like