Since I built our cluster on MySQL… and PostgreSQL is recommended, is there any chance you have a database export/import method, or maybe a format that would facilitate switching to PostgreSQL? Maybe just an ANSI-compatible SQL dump would work? I never thought of this before. The answer might be out there already, but I figured I’d ask.
Something is not working as expected. I wanted to migrate from SQLite to PostgreSQL as well, so I jumped right in.
I’m running n8n in a docker container via docker-compose. So on my host I tried: docker exec -it n8n n8n export:workflow --backup --output=backup/
That results in:
Error exporting workflows. See log messages for details.
No workflows found with specified filters.
Sadly I’m unable to find the logs, at least I can’t find anything in the container logs.
I’ve also tried logging in to the container via sh and running the command directly there, but that doesn’t change anything:
/data # n8n export:workflow --backup --output=backup/
Error exporting workflows. See log messages for details.
No workflows found with specified filters.
Exporting the credentials also doesn’t work for me:
/data # n8n export:credentials --backup --output=backup/
Error exporting credentials. See log messages for details.
No credentials found with specified filters.
> docker exec -it -u node n8n n8n export:workflow --backup --output=backup/
Error exporting workflows. See log messages for details.
EACCES: permission denied, open 'backup/3.json'
I’ve tried creating the backup folder in /data myself, and I’ve tried without it (deleting the one I created before). The error message doesn’t change at all.
The node user has far fewer rights (which is why we use it in the container) and so cannot simply write anywhere it wants, the way root can. You have to make sure to choose an output location not only based on where that user is allowed to write, but also based on where you can access the data again on your host, because the container and the host are two totally different filesystems.
The simplest location would be the .n8n folder of the user, as that is mounted by default, and if n8n were unable to write to it, n8n would not function at all.
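If you want to verify that the node user can actually write there before exporting, a quick check from the host works, for example (the /home/node/.n8n path is the default data directory of the official image, so adjust it if your setup differs):
# quick sanity check: create and remove a throwaway file as the node user
# (the /home/node/.n8n path is the image default; an assumption, not taken from this thread)
docker exec -it -u node n8n sh -c 'touch /home/node/.n8n/.write-test && rm /home/node/.n8n/.write-test && echo writable'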
So if you use the following instead, it should work:
> docker exec -it -u node n8n n8n export:workflow --backup --output=~/.n8n/backup/
Aborting execution as a filesystem error has been encountered while creating the output directory. See log messages for details.
FILESYSTEM ERROR
====================================
EACCES: permission denied, mkdir '~/.n8n/backup/'
Error: EACCES: permission denied, mkdir '~/.n8n/backup/'
at Object.mkdirSync (node:fs:1382:3)
at ExportWorkflowsCommand.run (/usr/local/lib/node_modules/n8n/dist/commands/export/workflow.js:67:34)
at ExportWorkflowsCommand._run (/usr/local/lib/node_modules/n8n/node_modules/@oclif/command/lib/command.js:43:31)
So I had a look at my docker-compose.yml and found the following volume mappings:
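For reference, a typical n8n service maps its data directory roughly like this (an illustrative example, not my exact file; the host path and image tag are placeholders):
  n8n:
    image: n8nio/n8n
    volumes:
      # illustrative mapping only: host folder on the left, container data directory on the right
      - ~/.n8n:/home/node/.n8n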
Hi all, I am really sorry for ruining everyone’s day, but you might want to double-check that the import is actually successful.
Importing workflows & credentials via the CLI will quietly fail for some of them when using Postgres, as reported over here:
This came up in the past too, and it was decided not to fix it back then:
You might want to follow the original thread for further updates on this. Years ago I manually imported my workflows into Postgres, which worked fine, so I guess the old approach is still worth a try: How to migrate from SQLite to PostgreSQL? - #5 by MutedJam (obviously, there are way more tables these days; also make sure to copy the config file containing the encryption key so the credentials can still be used).
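If it helps with the manual route, something along these lines should get the config file (which holds the encryption key) out of the container before switching databases; the container name and the /home/node/.n8n path are assumptions based on the standard Docker setup:
# copy the n8n config file (contains the encryption key) to the host
# container name and path are assumptions; adjust them to your setup
docker cp n8n:/home/node/.n8n/config ./n8n-config-backup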
@MutedJam You didn’t ruin my day. Your input is very much appreciated.
When importing the workflows the first time, I noticed that the CLI told me that 13 workflows had been imported, so I was happy. Later I opened my workflows in the web UI and got quite a scare, because I could only see 8! And those were my workflows from the n8n courses, not the production ones I created afterwards.
So I shut down the Docker container, modified my environment to switch back to SQLite, and restarted it. After deleting the first backup I created another one, stopped the container, re-added PostgreSQL, and restarted it. After the import I checked the web UI right away and could see all 13 workflows. The only thing I noticed is that the IDs seem to have changed, but that doesn’t matter to me. It also looks like the tags went wrong. I don’t know whether the tags are tied to the IDs or to something else, because not only did my production workflows (all tagged) have the correct tags, but the course workflows also had my production tags (which they don’t have in SQLite). But removing a few tags manually didn’t hurt that much, so I’m happy.
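For anyone attempting the same thing, this is roughly the sequence described above written out as commands (the container name, paths, and environment variables are examples and will differ per setup):
# 1. While still running on SQLite, export workflows and credentials into the mounted .n8n folder
#    (paths are assumptions based on the default data directory):
docker exec -it -u node n8n n8n export:workflow --backup --output=/home/node/.n8n/backup/workflows/
docker exec -it -u node n8n n8n export:credentials --backup --output=/home/node/.n8n/backup/credentials/
# 2. Stop the stack, switch the database environment variables to PostgreSQL
#    (e.g. DB_TYPE=postgresdb, DB_POSTGRESDB_HOST, DB_POSTGRESDB_DATABASE, ...), then start it again:
docker-compose down
docker-compose up -d
# 3. Import from the same folder and check the workflow count in the web UI afterwards:
docker exec -it -u node n8n n8n import:workflow --separate --input=/home/node/.n8n/backup/workflows/
docker exec -it -u node n8n n8n import:credentials --separate --input=/home/node/.n8n/backup/credentials/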
But I have to admit that the database migration process is not smooth at all, especially when running in a Docker container.
I am so sorry for this. Tbh, for a total of 13 workflows, manually copying them might still be the most feasible option until this is fixed. It would definitely save you from having to work with different SQL dialects and tools at the same time.
No need to feel sorry about it. This is an open-source product. For most users, if not all, you’re working for free. You all do a tremendous job and deliver an awesome product. I can’t say that often enough.
I’m not even sure that I didn’t play some part in my own failure.
But I would certainly assume that backing up data and restoring it (even across different database systems) would preserve the existing IDs. For me personally it simply doesn’t matter that they don’t match. But in the thread you posted I read that some people use this feature to keep two instances of n8n in sync (dev and prod, for example), so for them it’s definitely an issue.