Improve operations / deployment story

Currently there are some very rough edges around how n8n works with multiple stakeholders:

  • the import/export commands don’t go through an API; instead you are required to set up a local instance, configured identically and with access to the production database, in order to import new workflows
  • the export command puts all the code/contents of the nodes into one long JSON string, so for larger bits of JS/logic it’s a fool’s errand to try to upgrade/change the node specification
  • when using Docker Desktop and local volume mounts, the SQLite database becomes corrupt when changing branches
    • also, it “loses the file descriptor” such that updates inside the container aren’t reflected outside the container
    • this wouldn’t be a big problem if I could just export the contents from an API, but you require me to use the actual database file in this case (see points above)
  • there’s no leader election in n8n, so the state of which workflows are enabled gets corrupted when overlapping n8n instances point at the same database (because, for some reason, the “active” flag is changed when n8n is stopped)
  • n8n is configured to use a $HOME folder for config, with no method for overriding this
  • n8n always takes the encryption password from this home folder, instead of just being stateless and reading an env var (12-factor app style)

If you want some guidance on how to make n8n nice to deploy, look no further than Hasura.

Thanks a lot for taking the time. Here are my comments:

  • the import/export commands don’t go through an API; instead you are required to set up a local instance, configured identically and with access to the production database, in order to import new workflows
    → It does not have to be configured 100% identically. It “just” has to have the same database configuration (so that n8n knows where to read/write data) and the same encryption password (so that it can decrypt the credentials if required). You can also use the internal API to back up your workflows; there are multiple example workflows on n8n.io that do exactly that (search for “backup”), and a minimal sketch follows after this list. All that said, I agree 100% that at some point we have to improve this and make it easier. Sadly it is all about prioritization; I hope we will get to it soon.
  • the export command puts all the code/contents of the nodes into one long JSON string, so for larger bits of JS/logic it’s a fool’s errand to try to upgrade/change the node specification
    → Not sure I understand. You mean it is hard to change the code of a Function node after it has been exported? That would be true, and it is honestly nothing we ever tried to optimize for, as n8n was designed for such changes to be made via the UI.
  • when using Docker Desktop and local volume mounts, the SQLite database becomes corrupt when changing branches
    → I actually do that probably 10 times a day and it has never happened to me, so maybe I am misunderstanding. In the whole life of n8n that should have happened just once: now, with the credential ID changes, because that branch runs a DB migration that is not backward compatible. For that case we also added a special command to our CLI to make it simple to roll back if required.
    • also, it “loses the file descriptor” such that updates inside the container aren’t reflected outside the container
      → You mean that Docker has its own file system? I would see that as a feature rather than a bug. If you do not like it, you can install n8n via npm at any time, or mount whatever folder you want into your Docker container.
    • this wouldn’t be a big problem if I could just export the contents from an API, but you require me to use the actual database file in this case (see points above)
      → You can use whatever database you want, for example Postgres. Then you can also use any standard tool to back up your data. Also check the comment above about example workflows for backing up n8n with n8n.
  • there’s no leader election in n8n, so the state of which workflows are enabled gets corrupted when overlapping n8n instances point at the same database (because, for some reason, the “active” flag is changed when n8n is stopped)
    → Yes, that is true. That is currently not possible, and it is also not how n8n gets scaled; there is special documentation about how to do that here. To change the behavior you described regarding deregistering webhooks, you can set the environment variable N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true.
  • n8n is configured to use a $HOME folder for config, with no method for overriding this
    → That is not correct. You can override it at any time by setting N8N_USER_FOLDER.
  • n8n always takes the encryption password from this home folder, instead of just being stateless and reading an env var (12-factor app style)
    → That is also not correct. You can override it at any time by setting N8N_ENCRYPTION_KEY.
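
To make the backup suggestion above concrete, here is a minimal sketch in TypeScript (Node 18+ for the global fetch). It assumes the instance exposes its internal REST API at /rest/workflows behind basic auth; the endpoint path, the { data: [...] } response shape, and the N8N_URL/N8N_USER/N8N_PASS variables are all assumptions to verify against your n8n version:

```typescript
// Backup sketch: fetch all workflows from a running n8n instance and write
// one JSON file each, so they can be committed to source control.
// ASSUMPTIONS: internal REST API at /rest/workflows, basic auth enabled,
// response shaped as { data: [...] } -- verify against your n8n version.
import { mkdir, writeFile } from "node:fs/promises";

const BASE = process.env.N8N_URL ?? "http://localhost:5678";
const AUTH =
  "Basic " +
  Buffer.from(`${process.env.N8N_USER}:${process.env.N8N_PASS}`).toString("base64");

async function backupWorkflows(): Promise<void> {
  const res = await fetch(`${BASE}/rest/workflows`, {
    headers: { Authorization: AUTH },
  });
  if (!res.ok) throw new Error(`GET /rest/workflows failed: ${res.status}`);

  const { data } = (await res.json()) as {
    data: Array<{ id: string; name: string }>;
  };

  await mkdir("backup", { recursive: true });
  for (const wf of data) {
    // One file per workflow keeps diffs reviewable in source control.
    const file = `backup/${wf.id}-${wf.name.replace(/\W+/g, "_")}.json`;
    await writeFile(file, JSON.stringify(wf, null, 2) + "\n");
  }
}

backupWorkflows().catch((err) => {
  console.error(err);
  process.exit(1);
});
```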

If you want some guidance on how to make n8n nice to deploy, look no further than Hasura.
→ Thanks, we will check it out.

All the above-mentioned environment variables are documented here.
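
To tie those variables together, here is a minimal 12-factor-style launcher sketch. It assumes the n8n binary is on PATH; the folder path is a placeholder, and the three N8N_* variables are the ones named above:

```typescript
// Hypothetical launcher: start n8n with all state-affecting config taken
// from the environment instead of the $HOME folder. The folder path is a
// placeholder; adjust it to your deployment.
import { spawn } from "node:child_process";

// Fail fast, 12-factor style: the encryption key must come from the
// environment, never from a file baked into the container.
if (!process.env.N8N_ENCRYPTION_KEY) {
  throw new Error("N8N_ENCRYPTION_KEY must be set in the environment");
}

const child = spawn("n8n", ["start"], {
  stdio: "inherit",
  env: {
    ...process.env,
    N8N_USER_FOLDER: "/var/lib/n8n", // overrides the $HOME default
    N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN: "true", // keep “active” flags intact on shutdown
  },
});

child.on("exit", (code) => process.exit(code ?? 1));
```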

Generally I agree 100% on this topic: there is a lot we can improve, and we want to improve it.


Hey Jan,

import/export db

It’s not “just” the same database. To start with, the account credentials for the production database should never touch a developer laptop, but even allowing for that, the developer in question would still have to run two terminals: one with an SSH bastion host doing port forwarding, and another to push the workflows. It’s just not a good development story (or security story).

exported code readability

No, not edit; I’m saying it’s hard to code-review the code of a Function node after it’s been exported. I think you should optimise for DX. Right now it’s 4,000 characters of horizontal scrolling with inline newlines sprinkled throughout.
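
A workaround that makes such reviews bearable is to split the embedded code out before reviewing. A minimal sketch, assuming Function nodes keep their code under parameters.functionCode in the exported JSON (verify the field name against your own export):

```typescript
// Review helper sketch: pull the JavaScript out of every Function node in
// an exported workflow and write it to plain .js files that diff normally.
// ASSUMPTION: Function nodes store their code under parameters.functionCode.
import { mkdir, readFile, writeFile } from "node:fs/promises";

interface ExportedNode {
  name: string;
  type: string;
  parameters?: { functionCode?: string };
}

async function extractFunctionCode(workflowFile: string): Promise<void> {
  const wf = JSON.parse(await readFile(workflowFile, "utf8")) as {
    nodes: ExportedNode[];
  };
  await mkdir("review", { recursive: true });

  for (const node of wf.nodes) {
    const code = node.parameters?.functionCode;
    if (!code) continue;
    // Once JSON.parse has run, the escaped "\n"s are real newlines again,
    // so the written file is normally formatted and reviewable.
    await writeFile(`review/${node.name.replace(/\W+/g, "_")}.js`, code);
  }
}

extractFunctionCode(process.argv[2] ?? "workflow.json").catch((err) => {
  console.error(err);
  process.exit(1);
});
```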

corrupt SQLite

Just 2 minutes ago, as I was writing this, it happened again:

“n8n crashed locally, and all my changes disappeared”


But this happens on a daily basis, every time you change branches: the .sqlite file is source-controlled (because of how bad the import/export API is) to allow for a “docker-compose up is all you have to do” style workflow.

You mean that Docker has its own file system?

No, I mean that Docker allows volume mounts that let us synchronise the SQLite file back to source control (see above).

Then you can also use any standard tool to back up your data.

This is not completely true: the exact workflow definition you get out differs depending on whether you export a workflow JSON file or restore it from a database backup, and nodes move about when you do repeated export/import. This means the logic for merging between branches and developers becomes even more complicated. Again, this is due to the bad API for exporting/importing.
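
Until the export is deterministic, one mitigation is to normalize exported files before committing them. A minimal sketch, assuming the node position field only encodes canvas layout and that connections are keyed by node name, so reordering the nodes array is safe (both assumptions to verify against your n8n version; there may be other volatile fields):

```typescript
// Normalization sketch: make repeated exports of the same workflow diff
// cleanly by sorting nodes by name and dropping layout-only fields.
// ASSUMPTIONS: "position" only encodes canvas layout, and connections are
// keyed by node name, so reordering the nodes array does not change behavior.
import { readFile, writeFile } from "node:fs/promises";

interface Node {
  name: string;
  position?: [number, number];
}

async function normalize(file: string): Promise<void> {
  const wf = JSON.parse(await readFile(file, "utf8")) as { nodes?: Node[] };

  if (Array.isArray(wf.nodes)) {
    for (const node of wf.nodes) delete node.position; // canvas layout only
    wf.nodes.sort((a, b) => a.name.localeCompare(b.name));
  }

  await writeFile(file, JSON.stringify(wf, null, 2) + "\n");
}

normalize(process.argv[2] ?? "workflow.json").catch((err) => {
  console.error(err);
  process.exit(1);
});
```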

You can override it at any time by setting N8N_USER_FOLDER.

No, this didn’t work when I tried it; it took longer than I would have liked to realise that this env var didn’t work.

Encryption key

OK, great, I didn’t realise that. That’s at least one takeaway from this thread for me 🙂

Are any of these problems planned to be fixed at some point? 🙂

Especially nice would be the ability to keep TypeScript/JavaScript files outside the super-long JSON lines that the export produces, and a proper “upgrade workflow” API, i.e. “import and overwrite” through an API (integration via the DB is a bad idea).
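
For illustration, the wished-for “import and overwrite” deploy step could be as small as this. Everything here (the endpoint, the PATCH/upsert semantics, the absence of auth) is an assumption about an API that does not exist in this form yet, not the current n8n interface:

```typescript
// Hypothetical "import and overwrite" deploy sketch: push a workflow JSON
// file to the server, replacing any existing workflow with the same id.
// The endpoint, method, and missing auth are assumptions about a wished-for
// API, not the current n8n interface.
import { readFile } from "node:fs/promises";

const BASE = process.env.N8N_URL ?? "http://localhost:5678";

async function deployWorkflow(file: string): Promise<void> {
  const wf = JSON.parse(await readFile(file, "utf8")) as { id: string };

  const res = await fetch(`${BASE}/rest/workflows/${wf.id}`, {
    method: "PATCH", // upsert/overwrite semantics assumed
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(wf),
  });
  if (!res.ok) throw new Error(`Deploy of ${file} failed: ${res.status}`);
}

deployWorkflow(process.argv[2] ?? "workflow.json").catch((err) => {
  console.error(err);
  process.exit(1);
});
```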