Docker Swarm Scaling

Hello,

I’m deploying n8n across a Docker Swarm and am mounting /root/.n8n to shared storage. I see ~1 TB/day in transactions for this and am wondering if there’s a better way.

What, at a minimum, would n8n need to share for a container on one server to start seamlessly on another server? Is there a better way than mounting the SQLite DB? Perhaps just share the workflows and stored secrets config across? I’m assuming the object transactions generating this traffic are workflow executions and result items. I don’t necessarily need those on every node when the container starts up elsewhere in the swarm, but the workflows themselves are important.

To expand on this question: I have a workflow running on a schedule that has gradually consumed more and more bandwidth over time and is now using this much data. When it started it used a small fraction of this, so I’m assuming it has something to do with execution history. If there’s an easy way to prune the DB or limit the storage size of these objects, that would also work perfectly.

There is a lot in here, so let’s start with the 1 TB/day.

Information: all lines starting with export are environment variables to be set.

With that amount of data there is probably no need to actually save all of it, so you can configure n8n to save only failed executions by default and skip the successful ones, as documented here.

Example:

export EXECUTIONS_DATA_SAVE_ON_ERROR=all      # keep execution data when a run fails
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none   # discard execution data from successful runs

Additionally, data should be pruned automatically, as documented here.

Example:

export EXECUTIONS_DATA_PRUNE=true          # enable automatic pruning of old execution data
export EXECUTIONS_DATA_MAX_AGE=672         # maximum age in hours (672 h = 28 days)
export EXECUTIONS_DATA_PRUNE_TIMEOUT=7200  # prune timeout in seconds (7200 s = 2 hours)

Regarding different databases: there is also a whole page about which ones are supported and what has to be set, here.
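For example, pointing all swarm nodes at a shared PostgreSQL instance would look roughly like this (a sketch; host, database name, and credentials are placeholders):

export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres.internal   # placeholder hostname
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=xxxxxxx

With the execution data in a central database, the nodes would no longer need the shared mount for the SQLite file.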

So that there is no need for the .n8n/config file, you can set the encryption key via an environment variable. But before setting it, make sure you get the current value from the file. If you set it wrong, the credentials can not be decrypted and the workflows can not run anymore.
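For example, to read the current key out of the existing file (a sketch, assuming the mount path from the question):

cat /root/.n8n/config   # prints a JSON object containing the "encryptionKey" value

Then set it on every node: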

export N8N_ENCRYPTION_KEY=xxxxxxx

If you start with a new database you can set the encryption key to a random new value.
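One way to generate such a value (a sketch; any sufficiently random string works):

export N8N_ENCRYPTION_KEY=$(openssl rand -hex 16)   # 32 random hex characters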

There is also documentation about exporting & importing credentials & workflows here.
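For example, with the n8n CLI (a sketch; the file names are placeholders):

n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --output=credentials.json
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json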

Hope that is helpful.

This is perfect, although it appears I completely missed that in the docs. Thank you for outlining this and pointing me to the resource.

Glad to hear that it was helpful. Have fun!