I’m deploying n8n across a Docker Swarm and am mounting /root/.n8n to shared storage. I’m seeing ~1 TB/day in transactions against this mount and am wondering if there’s a better way.
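For context, the relevant part of my stack file looks roughly like this (volume driver and paths simplified, names are just placeholders):

```yaml
version: "3.7"

services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      # entire n8n data dir (SQLite DB, config, encryption key) on shared storage
      - n8n_data:/root/.n8n
    deploy:
      replicas: 1

volumes:
  n8n_data:
    driver: local  # in reality this points at the shared storage backend
```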
What, at a minimum, would n8n need to share for a container on one server to start seamlessly on another? Is there a better way than mounting the SQLite DB, perhaps just sharing the workflows and stored-credentials config across nodes? I’m assuming the bulk of this traffic is workflow execution data and result items. I don’t necessarily need that replicated to every node when the container starts up elsewhere in the swarm, but the workflows themselves are important. (See the sketch below for what I mean by the minimum to share.)
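For example, would something along these lines be the "minimum to share": pointing every node at an external Postgres and passing the encryption key via the environment instead of mounting the whole data dir? The variable names are from my reading of the n8n docs, so treat this as a sketch rather than a working config:

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      # workflow/credential data lives in Postgres instead of a shared SQLite file
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres.internal   # hypothetical host
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=changeme
      # same key on every node so stored credentials stay decryptable
      - N8N_ENCRYPTION_KEY=replace-with-the-key-from-the-existing-config
    deploy:
      replicas: 1
```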
To expand on this question: I have a workflow running on a schedule that has gradually consumed more and more bandwidth over time and is now using this much data. When it started, it used a small fraction of this, so I’m assuming execution history is the culprit. If there’s an easy way to prune the DB or cap the storage these execution records take up, that would also work perfectly.
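If pruning is the answer, is it as simple as setting something like the following on the service? Again, the variable names are what I found in the docs (prune toggle, max age in hours, and whether to save execution data at all), so corrections welcome:

```yaml
services:
  n8n:
    image: n8nio/n8n
    environment:
      # automatically prune old execution data from the DB
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168            # keep executions ~7 days (value is in hours)
      # skip persisting execution data for successful runs, keep it for failures
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
```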