I’m running a self-hosted n8n instance on Google Cloud with minimal specs (1 GB RAM, 30 GB storage). Two days ago, I checked my execution history and was still able to download a previously fetched document from a failed workflow execution.
My main questions:
- **Where is execution data stored by default?** Is it in the database (non-volatile storage) or just kept in memory?
- **When is this data discarded?** Does n8n automatically clean up execution data at some point, or does it persist indefinitely?
Would appreciate any insights on how this works by default!
## Information on your n8n setup
- **n8n version:** 1.85
- **Database (default: SQLite):** SQLite
- **n8n EXECUTIONS_PROCESS setting (default: own, main):** own
- **Running n8n via (Docker, npm, n8n cloud, desktop app):** GCP (Google Cloud VM)
- **Operating system:** Windows 10
Well, I can set this to 1 hour so that all files from historic executions are deleted after an hour, but what about deleting them instantly? You see, I’m looking to run something like a giant loop: a 50 MB presentation slide times 1,000 leads would be 50 GB of workflow storage. Is there any way to circumvent that? @jcuypers
Thanks for your answer, by the way! Appreciate it.
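For reference, the one-hour retention mentioned above maps to n8n’s execution-pruning environment variables. A minimal sketch, assuming the instance is started from a shell where the variables can be exported (adapt to a Docker `-e` flag or a compose file); the values are illustrative:

```bash
# Turn on the automatic pruning job and keep executions for at most 1 hour.
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=1   # retention window, in hours

n8n start
```

As I understand it, pruning runs as a periodic cleanup job, so old executions are removed on the next cycle rather than the instant they expire.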
Thank you! Will accept it as the solution for sure, just one minor question …
The first two are clear, but what are `EXECUTIONS_DATA_SAVE_ON_PROGRESS=true` and `EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false` referring to, exactly? On-progress and manual executions: what happens if I set both of them to false? @jcuypers
Hi, I’m not using it myself, but I have some clue:

`EXECUTIONS_DATA_SAVE_ON_PROGRESS`: saving data while it is being processed (i.e. actively worked on by a worker), think of it as intermediate steps; the execution has neither errored nor finished yet.

`EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS`: this covers the case where your workflow is not set to active and you trigger a test run manually. Do you want the data of that run to be saved or not, so that you can troubleshoot it in the executions tab?
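To make the combination concrete, here is a sketch of the four save settings side by side (the values are illustrative, not a recommendation; the first two are the ones called “clear” above):

```bash
# Keep failed executions for debugging, discard everything else.
export EXECUTIONS_DATA_SAVE_ON_ERROR=all             # persist executions that errored
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none          # drop successful executions
export EXECUTIONS_DATA_SAVE_ON_PROGRESS=false        # no intermediate snapshots mid-run
export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false  # don't keep manual test runs

n8n start
```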
@jcuypers makes perfect sense. Would it in theory be possible to apply those “extreme” settings (not saving any working data) only to specific workflows?
And you said the workflow data is stored in the database instantly, meaning it’s instantly removed from RAM? So from a low-RAM VM instance’s perspective, this is not a problem?
Something came to mind: I haven’t actually gotten around to testing all of this. There might be a difference between binary data and the rest. I will let you know in the coming days.
Yes, you can have individual settings per workflow (again, as I read it myself).
Another statement: by default, everything binary is kept in memory unless you set it to filesystem mode (which only works in non-queue mode).
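If I read the docs correctly, that switch is the binary data mode. A sketch, assuming a recent self-hosted version (verify the variable names against the docs for your release; the storage path shown is an assumption, not a required value):

```bash
# Write binary payloads to disk instead of holding them in memory.
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem
# Assumption: optional override for where the binary files land on disk;
# if unset, n8n uses a folder inside its own user directory.
export N8N_BINARY_DATA_STORAGE_PATH=/home/node/.n8n/binaryData

n8n start
```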
So I guess there’s no other way than to fully try it and document it. Maybe someone who has already done this in practice can comment with a definitive answer.
@jcuypers
Because from what you wrote and what the docs say (which is not always clear), `EXECUTIONS_DATA_SAVE_ON_PROGRESS` set to false is the most important one for me… yet while this hugely benefits the large loops, for some other workflows it would be better to have the data stored for a couple of days.