Where Does n8n Store Execution Data by Default, and for How Long?

Hey everyone,

I’m running a self-hosted N8N instance on Google Cloud with minimal specs (1GB RAM, 30GB storage). Two days ago, I checked my execution history and was still able to download a previously fetched document from a failed workflow execution.

My main questions:

Where is execution data stored by default? Is it in the database (non-volatile storage) or just kept in memory?

When is this data discarded? Does N8N automatically clean up execution data at some point, or does it persist indefinitely?

Would appreciate any insights on how this works by default!



## Information on your n8n setup
- **n8n version:** 1.85
- **Database (default: SQLite):** SQLite
- **n8n EXECUTIONS_PROCESS setting (default: own, main):** own
- **Running n8n via (Docker, npm, n8n cloud, desktop app):** gcp
- **Operating system:** Windows 10

Hi, it is stored in the database. There are several execution tables.

The data is kept for a fixed amount of time, but the retention period can be modified through environment variables.

It is also possible to adapt what gets saved (failed, successful, or all executions).

All the details can be found in the docs.
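For reference, here is a rough sketch of the retention-related variables. The names are taken from the n8n environment variables docs (double-check the current docs before relying on them), and the values are only illustrative, e.g. for a Docker-based install:

```bash
# Sketch: prune execution data automatically (illustrative values).
# EXECUTIONS_DATA_MAX_AGE is the maximum age in hours before pruning.
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -e EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000 \
  n8nio/n8n
```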

Reg,
J.


Well, I can set this to 1 hour so that all the files from historic executions are deleted after an hour, but what about deleting them instantly? You see, I am looking for a way to run something like a giant loop: a 50 MB presentation slide times 1000 leads would be 50 GB of workflow storage. Is there any way to circumvent that? @jcuypers
Thanks for your answer by the way! Appreciate it.

- EXECUTIONS_DATA_SAVE_ON_ERROR=all
- EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
- EXECUTIONS_DATA_SAVE_ON_PROGRESS=true
- EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false

You could even set the progress one to false if that would be an issue.

These variables force n8n to store executions selectively, or not at all.

This happens even before the time-based purge (pruning) removes them.
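To make that concrete, a minimal sketch of passing these variables to a Docker-based n8n (same variable names as the list above; keep only failed executions and discard everything else):

```bash
# Sketch: save only failed executions, skip success/progress/manual data.
docker run -d --name n8n \
  -p 5678:5678 \
  -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \
  -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \
  -e EXECUTIONS_DATA_SAVE_ON_PROGRESS=false \
  -e EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false \
  n8nio/n8n
```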

Please accept my answer as the solution if you like it :pray:


Thank you! Will accept it as the solution for sure, just one minor question…
The first two are clear, but what are EXECUTIONS_DATA_SAVE_ON_PROGRESS and EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS referring to exactly? On progress and manual executions - what happens if I set both of them to false?
@jcuypers

Hi, I'm not using it myself, but I have some clue:

EXECUTIONS_DATA_SAVE_ON_PROGRESS: saving data while it is being processed (i.e. actively worked on by a worker). Think of it as intermediate steps; the execution has not yet errored or finished.

EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS: this covers the case where your workflow is not set to active and you push a test execution manually. Do you want the data of that run to be saved or not (so that you can troubleshoot it in the executions tab)?
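If you want both of them off, a minimal sketch (assuming an npm-based install started from a shell; with Docker you would pass the same variables via -e):

```bash
# Sketch: disable saving of progress data and manual test executions.
export EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false
n8n start
```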

reg,
J.


@jcuypers makes perfect sense. Would it in theory be possible to apply those "extreme" settings (or not saving any working data) only to specific workflows?
And you said the workflow data is stored in the database instantly - meaning it's instantly removed from RAM, so from a low-RAM VM instance's perspective this is not a problem?

Something came to mind: I haven't actually gotten around to testing all of this. There might be a difference between binary data and the rest. In the coming days I will let you know.

Yes, you can have individual settings per workflow (again, just as I read it myself :slight_smile:).

Reg,
J.


Well, some more info:

n8n executes binary data pruning as part of execution data pruning. Refer to Execution data | Enable data pruning for details.

Another statement: by default, everything binary is kept in memory unless you set the binary data mode to filesystem (which only works in non-queue mode).
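For completeness, a hedged sketch of what moving binary data to disk could look like. N8N_DEFAULT_BINARY_DATA_MODE is the variable named in the docs; the volume and paths here are just examples:

```bash
# Sketch: keep binary data on the filesystem instead of in memory.
# Not supported in queue mode, as noted above.
docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  n8nio/n8n
```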

So I guess there's no other way than to fully try it out and document it. Maybe someone who has already done this in practice can comment with a definitive answer.


Yes, binary data might behave differently here. Any idea on how to make the change on a workflow level?

@jcuypers
Because from what you wrote and what the docs say (which is not always clear), setting EXECUTIONS_DATA_SAVE_ON_PROGRESS to false is the most important one for me… yet while this hugely benefits the large loops, for some other workflows it would be better to have the data stored for a couple of days.

Well, that would be up to you, I guess. Can you please accept it as the answer?

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.