Does an HTTP GET of a binary file occupy disk space?

Hi. Simple question: I have a scenario that makes an HTTP request to get a binary file (image or PDF):

Once I have the binary, in the following step I upload it to an S3 bucket or an FTP server:


I guess that the binary fetched by the HTTP GET occupies MB/GB on disk?

I have seen an increase in disk usage over the last week, and these are the only two new scenarios:

I have made a snapshot just to confirm; normally, my snapshots are 5-6GB. This one was 12GB…

How can I clear binary files from my n8n? I really don’t need to store any binaries because I’m already storing them in S3/FTP.


Upp! Does anyone know how to remove the binary files saved to the n8n disk? Is there a docker command to do that? Can it be scheduled automatically?

I’m already at 50% disk capacity in just a week.


Hey @yukyo,

If you check the binaryData folder in your container, is there anything in it? Part of it could be workflow execution logging as well. Have you set up any database pruning options?

And please don’t simply delete the issue template; fill it in instead. It is there for a reason, especially the “Information on your n8n setup” section. That makes it easier to answer your question without having to ask more questions, and so to give you a meaningful and helpful answer.

Generally, you can reduce the data by configuring n8n so that it does not save successful executions. By default, executions and binary data are saved in the database (which could be SQLite, filling up your local disk, or an external database, filling up that disk) unless you have configured n8n to save binary data to the filesystem instead via N8N_DEFAULT_BINARY_DATA_MODE=filesystem.
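For reference, those settings can be sketched as environment variables (a sketch; names per the n8n docs — set them on the container however you launch it, e.g. docker `-e` flags or docker-compose):

```shell
# Sketch of n8n environment variables (names per the n8n docs; verify
# against your n8n version before relying on them):
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none     # don't persist successful executions
EXECUTIONS_DATA_SAVE_ON_ERROR=all        # keep failed ones for debugging
N8N_DEFAULT_BINARY_DATA_MODE=filesystem  # write binaries to disk, not the DB
```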

If you want to save successful executions but only for a limited time and then prune them automatically, you can find a guide here:

Hi @Jon. Where can I find the binaryData folder? I’m using a DigitalOcean droplet hosting n8n in Docker on a Debian machine.


I’m already pruning automatically every 7 days:
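(For anyone finding this later: a 7-day automatic prune like this is typically configured with environment variables along these lines — a sketch; per the n8n docs, EXECUTIONS_DATA_MAX_AGE is given in hours.)

```shell
# Sketch: automatic execution pruning (names per the n8n docs).
EXECUTIONS_DATA_PRUNE=true   # enable automatic pruning of old executions
EXECUTIONS_DATA_MAX_AGE=168  # max age in hours; 168 h = 7 days
```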

Will binary files also get deleted when the executions are removed by the pruning task?


Hey @yukyo,

The folder would be next to the database file in the volume you created, but I don’t think that option is enabled by default, so the binaries would stay in memory; that is my understanding anyway.

Have you done a search on the OS to find the largest folders? That could show you where to look.
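A minimal sketch of such a search, assuming a GNU userland on the droplet (DATA_DIR is a placeholder; point it at wherever your docker volumes live, e.g. /var/lib/docker/volumes, possibly with sudo):

```shell
# List the 20 largest directories under DATA_DIR (placeholder path;
# defaults to the current directory if unset).
DATA_DIR="${DATA_DIR:-.}"
du -h "$DATA_DIR" | sort -rh | head -n 20
```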

If filesystem mode is not enabled, the binary data is saved together with the execution in the DB, so it will be deleted at the same time.

If filesystem mode is enabled, the same should happen. Sadly there was a bug, which got fixed in the latest version (0.205.0). Here is the GitHub PR.


I still can’t figure out why I can’t delete the binary files… I did a manual pruning and journal vacuum:

Also, I have lowered the execution retention from 7 to 3 days, and my execution count went from 60k to 26k:


But I still have over 50% of the disk used; I went from 15% to 50% in one week.

Snapshot images went from 6GB to 20GB:


Help please!

There are two folders: “meta” and “persistMeta”


But checking with sudo du -sh, it says the folder size is 12 KB, so it’s not that folder…

These are the top 20 biggest folders:

Any thoughts?

Hey @yukyo,

That screenshot tells us what is going on: the space is likely being taken up by execution data stored in the database. If you check your workflow settings, you can set them to not log every run and only log when executions have failed. If you do need to log everything, then the server would need to be specced to deal with that.

It would also be worth setting the vacuum option, which may help to free up some of the space once the execution data is deleted.
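If you are on the default SQLite database, that can be sketched as the following setting (an assumption on my part that the variable name matches your n8n version; check the docs before relying on it):

```shell
# Sketch: reclaim freed pages in the SQLite DB on startup.
# DB_SQLITE_VACUUM_ON_STARTUP is per my reading of the n8n docs; verify it.
DB_SQLITE_VACUUM_ON_STARTUP=true
```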

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.