Query regarding binaryData folder disk usage

Hi all.
While trying to debug heavy disk usage on my Docker-based n8n instance, one of the things I came across is a large .n8n/binaryData/ folder. It is currently using 8.6G, which seems unexpected for my usage (though I may be wrong).

  • I couldn’t find any way to diagnose the actual usage of these files, e.g. where they were generated or whether they are still being used.
  • I also didn’t see any option to prune unused files in this folder.

Kindly provide some insight into this.

Thanks.

Information on your n8n setup

  • n8n version: 0.176.0
  • Database you’re using (default: SQLite): Postgres
  • Running n8n with the execution process [own(default), main]: Main
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

That folder contains all the binary data if you have the environment variable N8N_DEFAULT_BINARY_DATA_MODE set to filesystem.

There should normally be no need to prune them, as they should be deleted automatically together with the execution they belong to. The execution ID should be part of the filename, which allows you to check whether the execution still exists.
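A minimal sketch of how such a check could look, run on the host or inside the container. It assumes the execution ID is the leading part of each filename and that the data lives under ~/.n8n/binaryData; both are assumptions, so adjust them to your setup:

```ts
// check-binary-data.ts
// Group files in the binaryData folder by the execution ID assumed to be the
// leading part of each filename, and report the total size per execution.
import { readdirSync, statSync } from "fs";
import { join } from "path";
import { homedir } from "os";

// Adjust if your data directory is mounted elsewhere (e.g. inside the Docker volume).
const binaryDataDir = join(homedir(), ".n8n", "binaryData");

const sizeByExecution = new Map<string, number>();

for (const name of readdirSync(binaryDataDir)) {
  const filePath = join(binaryDataDir, name);
  const stats = statSync(filePath);
  if (!stats.isFile()) continue;

  // Assumption: the execution ID is the part of the filename before the first separator.
  const executionId = name.split(/[_\-.]/)[0];
  sizeByExecution.set(executionId, (sizeByExecution.get(executionId) ?? 0) + stats.size);
}

// Print the largest totals first.
const sorted = [...sizeByExecution.entries()].sort((a, b) => b[1] - a[1]);
for (const [executionId, bytes] of sorted) {
  console.log(`${executionId}\t${(bytes / 1024 / 1024).toFixed(1)} MB`);
}
```

The largest totals can then be compared with the executions that still exist in the n8n UI or database.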


Thanks, I will check it out.
Could you also tell me how to ‘list’ and ‘get’ those files from within a workflow?

That will happen automatically. If you have a node that contains binary data (like an image), open the node and change the view to “Binary”; the data will then be displayed (this loads one of the files from that folder).


Hi @jan.

  1. I wanted to dig into this a bit more, so I went through the files directly on the server.

I noticed that the binaryData folder contains files for executions which have long since been pruned.
The files go back as far as 7th March (probably the day I switched binary data mode to filesystem), even though execution data pruning is set to 14 days.
The executions themselves are regularly being pruned, but they are apparently leaving behind the corresponding binaryData files.

  2. I also wanted to explore the option of deleting the files from within a workflow.
    Most of these files come from a use case where, after generation, they are uploaded somewhere and their local copies are no longer needed. It would therefore be helpful if there were a way to delete them during the workflow execution (if desired), without deleting the execution data itself; see the sketch below for the kind of thing I have in mind.
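Purely as an illustration, here is a rough sketch of a helper that could be adapted for a Function node. It assumes filesystem binary data mode, that the files belonging to an execution start with its execution ID, and that the data directory inside the container is /home/node/.n8n/binaryData; none of this is an official n8n feature, the execution ID would have to come from whatever your n8n version exposes in the node, and built-in modules would need to be allowed (e.g. via NODE_FUNCTION_ALLOW_BUILTIN):

```ts
// Sketch: remove this execution's binaryData files once the upload is done.
// Assumptions (not official n8n behaviour): filesystem binary data mode,
// filenames starting with the execution ID, and the data directory below.
import { readdirSync, unlinkSync } from "fs";
import { join } from "path";

const binaryDataDir = "/home/node/.n8n/binaryData"; // adjust to your volume mount

export function deleteBinaryFilesForExecution(executionId: string): void {
  for (const name of readdirSync(binaryDataDir)) {
    // Assumption: files belonging to an execution start with its ID.
    if (name.startsWith(executionId)) {
      unlinkSync(join(binaryDataDir, name)); // drop the local copy after the upload
    }
  }
}
```

This would need to run as the last step of the workflow, after the upload has succeeded, so that nothing downstream still needs the local file.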

I also have this issue, running n8n with SQLite via Docker on a Raspberry Pi 4B.

My instance is scheduled to regularly prune executions, but the binary data belonging to those executions is not being deleted. On my current instance, the .n8n folder has grown to 15GB over 97 days (since the last time I pruned manually).
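For anyone who just needs to reclaim the space in the meantime, here is a minimal manual cleanup sketch. It assumes that binaryData files older than your execution pruning window are orphaned and safe to delete, and that the folder sits at the path below; check both assumptions against your own setup (and take a backup) before running it:

```ts
// prune-binary-data.ts
// Rough manual cleanup: delete binaryData files older than the execution
// pruning window (14 days here). Adjust the path and age to your setup.
import { readdirSync, statSync, unlinkSync } from "fs";
import { join } from "path";

const binaryDataDir = "/home/node/.n8n/binaryData"; // adjust to your volume mount
const maxAgeDays = 14;
const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;

let removed = 0;
let freedBytes = 0;

for (const name of readdirSync(binaryDataDir)) {
  const filePath = join(binaryDataDir, name);
  const stats = statSync(filePath);
  if (stats.isFile() && stats.mtimeMs < cutoff) {
    freedBytes += stats.size;
    unlinkSync(filePath);
    removed++;
  }
}

console.log(`Removed ${removed} files, freed ${(freedBytes / 1024 / 1024).toFixed(1)} MB`);
```

Run it with ts-node (or compile it with tsc) on the host where the n8n data directory is mounted.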