After another database crash caused by the disk filling up with binaryData, I am looking for a way to automatically delete that data after scripts run (or a binaryData size limit in GB). I will attach my configuration below.
What is the error message (if any)?
The database crashed due to overload.
I deleted the files manually via the Linux console (as root user).
My data is in the n8n_n8n-data volume.
# first, find all filesystems; the biggest one contains binaryData
df -h
# okay, /dev/sda1 turns out to be the biggest
# create a mount point for the volume
sudo mkdir /mnt/sda1
# mount it
sudo mount /dev/sda1 /mnt/sda1
# change into /mnt/sda1
cd /mnt/sda1
# list the ten largest directories
du / | sort -nr | head -10
# the big directory is /var/lib/docker/volumes/n8n_n8n-data/_data/binaryData, inside my n8n_n8n-data volume
# change into it
cd /var/lib/docker/volumes/n8n_n8n-data/_data/binaryData
# okay, we are in the folder; check its contents with a list command
ls
# yes, there are metadata files; delete every file inside this folder, but keep the subdirectories
find . -mindepth 1 -type f -delete
# voilà - all the unneeded binaryData files are deleted!
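For future reference, the whole cleanup can also be done in one line without changing directories (a sketch assuming the same volume path found above; adjust it if your volume name differs):

sudo find /var/lib/docker/volumes/n8n_n8n-data/_data/binaryData -mindepth 1 -type f -delete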
Next, I want to either limit the size of the binaryData folder or clean it every week to avoid another overload.
My max age is 31 days because I want to keep the full history of important executions, but I don't need the binary files once a scenario has finished.
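One option outside n8n would be a weekly cron job on the host; here is a sketch, added to root's crontab (crontab -e), assuming the host volume path found above and a 7-day cutoff (the -mtime +7 value is my choice, not something fixed by n8n):

# HYPOTHETICAL cron entry: every Sunday at 03:00, delete binary files older than 7 days
0 3 * * 0 find /var/lib/docker/volumes/n8n_n8n-data/_data/binaryData -mindepth 1 -type f -mtime +7 -delete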
Important environment info:
|DB_SQLITE_VACUUM_ON_STARTUP|TRUE|
|EXECUTIONS_DATA_MAX_AGE|744|
|EXECUTIONS_DATA_PRUNE|true|
|EXECUTIONS_PROCESS|main|
|N8N_DEFAULT_BINARY_DATA_MODE|filesystem|
|NODE_ENV|production|
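For context, this is roughly how those variables would be passed in a plain docker run setup (a sketch only; the n8nio/n8n image, port, and container name are assumptions based on the standard setup, and the volume mapping matches the paths discussed above):

docker run -d --name n8n \
  -p 5678:5678 \
  -v n8n_n8n-data:/home/node/.n8n \
  -e DB_SQLITE_VACUUM_ON_STARTUP=TRUE \
  -e EXECUTIONS_DATA_MAX_AGE=744 \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  n8nio/n8n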
Information on your n8n setup
**n8n version:** latest
**Database (default: SQLite):** default
**Running n8n via (Docker, npm, n8n cloud, desktop app):** Docker
From my understanding, the binary data deletion is tied to the corresponding executions (meaning the name of the N8N_PERSISTED_BINARY_DATA_TTL environment variable is a bit misleading here; it only specifies how often n8n checks for binary data to be deleted).
So your best bet, if you want to keep executions for 31 days but binary data only for 7 days, would be to simply set up a workflow with a Schedule Trigger running once a week and then call your find commands through n8n's Execute Command node.
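For example, the Execute Command node could run something like this (a sketch assuming the standard container path mentioned further down; -mtime +7 deletes only files older than 7 days, so recent binary data survives):

# delete binary files older than 7 days, keeping the folder structure
find /home/node/.n8n/binaryData -mindepth 1 -type f -mtime +7 -delete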
I know how to make a Schedule Trigger, but how do I perform such an action via the Execute Command node? Is it the same principle as I described above? So if I know where the folder is, I execute the command as above, and everything will work without going to the console?
This is a workflow that I've been using to immediately delete binary files after an execution:
Basically, if there's no requirement for the binary files to exist beyond an execution's life, I trigger the above workflow at the end, passing it the execution id.
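As a rough illustration only: if the binary file names in your n8n version contain the execution id (this is an assumption on my part, not something confirmed in this thread), the called workflow's Execute Command node might run something like the following, with $json.executionId being the field the calling workflow passed in:

# HYPOTHETICAL: assumes file names embed the execution id; verify the actual
# file layout of your n8n version before relying on this
find /home/node/.n8n/binaryData -type f -name "*{{ $json.executionId }}*" -delete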
> Is it the same principle as I described above? So if I know where the folder is, I execute the command as above to execute the delete and everything will work without going to the console?
Pretty much, though the paths will look slightly different from within your docker container.
Assuming you're using a standard Docker image, the folder seen by the container would be /home/node/.n8n/binaryData/.
My solution: first, if binaryData files exist, run an Execute Command node with this command to find the ten largest folders:
du / | sort -nr | head -10
Then copy the path of the big folder from the command node's results into the commands below.
My example:
# change into the folder
cd /home/node/.n8n/binaryData
# list the files
ls
# if files and metadata exist, delete all files in the folder
find . -mindepth 1 -type f -delete
# run a second list command to make sure the files are deleted
ls
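The steps above can also be chained into a single line in the Execute Command node (a sketch; the && operators make each step run only if the previous one succeeded):

cd /home/node/.n8n/binaryData && find . -mindepth 1 -type f -delete && ls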