Ok - one of the workflows just ran and the cleanup happened!!
So my understanding is that:

- one triggers as soon as a workflow runs
- the other runs only when Docker is restarted and reduces the file size of the actual SQLite db file

Don't both need to be there?
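For reference, this is roughly the env setup being discussed (a sketch, shown as plain env-file lines; the exact unit MAX_AGE is interpreted in - hours vs days - is worth double-checking in the n8n docs):

```shell
# Sketch of the execution-pruning settings discussed in this thread.
EXECUTIONS_DATA_PRUNE=true    # enable pruning after workflow runs
EXECUTIONS_DATA_MAX_AGE=5     # intended max age of stored executions
```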
Looks like the PRUNE directive cleaned up all executions older than today, so it ignores the MAX_AGE directive and just prunes based on some internal optimization algorithm.
I don't think MAX_AGE works in isolation at all.
Just checked on Nov 22 and I still have entries from Nov 10 with MAX_AGE set to 5.
Seems like a #BUG to me
Now I added EXECUTIONS_DATA_PRUNE and restarted, and as soon as I ran a workflow the execution history dropped from 1100 entries to 12.
Couldn't reply due to the restriction of 3 consecutive replies, hence editing my previous reply.
The current env variable EXECUTIONS_DATA_MAX_AGE=5 just doesn't work.
What works is to set EXECUTIONS_DATA_PRUNE=true and let n8n pick some date or size threshold for deleting old executions. This gives you no control over what gets deleted.
I made the following workflow to delete executions that have no errors (you can configure it to select all executions) and are older than x days (set in the Set node).
This is a more flexible alternative to setting EXECUTIONS_DATA_MAX_AGE=X.
Set this on a daily schedule to keep n8n less cluttered and working fast.
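For anyone who prefers a standalone script over a workflow, the same idea can be sketched in Python against n8n's SQLite file. The table and column names here (`execution_entity`, `stoppedAt`, `finished`) are my assumptions about the schema, so inspect your own `database.sqlite` before running anything like this - and back it up first:

```python
import sqlite3
from datetime import datetime, timedelta

def prune_executions(db_path: str, max_age_days: int = 5) -> int:
    """Delete old, successfully finished executions from an n8n SQLite db.

    NOTE: the table/column names (execution_entity, stoppedAt, finished)
    are assumptions about n8n's schema -- verify against your database.
    Returns the number of rows deleted.
    """
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM execution_entity "
            "WHERE stoppedAt < ? AND finished = 1",
            (cutoff.isoformat(sep=" "),),
        )
        conn.commit()
        # Reclaim file space, like the vacuum-on-restart behaviour above.
        conn.execute("VACUUM")
        return cur.rowcount
    finally:
        conn.close()
```

Run it from cron on a daily schedule if you want the cleanup independent of n8n itself.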
How can this be added to the prebuilt template library?