When the Remove Duplicates node hits its history size limit, does it behave like a FIFO queue, where the oldest entries are removed automatically to make space for new ones?
If not, is there any alternative short of clearing the entire history?
Thank you for your reply. I have read the article, which mentions the following:
History Size: The number of items n8n stores to track duplicates across executions. The value of the Scope option determines whether this history size is specific to an individual Remove Duplicate node instance or shared with other instances in the workflow. By default, n8n stores 10,000 items.
In my case, the Remove Duplicates node receives around 30–50 input items per daily run, and the same data may reoccur roughly every two months. I therefore set the history size limit to 1,000 for a shorter retention period, expecting the oldest entries to be removed automatically as new ones arrive once the limit is reached.
However, this does not seem to happen. After a few weeks of use, I hit an error indicating that the limit has been reached, and running the Clear Deduplication History operation wipes out the entire history, which is not ideal.
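To make it concrete, the eviction behavior I expected is something like the sketch below. This is purely hypothetical pseudocode of a bounded FIFO deduplication history, not n8n's actual implementation (that's exactly what I'm asking about); the class and method names are my own:

```javascript
// Hypothetical sketch of FIFO eviction — NOT n8n's actual code.
class BoundedDedupHistory {
  constructor(limit) {
    this.limit = limit;
    this.seen = new Set(); // fast membership checks
    this.order = [];       // insertion order, oldest first
  }

  // Returns true if the key is new (kept), false if it's a duplicate.
  add(key) {
    if (this.seen.has(key)) return false;
    if (this.seen.size >= this.limit) {
      // Evict the oldest entry instead of raising a limit-reached error.
      const oldest = this.order.shift();
      this.seen.delete(oldest);
    }
    this.seen.add(key);
    this.order.push(key);
    return true;
  }
}

// With a limit of 3, adding a 4th key evicts the oldest one.
const history = new BoundedDedupHistory(3);
['a', 'b', 'c', 'd'].forEach((k) => history.add(k));
console.log(history.order); // 'a' was evicted: [ 'b', 'c', 'd' ]
```

With this behavior, old entries would simply age out once the limit is reached, instead of the node erroring and forcing a full history wipe.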
Given this, is there an alternative approach or best practice you could recommend for handling my use case?