Local File Trigger: how to iterate or limit the number of executions

We are testing a RAG system to index local files. Since some of the document indexing (calls to an LLM) takes some time, we are running into timeout issues.

As I understand it, the Local File Trigger executes once per file changed/created/deleted and reports a single file, so splitting and looping does not apply, even though that would likely be the ideal scenario for us if it were possible. Even so, I still see many workflows that use a loop together with the Local File Trigger, and I am not sure why.

I also understand that I cannot limit the maximum number of executions of a specific workflow.

What we would need is either for the workflow to be executed only once per file, or a way to queue executions so that each one runs in order only after the previous one has finished, which might require something like Redis and a queue.

The idea is that if XX files suddenly change, only one (or x) workflow executions run simultaneously, even if the overall process takes longer.

Any help would be appreciated.



## Information on your n8n setup
- **n8n version:** 1.92.2 (docker)
- **Database (default: SQLite):** postgres

Hello @luison,

The easiest way is to use a message broker like RabbitMQ and configure your workflow to publish a new message whenever the file trigger fires. Then you can configure the RabbitMQ Trigger to process only one message at a time.
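
For what it is worth, the "one message at a time" part maps to AMQP prefetch. Below is a minimal standalone sketch (not the n8n node itself) of a consumer with `prefetch(1)`; the queue name `file-index-jobs` and the `indexFile` function are made up for illustration and stand in for whatever the file-trigger workflow publishes and the actual indexing call:

```typescript
// Hypothetical sketch: process file-indexing jobs strictly one at a time.
// Assumes a queue named "file-index-jobs" that the file-trigger workflow publishes to.
import { connect } from "amqplib";

async function indexFile(path: string): Promise<void> {
  // Placeholder for the slow LLM indexing step (e.g. calling a sub-workflow).
  console.log(`indexing ${path}`);
}

async function main() {
  const conn = await connect("amqp://localhost");
  const channel = await conn.createChannel();
  const queue = "file-index-jobs";

  await channel.assertQueue(queue, { durable: true });
  // prefetch(1): the broker hands over at most one unacknowledged message,
  // so the next file is only delivered after the previous one is acked.
  await channel.prefetch(1);

  await channel.consume(queue, async (msg) => {
    if (!msg) return;
    const { path } = JSON.parse(msg.content.toString());
    await indexFile(path);
    channel.ack(msg); // ack only once the work is done
  });
}

main().catch(console.error);
```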

Thank you. I am trying to avoid that, as I find it far from “easy” given our limited volume of tasks (a few hundred a day at most). Having to configure and maintain a RabbitMQ server just to queue that handful seems like overkill.

We could consider a manually built queue on top of a spreadsheet or a database we already have, but I was hoping an easier solution already existed.
There must be many triggers, including webhooks, that need a way to limit or queue their executions.
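
In case it is useful to anyone following along, a do-it-yourself queue over the Postgres database we already run could look roughly like the sketch below. It assumes a hypothetical `file_queue` table that the file-trigger workflow inserts rows into, and a scheduled worker (or workflow) that claims one pending row at a time; `FOR UPDATE SKIP LOCKED` keeps concurrent pollers from grabbing the same file:

```typescript
// Hypothetical sketch of a Postgres-backed file queue.
// Assumed table (names made up for illustration):
//   CREATE TABLE file_queue (
//     id     serial PRIMARY KEY,
//     path   text NOT NULL,
//     status text NOT NULL DEFAULT 'pending'
//   );
import { Client } from "pg";

// Claim exactly one pending row; SKIP LOCKED makes concurrent pollers skip
// rows another worker has already locked, so each file is claimed only once.
async function claimNextFile(client: Client): Promise<{ id: number; path: string } | null> {
  const res = await client.query(`
    UPDATE file_queue
       SET status = 'processing'
     WHERE id = (
       SELECT id FROM file_queue
        WHERE status = 'pending'
        ORDER BY id
        LIMIT 1
          FOR UPDATE SKIP LOCKED
     )
     RETURNING id, path
  `);
  return res.rowCount ? res.rows[0] : null;
}

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  const job = await claimNextFile(client);
  if (job) {
    // Run the slow indexing step here (e.g. call the indexing sub-workflow),
    // then mark the row as finished so the next poll picks up a new file.
    await client.query(`UPDATE file_queue SET status = 'done' WHERE id = $1`, [job.id]);
  }

  await client.end();
}

main().catch(console.error);
```

Running this from a single scheduled worker (or with the schedule interval longer than a typical indexing run) would keep executions effectively sequential without any extra infrastructure beyond the database we already maintain.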