We are testing a RAG system that indexes local files. Since some of the document indexing steps (calls to an LLM) take a while, we are running into timeouts.
As I understand it, the Local File Trigger executes once per file changed/created/deleted and reports a single file, so splitting and looping does not apply, even though that would likely be the ideal scenario for us if it were possible. Even so, I still see many workflows that use a loop with the Local File Trigger, and I am not sure why.
I also understand that I cannot limit the maximum number of concurrent executions of a specific workflow either.
What we would need is either for the workflow to execute only once per file, or a way to queue executions so they run in order, each starting only once the previous one has finished, which might require something like Redis and a queue.
The idea is that if XX files suddenly change, only one (or x) workflow executions run simultaneously, even if the overall process then takes longer.
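To make the desired behavior concrete, here is a minimal Python sketch of the queueing semantics described above, not tied to n8n or Redis. The `MAX_WORKERS` constant, the in-memory `queue.Queue`, and the `index_file` function are all placeholders standing in for the real Redis-backed queue and the slow LLM indexing call:

```python
import queue
import threading
import time

MAX_WORKERS = 1  # at most this many indexing jobs run at once (placeholder value)

jobs = queue.Queue()   # in-memory stand-in for a Redis list
processed = []         # record of completed files, in completion order
lock = threading.Lock()

def index_file(path):
    """Placeholder for the slow LLM-backed indexing call."""
    time.sleep(0.01)
    with lock:
        processed.append(path)

def worker():
    while True:
        path = jobs.get()   # blocks until a job is available
        if path is None:    # sentinel value: shut the worker down
            jobs.task_done()
            break
        index_file(path)
        jobs.task_done()

# Start a fixed-size pool; the queue guarantees ordered, bounded processing.
workers = [threading.Thread(target=worker) for _ in range(MAX_WORKERS)]
for w in workers:
    w.start()

# Simulate many files changing at once: they get queued, not run in parallel.
for i in range(5):
    jobs.put(f"file_{i}.txt")

jobs.join()  # wait until every queued file has been indexed
for _ in workers:
    jobs.put(None)
for w in workers:
    w.join()

print(processed)
```

With `MAX_WORKERS = 1`, files are indexed strictly one at a time in arrival order; raising it to x allows x simultaneous jobs, which is exactly the behavior we would want from the workflow.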
Any help would be appreciated.
## Information on your n8n setup
- **n8n version:**
1.92.2 (docker)
- **Database (default: SQLite):**
postgres