I am building a RAG pipeline to ingest and embed large oncology guidelines into Qdrant. Since I could not ingest all of the files at once because of n8n’s memory limit, I first split them into 38 batches stored in my Google Drive, and then loop over each batch, chunking and embedding its files.
However, my workflow keeps crashing unexpectedly. In the latest execution, it ran for almost 10 minutes, during which at least some chunks were apparently embedded (I could see the “Points (approx)” field updating in Qdrant), but then the workflow crashed. Could this be a timeout on n8n’s side, since I am on n8n Cloud?
Is there a different approach you would recommend instead?
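To make the setup concrete, here is a minimal plain-Python sketch of what the workflow does per batch. The chunk size, overlap, and the embed/upsert step are illustrative assumptions, not my actual node settings; in reality each step is an n8n node.

```python
# Sketch of the batched ingest loop (assumed parameters, not real settings).

def split_into_batches(files, n_batches=38):
    """Round-robin the file list into n_batches groups, mirroring the
    manual 38-way split I made in Google Drive."""
    return [files[i::n_batches] for i in range(n_batches)]

def chunk_text(text, chunk_size=500, overlap=50):
    """Split a document into overlapping chunks so each embedding
    request stays small."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def ingest(files):
    for batch in split_into_batches(files):
        for doc in batch:
            for chunk in chunk_text(doc):
                # Placeholder: in the real workflow this is the embedding
                # call followed by a Qdrant upsert node.
                pass
```

Each batch is small enough to stay under the memory limit on its own; the crash happens only when looping over all 38 in one execution.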
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: 1.121.3
- Database (default: SQLite): no database
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: macOS Tahoe 26.1