Local Qdrant doc upload error

Hi

I am testing the Qdrant vector store locally. This has worked for a number of smaller documents (approx. 5,000 KB), but when I try to add a larger PDF (approx. 20,000 KB) I get the following error: "Please execute the whole workflow, rather than just the node. (Existing execution data is too large.)"

The Default Data Loader is set to PDF and the option to split pages is set to On.
I have tried different settings for the token/text splitter (e.g. 512 and 1000) using the nomic text embedding model.

After reading other posts describing a similar issue, I have amended the Docker Compose YAML to include:
environment:
  - N8N_PAYLOAD_SIZE_MAX=1024 # payload limit in MB (1024 MB = 1 GB), adjust as needed
  - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
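For context, here is a minimal sketch of where those variables sit in my compose file. The service name, image tag, port, and volume are placeholders rather than an exact copy of my setup:

```yaml
# Minimal docker-compose.yml sketch; service name, image, port and volume
# are illustrative placeholders. The environment entries are the relevant part.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_PAYLOAD_SIZE_MAX=1024                # payload limit in MB
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem  # keep binary data on disk
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```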

Are there any other methods that can be used to pass large files into the Vector store?


## Information on your n8n setup
- **n8n version:** latest
- **Database:** Nil (default)
- **n8n EXECUTIONS_PROCESS setting (default: own, main):**
- **Running n8n via:** Docker
- **Operating system:** Windows 11

1- Run the entire workflow, not just the node.
Always start from the trigger (e.g., a Manual Trigger) and use "Run Workflow" rather than "Execute node." This frees memory between nodes and avoids the excessive accumulation of execution data from a previous partial run.

2- Process the PDF in batches.
For a large document it helps to divide the work into parts:
- Use the Split In Batches (Loop Over Items) node to process pages or chunks a few at a time.
- Insert into Qdrant in batches rather than in one large upload.
This reduces the load on memory, the database, and the intermediate payload.

3- Check the actual limit in n8n.
Confirm that N8N_PAYLOAD_SIZE_MAX=1024 is actually being picked up by the container (check the logs on startup). Also consider setting EXECUTIONS_PROCESS to main so memory can be freed between nodes.

4- Use N8N_DEFAULT_BINARY_DATA_MODE=filesystem.
This stores temporary binary data (the uploaded PDF) on disk instead of in memory, reducing RAM consumption. Make sure execution-data pruning is enabled and that there is enough disk space available; see the sketch below.
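As a rough sketch (to adapt, not a prescription), the pruning-related variables could sit next to the ones you already set; the retention value here is only illustrative:

```yaml
environment:
  - N8N_DEFAULT_BINARY_DATA_MODE=filesystem  # keep binary data on disk, not in RAM
  - EXECUTIONS_DATA_PRUNE=true               # prune old execution data automatically
  - EXECUTIONS_DATA_MAX_AGE=168              # retention in hours (illustrative: 7 days), adjust as needed
```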