Does the vector database retriever always take the prompt from the Q&A LLM node as the basis for the vector search, or can I define a custom vector query and insert the retrieved documents via a dynamic variable?
In my case, I want to query helpful, relevant document chunks from my vector database and append them to a predefined LLM prompt.
However, the LLM prompt itself has nothing to do with the actual vector search, and I would love for these to be two separate processes. I am not sure whether that is possible in n8n.
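To make the question concrete, here is a minimal plain-Python sketch of the two-step flow I have in mind (this is purely illustrative, not n8n code — the document store, embeddings, and function names are made up for the example):

```python
# Conceptual sketch of the two steps I'd like to keep separate:
#   (1) a custom vector query that retrieves relevant chunks
#   (2) a predefined LLM prompt that only receives those chunks as a variable
from math import sqrt

# Toy "vector database": (embedding, text) pairs standing in for real chunks.
DOCS = [
    ([1.0, 0.0], "Chunk about invoices"),
    ([0.0, 1.0], "Chunk about shipping"),
    ([0.9, 0.1], "Chunk about billing"),
]

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    # Step 1: the vector search, driven by its own query,
    # completely independent of the LLM prompt text.
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(chunks):
    # Step 2: a predefined prompt; the retrieved chunks are injected
    # via a placeholder, like a dynamic variable in an n8n expression.
    context = "\n".join(f"- {c}" for c in chunks)
    return f"You are a support agent. Use this context:\n{context}\n\nAnswer the user."

chunks = retrieve([1.0, 0.05])  # custom query vector, unrelated to the prompt
print(build_prompt(chunks))
```

Essentially, I want step 1 and step 2 to be wired together only through the retrieved chunks, not through a shared prompt.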
I appreciate any ideas, tips & tricks! Thank you in advance!