I am using Qdrant and have several collections within my cluster. I want to retrieve data chunks from multiple collections and give them ALL at once to an LLM as context. Unfortunately, the Question and Answer chain can only retrieve documents from one collection, and it returns only the LLM output, not the chunks themselves.
Is there a way to only retrieve the data chunks?
I’d like to bypass the LLM involvement until later in my workflow.
It looks like your topic is missing some important information. Could you provide the following, if applicable?
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system:
Hey @Anthony_Lee Welcome to the community!
One approach is to use the HTTP Request node to query the Qdrant API directly. Note that you'll have to generate the embedding vectors separately, however.
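To make that concrete, here is a minimal sketch of the request body the HTTP Request node would send to Qdrant's REST search endpoint (`POST /collections/{collection}/points/search`). The URL, collection name, and example vector are placeholders, and the query embedding has to be computed beforehand, as noted above.

```python
import json

QDRANT_URL = "http://localhost:6333"  # assumption: a locally hosted Qdrant instance
COLLECTION = "my_collection"          # assumption: one of your collection names


def build_search_request(vector, limit=5):
    """Body for POST {QDRANT_URL}/collections/{COLLECTION}/points/search."""
    return {
        "vector": vector,      # the query embedding, generated separately
        "limit": limit,        # how many chunks to return
        "with_payload": True,  # include the stored text/metadata of each chunk
    }


# In n8n, this JSON would go in the HTTP Request node's body.
body = build_search_request([0.1, 0.2, 0.3], limit=3)
print(json.dumps(body))
```

With `with_payload: true`, the response contains the raw chunks and their scores, with no LLM in the loop.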
Thank you. Yes, that would be one way to solve the issue. Unfortunately, it is a bit complex and beyond my skill level at the moment. This has been such a common issue I've run into. It seems every Vector Store solution forces you to communicate with an LLM, rather than just letting you retrieve the data chunks.
Voiceflow has an API that includes the data chunks, so that has been a solution I’ve used. But it only lets you have two stores on the free account, and the next tier is $50 (which is a bit much imo).
Never mind. Sonnet 3.5 came in clutch. I was confused by all the HTTP modules in the example (still curious what the logic was there), but I managed to do it with two API calls: an OpenAI embedding call and a Qdrant Point Search, and voilà.
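For anyone landing here later, the two-call workflow generalizes to multiple collections: embed the query once, run a Point Search against each collection, then merge the hits by score before handing them to the LLM. A hedged sketch of the merge step, with made-up collection names and scores for illustration:

```python
def merge_chunks(results_by_collection, top_k=5):
    """Flatten per-collection Qdrant search results and keep the best-scoring chunks."""
    merged = []
    for collection, hits in results_by_collection.items():
        for hit in hits:
            merged.append({
                "collection": collection,          # remember where the chunk came from
                "score": hit["score"],             # Qdrant similarity score
                "payload": hit.get("payload", {}), # the stored text/metadata
            })
    # Highest-scoring chunks first, regardless of source collection
    merged.sort(key=lambda h: h["score"], reverse=True)
    return merged[:top_k]


# Example with fake results from two collections:
results = {
    "docs_a": [{"score": 0.91, "payload": {"text": "chunk A1"}}],
    "docs_b": [{"score": 0.87, "payload": {"text": "chunk B1"}},
               {"score": 0.95, "payload": {"text": "chunk B2"}}],
}
top = merge_chunks(results, top_k=2)
print([c["payload"]["text"] for c in top])  # ['chunk B2', 'chunk A1']
```

The combined list can then be passed to the LLM as context later in the workflow, which was the original goal of bypassing the Q&A chain.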
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.