Hello everyone,
I’m trying to get a RAG (Retrieval-Augmented Generation) system working. It queries a PDF that has been vectorized into Qdrant. The response returned by the vector store tool (using Llama 3.1 running locally via Ollama) is always correct, but the agent doesn’t seem to read the vector store’s output and replies that it doesn’t have enough information.
I’m using the same chat model for both the vector store tool and the agent.
Can anyone help me understand why this is happening or suggest a solution?
Thank you in advance for your assistance.
Screenshot: the vector store tool returns a perfect answer.
Screenshot: the chat model replies that it doesn’t have enough information.
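To illustrate the symptom (this is not n8n’s internal code, just a sketch with a hypothetical `build_agent_prompt` helper): the chunks returned by the vector store tool must end up in the context the chat model actually sees. If that hand-off fails, the model is answering from an empty context, which produces exactly the “I don’t have enough information” reply.

```python
def build_agent_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble the prompt the chat model receives.

    If retrieved_chunks is empty (the tool output never reaches the
    agent), the model has no grounding and will say it lacks
    information -- matching the behavior described above.
    """
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer using only the context below. If the context is empty, "
        "say you don't have enough information.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Tool output wired into the agent: the context section is populated.
good = build_agent_prompt(
    "What does the PDF say about refunds?",
    ["Refunds are issued within 30 days of purchase."],
)

# Failure mode from this post: the agent never receives the chunks.
bad = build_agent_prompt("What does the PDF say about refunds?", [])
```

In the second case the model is technically behaving correctly, since its context really is empty; the problem to track down is why the tool output isn’t reaching the agent’s prompt.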
Information on your n8n setup
- n8n version: 1.70.3
- Database (default: SQLite): default
- n8n EXECUTIONS_PROCESS setting (default: own, main): default (Docker Compose installation)
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Windows 11 24H2