Describe the problem/error/question
Hi, I'm hoping someone can help me figure this out.
How do I insert a Vector Store Retriever node between the "When chat message received" trigger and the AI Agent node?
When building an AI Agent workflow, I call some MCP services, but before the agent runs I need to do some RAG-style information augmentation: retrieving relevant documents and adding them directly to the context, rather than acting on LLM output.
Is there a way to run the Vector Store Retriever before the AI Agent? (Or alternatively, is there a way to prevent the retrieval step's LLM module from producing output?) Thanks in advance for any answers.
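To make the goal concrete, here is a rough sketch of the glue I imagine between the trigger and the agent: a Vector Store node in "Get Many" mode, followed by a Code node that folds the retrieved text into the chat input before it reaches the AI Agent. This is only a sketch of what I'm trying to achieve, not something I have working; the trigger node name "When chat message received" and fields like `document.pageContent` and `chatInput` are assumptions based on my setup.

```javascript
// n8n Code node, "Run Once for All Items" mode.
// Previous node: a Vector Store node in "Get Many" mode (assumed to
// output items shaped like { document: { pageContent, metadata }, score }).

// Collect the retrieved document texts.
const contextChunks = $input.all()
  .map(item => item.json.document?.pageContent ?? '')
  .filter(text => text.length > 0);

// Pull the original user message from the chat trigger
// (use whatever your trigger node is actually named).
const userMessage = $('When chat message received').first().json.chatInput;

// Build an augmented prompt: retrieved context first, then the question.
const augmentedInput = [
  'Use the following context when answering:',
  ...contextChunks,
  '',
  `User question: ${userMessage}`,
].join('\n');

// Hand a single item to the AI Agent node, which can then use
// {{ $json.chatInput }} as its prompt.
return [{ json: { chatInput: augmentedInput } }];
```

The idea is that the AI Agent's prompt field would then reference `{{ $json.chatInput }}`, so the retrieval step itself never involves an LLM generating anything. Does this approach make sense, or is there a more idiomatic way?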
Information on your n8n setup
- n8n version: 1.93.0
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker