How do I get an LLM to ask clarifying questions if the user doesn't supply enough information for a useful answer?

I have a RAG pipeline built with a QA chain, but it has no memory. When the QA chain doesn't find an answer, I want to ask the user for clarifying information about the question they asked. But since this block has no memory, the context of the original question is lost.
How can I get around this limitation? Would creating an AI agent that uses the QA chain as a tool be an option?

Hey @DanielD ,
Yes, using an AI Agent connected to your vector store and a memory would be the way to go here. It's a bit more work, but it will also give you a lot more flexibility.
Hope this helps!

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.