Building a Conversational Agent to retrieve certain data

Describe the problem/error/question

I’m trying to build a customer service chatbot flow, and I want to retrieve the customer’s name and birthday throughout the conversation.

If either piece of info is missing, the bot should ask the user for it.
If both name and birthday have been mentioned, it should just pass a JSON result (not a dialog response to the user).
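To make the target concrete, this is the kind of JSON result I have in mind once both fields are known (the field names and date format are just an example):

```json
{
  "name": "Jane Doe",
  "birthday": "1990-04-15"
}
```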

How do I achieve this? Thank you.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

ChatUI Input:
Hi, I’d like to ask a question about my account

ChatUI Response:
[No response. Make sure the last executed node outputs the content to display here]

Expected Response:
ask for name and birthday

Information on your n8n setup

  • n8n version: 1.67.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Windows

Hi @ocus

Thanks for posting here and welcome to the community! :tada:

So you basically have two tasks that your chatbot needs to do. You can use the AI Agent only for the task of “getting the required information” from the user, and the actual “parsing” can be done by an LLM Chain afterwards.

Here’s what this would look like:

A few notes:

  • The AI Agent gets simple instructions, so all it has to do is converse with the user until the required information has been obtained (you don’t have to mention {{ $json.chatInput }} in the system message, as this comes from the prompt parameter automatically)
  • The Chat Memory Manager helps the agent keep track of the “iterations” with the human
  • This is quite a complex task for an AI, so you will probably see differences between models; Ollama models may not perform as strongly as an OpenAI model, for example
  • You could probably connect an email tool node for the agent to send the email directly, though that would not include the correct parsing, so for now you’d have to take a more linear approach and do the emailing at the end of the workflow, after the LLM Chain
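As a rough starting point, the agent’s system prompt could look something like this (just a sketch — adjust the wording and date format to your use case):

```text
You are a customer service assistant. Your only job is to collect the
user's name and birthday.
- If either piece of information is still missing, reply with one short
  follow-up question asking for it.
- Once you have both, reply with ONLY a JSON object of the form
  {"name": "...", "birthday": "YYYY-MM-DD"} and nothing else.
```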

Hi @ria

Thanks for the reply!

Sadly I’m still struggling to accomplish the task.

I want to achieve something like this:

  1. Ask the user about their name and birthday.
  2. If either info is missing, ask them.
  3. If, and only if, both are retrieved, then pass a JSON result of it.

I switched the model to Gemini (as you suggested, thanks!) and it can now successfully ask the user for the desired information. (You can disconnect the IF node to test the result.)

Now I want to add an IF node to my flow control to determine whether the chat model has completed its part and can pass on the final result.
If true, connect it to a Structured Output Parser to extract the JSON.
If false (meaning either the name or the birthday is still missing), continue the conversation and ask further questions.

The problem now is that I can’t manage to set up the false path so that it continues the conversation.
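For reference, what I effectively want the IF branch to check is whether the agent’s reply already contains the final JSON. In plain JavaScript (e.g. in a Code node before the IF; the field names and the regex are my own sketch, not an n8n built-in), the check would be:

```javascript
// Sketch: detect whether the agent's reply already contains the
// completed {name, birthday} JSON, or is still a follow-up question.
function extractCustomerInfo(reply) {
  const match = reply.match(/\{[\s\S]*\}/); // first {...} block, if any
  if (!match) return null; // no JSON yet -> keep the conversation going
  try {
    const data = JSON.parse(match[0]);
    if (typeof data.name === 'string' && typeof data.birthday === 'string') {
      return { name: data.name, birthday: data.birthday };
    }
  } catch (e) {
    // not valid JSON -> treat as a normal chat reply
  }
  return null;
}
```

The true branch would then feed the Structured Output Parser, and the false branch would route back to asking the user.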

Any suggestion/advice is appreciated, thanks!

Hi Ocus,

I would be interested in how we can do this as well. I may try it too :slight_smile:

Thought 1: more detailed flow
Maybe it’s too complex for one single AI agent. And because the human/machine interaction in your flow is well defined, making this logic more visible and more guided could help.
In Microsoft Copilot they have a guided logic called “Topic”: when the user triggers a specific word, the answer follows a specific pattern.

Thought 2: did you test all agents?
I’m not sure what it means, but it makes me think that we should test all types of AI Agent nodes

Thought 3: cheat with RAG?
Maybe create a RAG with an empty database, so the only data the agent gets is the data coming from the user?

Thought 4: Master the prefix!
Quote from the link: Basically, this changes the tool response prompt if the tool returns with the “CONTEXT:” prefix

Good luck


Update: I succeeded!

Thanks Ria & @Garry_Jakubiak for the insight.

The problem I was facing: I could get the LLM to ask for the info I wanted (with the IF node disconnected), but it struggled to ask further questions (in a conversational style) when the info was still missing after I added the IF node.

I ended up adding a Basic LLM Chain node to the false path to repeat the reply from the AI Agent node, and ta-da!

The whole process looks like this:

I store the info in Redis and also added a path to manually inspect the stored data.
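In case it helps anyone, the shape of what I write to Redis is roughly this (the `customer:<sessionId>` key scheme is just my own convention, not anything n8n requires):

```javascript
// Sketch of the Code-node step that shapes the parsed result into a
// Redis key/value pair (the key scheme is my own convention).
function toRedisEntry(sessionId, info) {
  return {
    key: `customer:${sessionId}`, // one entry per chat session
    value: JSON.stringify({ name: info.name, birthday: info.birthday }),
  };
}
```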

The next step for my use case will be calling this module from another n8n flow and returning the result from here, but I guess that will be another topic. :blush:


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.