Hey,
I am looking to create a chatbot for in-depth analysis of user sales data. That is, the user would open a chat window via our website, ask questions, and the LLM would respond with the analysis.
In the background, in n8n, we would have a webhook to field the user query, an HTTP Request node to load that user's data, and then an Agent node with an LLM and tools to analyse the data and respond.
We were thinking we would first need an LLM to understand the user's intent, i.e. are they asking for sales data or product-specific data, and did they provide a date range for the required data. We would then branch on the intent with an IF node and call an HTTP Request for the correct data set. Finally we would call a 'specialist' Agent whose LLM is prompted and equipped with tools for that specific data set (to keep the agent controlled and avoid hallucinations).
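To make the IF node's job deterministic, the plan is to force the intent step to return strict JSON rather than free text. A minimal sketch of that contract, assuming Pydantic for validation; the field names and intent values are just illustrative:

```python
# Sketch of the intent-classification contract: the first LLM is prompted to
# return strict JSON matching this schema. Names here are illustrative only.
from enum import Enum
from typing import Optional
from pydantic import BaseModel

class Intent(str, Enum):
    SALES = "sales"
    PRODUCT = "product"
    UNKNOWN = "unknown"

class IntentResult(BaseModel):
    intent: Intent
    date_from: Optional[str] = None   # ISO date, None if the user gave no range
    date_to: Optional[str] = None
    needs_clarification: bool = False # ask a follow-up before fetching any data

# The IF node in n8n branches on `intent`; a missing date range
# (needs_clarification=True) sends a clarifying question back instead.
raw = '{"intent": "sales", "date_from": "2024-01-01", "date_to": "2024-03-31", "needs_clarification": false}'
result = IntentResult.model_validate_json(raw)
print(result.intent, result.date_from, result.date_to)
```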
We have that working, it seems. Where it gets complicated is the continued interaction with the user: they ask more in-depth questions, and we need to know to route them back to their existing agent, with the already-loaded data and the previous questions asked. If somewhere in the conversation the user changes intent (moves from sales to product questions), the workflow needs to reroute to a different agent and load the relevant data set via the HTTP Request.
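The routing rule itself seems simple enough once session state is persisted somewhere between webhook calls (n8n workflow static data, or an external store). A rough sketch of the decision, with an in-memory dict standing in for that store; the keys and actions are our own naming:

```python
# Sketch of session-level routing: reuse the existing agent while the intent is
# unchanged, reroute and reload data when it changes. Dict stands in for a real
# session store.
sessions: dict[str, dict] = {}

def route(session_id: str, new_intent: str) -> dict:
    state = sessions.get(session_id)
    if state is None or state["intent"] != new_intent:
        # First message, or the user switched topics: load the new data set
        # and start the matching specialist agent.
        state = {
            "intent": new_intent,
            "dataset": f"fetch_{new_intent}_data",  # placeholder for the HTTP Request
            "history": [],
            "action": "load_data_and_new_agent",
        }
        sessions[session_id] = state
    else:
        # Same topic: reuse the loaded data and the conversation history.
        state["action"] = "continue_existing_agent"
    return state

print(route("abc", "sales")["action"])    # load_data_and_new_agent
print(route("abc", "sales")["action"])    # continue_existing_agent
print(route("abc", "product")["action"])  # load_data_and_new_agent
```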
Does what I am describing here sound plausible, or am I going down an unnecessary path? The idea is that we need a specialist agent per data set, geared up with tools (or external API calls to Python) to do the in-depth analysis of that specific data. We would rather control the analysis deterministically and let the LLM respond with a summary than rely on the LLM to do the statistical analysis by itself.
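For what we mean by "control the analysis": the statistics would be computed in plain Python, and the specialist agent would only receive the finished numbers as tool output to put into words. A toy sketch (column names and data are made up):

```python
# Sketch of a deterministic analysis tool: plain Python computes the numbers,
# the LLM only verbalises the returned dict. Data and field names are made up.
import statistics

def analyse_sales(rows: list[dict]) -> dict:
    revenues = [r["revenue"] for r in rows]
    return {
        "n_orders": len(revenues),
        "total_revenue": round(sum(revenues), 2),
        "mean_order_value": round(statistics.mean(revenues), 2),
        "median_order_value": round(statistics.median(revenues), 2),
        "stdev_order_value": round(statistics.stdev(revenues), 2) if len(revenues) > 1 else 0.0,
    }

rows = [{"revenue": v} for v in (120.0, 80.5, 310.0, 95.25)]
summary = analyse_sales(rows)
# The specialist agent gets `summary` as tool output and only narrates it,
# e.g. "You had 4 orders totalling 605.75 in the selected period."
print(summary)
```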
Much appreciated.