The first two steps would be a workflow that starts with the form (as you mentioned, the user fills it in), then the agent analyzes it, and finally the user is directed to the chat link; at that point a second workflow begins.
But if I think about it, I’d actually keep it all within a single workflow and just start directly with the chat (for example, on Telegram).
It could begin with a welcome message, then the user fills in the form, and continues within the same chat.
That way everything stays in one place and within one workflow.
Thanks @mohamed3nan, I appreciate knowing it’s possible, but it’s the actual execution of this that’s difficult, and that’s what I need help with.
My specific questions:
How do I take the user directly from the form submission to the chat agent, where it then prompts the user to answer its first question about the form content? Right now I am redirecting the user to a chat trigger node’s URL. I don’t have any other chat interface available beyond what n8n provides. There won’t be Telegram or anything like that; n8n is it. It doesn’t make sense to start with a chat, because that just risks the user taking it off track, and the agent can’t do anything until the form data is complete anyway.
Before redirecting the user to the chat, I create a session ID, save the form data in Postgres keyed by the session ID so it can be retrieved later, and then pass the session ID to the chat via a URL query parameter. What’s the best way to provide the form contents to the agent for review? It’s not clear what my graph needs to look like so operations happen in the right order.
I’ll have more, but these seem to be the most pressing questions. Thanks.
For the first question, since you’re not planning to use Telegram or anything similar, you’ve already answered yourself in the second question.
You’re saving the form data in the database with session ID then redirecting the user to the chat link with that session ID as a parameter.
The next step in the workflow would be to fetch the form data by session ID and then provide it to the agent as part of the context or the prompt.
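As a sketch of that injection step: after the Postgres lookup by session ID, a Code node could compose the agent’s prompt from the stored row, something like this (the row shape and field names are assumptions for illustration).

```javascript
// Build the agent prompt from a row retrieved by session ID.
// formRow.form_data is assumed to be the JSON saved at form submission.
function buildAgentPrompt(formRow) {
  const form = typeof formRow.form_data === 'string'
    ? JSON.parse(formRow.form_data)
    : formRow.form_data;

  // Flag empty fields so the agent knows what to ask about.
  const lines = Object.entries(form).map(([field, value]) =>
    `- ${field}: ${value === '' || value == null ? '(missing)' : value}`);

  return [
    'Review the following form submission.',
    'Ask the user one question at a time about any missing or unclear field.',
    '',
    'Form contents:',
    ...lines,
  ].join('\n');
}
```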
Side note: keep in mind that the link is exposed and potentially vulnerable, since any user could modify the session ID and start a chat with the agent. You may want to secure it.
I guess my current challenge is getting caught up in the particulars of how agents and chats operate, and it seems difficult to make them behave seamlessly with the form. I would like the user experience to be:
User fills out the form
Upon submission, they immediately get a response from the agent on a new page saying it’s processing the form
Questions from the agent appear after the form is processed, and any chat responses the user gives are always in reply to an agent question; there is no open discussion.
Once the agent has enough info to fill in gaps from the form, it combines the content together into the resulting dataset, writes it to the DB (or project management system) and the interaction ends.
I’m struggling to make the chat start this way after the form ends; it’s not at all clear what combination of settings in the chat trigger, agent, and/or response nodes enables this. Right now it just sits there until the user types something.
Where should the query node go to retrieve the saved form content–between the trigger node and the agent? Or should this be a tool node instead?
What triggers the chat to end once the agent thinks it has enough info, so that the workflow can continue? How do I connect the agent to the next step in the workflow?
Also, it appears that I cannot actually pass data to a hosted chat in the URL and have that be accessible downstream from the chat trigger node so that I can retrieve the form with a query. Any ideas how to transition seamlessly from a form to a chat, while also passing session-specific data between them?
Now, if you’re using the n8n Hosted Chat, which I assume, you’ll need to ask the user for the session ID you created in the form; then you can use it to load the form data from the database and use that in the following nodes.
Since you want this with “no open discussion”, you should keep the flow responding through the Respond to Chat node up until the AI agent steps.
Once the AI agent sends a reply, you pass that reply back to the user and continue in the same way.
I think the missing piece here is the Respond to Chat node, along with setting the Chat Trigger node’s Response Mode option to “Using Response Nodes”.
Actually, I’m now embedding the chat in a web page with a webhook per the embedded mode instructions you gave above.
So, I should be able to pass the session ID to the chat now. However, I’m having trouble configuring the chat page, as there’s no entry field showing up for the user to type anything. The prompts coming back from my agent are not correct, either.
What’s really difficult here is testing–there are multiple disconnected workflows and I can’t look at the data in test mode because half my workflow doesn’t run unless the workflow is active. I can’t see what it’s doing when it’s not behaving as expected (which right now is all the time).
Hi @trigrman
Yes, you need to set the workflow to production because there are multiple triggers in the same workflow; this will also ensure everything runs correctly.
Now, from the screenshot you sent, since you’re following the workaround mentioned above, something is still missing: the HTML generation.
in that HTML template you’ll need to set:
webhookUrl (this is the chat URL)
metadata (you can set it manually or automatically)
I’ve also updated the lib links and enabled automatic reading of the metadata.
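For reference, a minimal version of that HTML template based on the documented @n8n/chat embed (the webhook URL is a placeholder; the metadata here just forwards the session ID from the query string):

```html
<!DOCTYPE html>
<html>
<head>
  <link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
</head>
<body>
  <script type="module">
    import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';

    // Read the session ID passed along from the form via the query string.
    const params = new URLSearchParams(window.location.search);

    createChat({
      webhookUrl: 'https://n8n.example.com/webhook/YOUR-CHAT-TRIGGER-ID/chat', // the chat URL
      mode: 'fullscreen',
      metadata: { sessionId: params.get('sessionId') }, // surfaces under metadata in the Chat Trigger
    });
  </script>
</body>
</html>
```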
You can use this workflow directly:
Note (to make the logic clear):
At the end of the form, you send the user to the webhook URL (carrying the query parameters). That webhook generates the HTML page containing the chat; when the chat is triggered, you’ll find those parameters under the metadata.
Actually, I put the HTML directly in the webhook response node, and that works fine. But I tried yours, also, and they behave the same.
However, neither version gives me a working chat, so something is wrong in my chat configuration. The chat window is unusable because, for some reason, there is no user entry field.
Also, I cannot get the agent to do its thing and analyze the text before the user types anything. Is there a way to not prompt the user for input until the agent works on the content first, and then only prompts for input once that’s done? I’m just stuck with the initial prompt from the chat trigger, which frankly I never want the user to see.
I’ll need to prep some things before I can share the workflow, but will do so shortly.
Your suggestion to add the agent to the form workflow is exactly what I’m trying to do. Are you suggesting instead splitting the agent into an analysis part and an interactive part? That might make sense, but I was hoping to test a single agent first to see whether the split was required, and I’ve never gotten it working well enough to find out.
If I were to do that, the interactive agent needs access to the first agent’s analysis of what is missing. And frankly a 3rd agent might then be needed to rebuild the answers into the form structure. Can you suggest a workflow that would do all this? Note that my problems getting the interactive agent working seamlessly with the form entry would still exist.
Here’s my workflow below. Just so that I could test the agent part of the workflow easily, I’ve set the chat to “hosted” and have hardwired the select statement to retrieve a known form entry to feed into the agent. Here are my current questions/issues:
When running in embedded mode within the chat widget, the chat doesn’t give me a user prompt field to type in.
When running just the chat/agent portion of this (in hosted mode), I need to type something to get the chat started. Instead, I want the agent to make the first move. How do I do this?
Is this the right way to inject data into the agent, by putting the Postgres query to retrieve form contents inline from the chat trigger? Is it better as a tool?
I get the same question over and over from the agent, and looking at the data flow, it seems it’s receiving a fresh copy of the form data with every chat interaction. This is partly what leads me to think the Postgres select doesn’t belong there.
Do I need to do the analysis of the form contents prior to the chat beginning? If so, how do I share this context with the agent–via the simple memory? At what point do I need to switch this to Postgres chat memory?
What’s the best way of taking responses to the agent’s questions and compiling that together into cohesive output that can be written to a templated document?
I know that’s lots of questions–I’ve tried asking LLMs these things and I don’t think any of them know n8n well enough to give truly valid answers.
Let’s stick with one method “embedded mode” to keep things clear and avoid confusion with each other : )
I noticed some broken links in your workflow, it looks like you didn’t copy the last sample I uploaded.
As far as I know, this isn’t possible, because the Chat Trigger needs user input to fire. Instead, you can control the initialMessages in the HTML. So you’ll probably need to add an AI node in the form-submission flow, save the AI output in the database, and then inject it again.
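As a sketch of that injection, the webhook that renders the chat page could splice the stored AI output into the createChat options so the agent appears to speak first (the greeting text and the idea of an aiOutput value pulled from the database are assumptions for illustration):

```javascript
// Generate the <script> body for the chat page, seeding initialMessages with
// the pre-computed AI analysis so the agent "speaks first" even though the
// Chat Trigger only fires on user input. All values are placeholders.
function chatPageScript(webhookUrl, sessionId, aiOutput) {
  const options = {
    webhookUrl,
    mode: 'fullscreen',
    metadata: { sessionId },
    initialMessages: ["Thanks, I've reviewed your form.", aiOutput],
  };
  return "import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';\n" +
    `createChat(${JSON.stringify(options, null, 2)});`;
}
```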
It’s fine this way, since you’re just pulling the form data based on the parameter and then loading it into memory.
Yes, that happens because everything isn’t in a single flow: there’s the form, and then the chat that depends on it. You’ll need to add another column to the database (for example, AiOutput) and inject it both into the HTML start messages and into the memory, along with the system and user prompts, so it can be reused in the chat.
I think this will require some careful planning of the database and workflow design.
Initially, you can use the “Chat Memory Manager” to extract all responses and process them in another workflow that outputs a structured result in the format you need.
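For the compile step after the chat, here’s one possible shape of that processing (the message objects with role/text fields are an assumed format, not necessarily what the Chat Memory Manager emits): pair each agent question with the user’s answer and build one structure that can be written to the DB or a templated document.

```javascript
// Pair each AI question with the following human answer to produce a
// structured result. The {role, text} message shape is an assumption.
function compileAnswers(messages) {
  const result = [];
  for (let i = 0; i < messages.length; i++) {
    const m = messages[i];
    if (m.role === 'ai') {
      const next = messages[i + 1];
      result.push({
        question: m.text,
        // A trailing question with no reply gets a null answer.
        answer: next && next.role === 'human' ? next.text : null,
      });
    }
  }
  return result;
}
```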
For now, I just edited the workflow so it works, even though it still needs a lot of optimization and probably better handling of memory and the database.
Just to note: everything is running in production now, so that should remove any confusion with the links. BTW, here’s the form you can test; I’ll keep this link active until you confirm everything’s working for you.
So, I also made my own copy and fixed a bunch of things, and put the URLs back to point to my own machine (production mode). My chat still does not work–I’m not getting a user entry field so I can respond to the chat.
Normally I’m a great troubleshooter, but I find tracing production runs of these workflows difficult: it doesn’t seem to record the progress of an entire run when the workflow has several disconnected flows inside it. I can’t see what’s happening in the chat portion of the workflow at all, for example. Any tips on getting access to the logs there?
The model is overloaded. Please try again later
I think I hit the model limit; I’ve switched to Flash 2.5, so it should work now.
That’s why I shared a live link; it’s actually working with the same workflow I shared above. After you submit the form, you’re automatically redirected to the chat like this:
Yeah, this is really odd. On my own machine I don’t get the “redirecting” message at all; it just jumps straight from the form into the chat, minus the “Type your question” prompt. That portion of the page simply isn’t showing up. I literally just copied your code and changed the URLs; I didn’t change anything else about the chat interaction, so I’m not sure what’s causing the issue. Any ideas?
(Perhaps the missing redirect message is just down to network latency, which is non-existent in my case.)
Anyway, my chat window looks like this when run using the production URLs all around (none are https in my case):
Yes, I updated some of the text fields, but I left the chat workflow you provided unchanged. When I check the Executions tab for the production run, it has logged the Form workflow and the Webhook workflow, but NOT the chat workflow, so I can’t look at that to see what’s happening. I’ve double-checked the URLs and everything is correct, so I’m starting to wonder if the lack of SSL is a contributing factor, since much of the chat interface is simply not rendering.