Is there a way to pass parameters, i.e. metadata, to the Chat Trigger when the chat is not embedded on an external page, but uses the hosted chat option?
I know I’m able to pass metadata using the metadata option as described in the topic below. But is there a way to pass metadata using the n8n hosted chat option?
Information on your n8n setup
n8n version: 1.54.0
Database (default: SQLite): SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main): own
Running n8n via (Docker, npm, n8n cloud, desktop app): k8s (Kubernetes)
I have a “Hosted Chat” Chat Trigger which is publicly available, as shown in the screenshot below. When sending a message and checking the execution of the workflow, I can see metadata is passed from the chat to the execution; it is essentially user metadata.
Say this chat is accessed through https://url/abc/chat
I was thinking of passing query parameters, such as https://url/abc/chat?q=123, so I can reuse the same webhook but still be able to pass parameters to it.
One use case would be a Question and Answer Chain plugged into a Vector Store. I want to have many chats, each one connecting to a different Collection Name. Currently the only way is to create many Chat Webhooks, which doesn’t scale well. I don’t believe that should be necessary, since I only want to change a single parameter between chats.
I know what I’m trying to do is doable via external embedded chats, as explained in the referred topic, since you can pass metadata as a parameter, but I’m looking for a way to do it with the n8n hosted chat as well.
Hello @miguel-mconf, I just tried and indeed, query parameters are not passed to the metadata field. This is a feature you can ask for: Create a new topic and select Feature Requests.
That being said, there is a workaround:
Create a first workflow with an embedded Chat Trigger
Create a simple HTTP page with the embedded chat code
Create a second workflow with a Webhook trigger (GET)
Dynamically compute the HTTP page source code based on the query parameters
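For reference, the page returned by the second workflow's GET webhook could look roughly like this. It is only a sketch: the webhook URL is the one from the original question, and the `collection` query parameter name is illustrative, not anything n8n defines.

```html
<!-- Minimal page served by the second workflow's GET webhook.
     It reads the collection name from the page's query string and
     forwards it as chat metadata (the "collection" parameter name
     is an illustrative choice, not an n8n convention). -->
<!DOCTYPE html>
<html>
<head>
  <link href="https://cdn.jsdelivr.net/npm/@n8n/chat/dist/style.css" rel="stylesheet" />
</head>
<body>
  <script type="module">
    import { createChat } from 'https://cdn.jsdelivr.net/npm/@n8n/chat/dist/chat.bundle.es.js';

    // Read ?collection=... from the URL this page was requested with
    const params = new URLSearchParams(window.location.search);

    createChat({
      webhookUrl: 'https://url/abc/chat',  // Chat Trigger webhook of the first workflow
      metadata: {
        // Ends up in the execution's metadata, so the workflow can
        // pick the Vector Store collection from it
        collection: params.get('collection')
      }
    });
  </script>
</body>
</html>
```

In practice, instead of serving a static file, the second workflow would build this HTML in an expression or Code node, substituting the query parameters it received, and return it from the webhook response.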
Hi @LucBerge , thank you very much for your answer. I’ll create the feature request as suggested, and your workaround is going to be very useful for the time being.