Can n8n stream responses in real time during the chat process?

The idea is:

When using the “When chat message received” + “AI Agent” + large language model nodes in n8n for chatting, the model’s response should be output dynamically in a streaming manner, similar to a typewriter. Currently, however, the response is only displayed all at once at the end. Addressing this would enhance the user interaction experience.
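To illustrate the requested behavior, here is a minimal sketch of the difference between showing a response all at once and streaming it chunk by chunk. This is not n8n code; the chunk size and delay are arbitrary stand-ins for tokens arriving from a model API.

```python
import sys
import time

def stream_tokens(full_response, chunk_size=4):
    """Yield the response in small chunks, simulating token-by-token
    streaming from a large language model."""
    for i in range(0, len(full_response), chunk_size):
        yield full_response[i:i + chunk_size]

def typewriter_print(full_response):
    """Print each chunk as soon as it 'arrives' instead of waiting for
    the complete response -- the typewriter effect requested above."""
    received = []
    for chunk in stream_tokens(full_response):
        sys.stdout.write(chunk)
        sys.stdout.flush()      # show partial output immediately
        received.append(chunk)
        time.sleep(0.01)        # stand-in for network/model latency
    sys.stdout.write("\n")
    return "".join(received)
```

In a real integration the chunks would come from the model provider’s streaming API (e.g. server-sent events), and the chat UI would append each chunk as it arrives rather than waiting for the full message.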

My use case:

Using the “When chat message received” + “AI Agent” + large language model nodes in n8n for chatting.

I think it would be beneficial to add this because:

Streaming the output during conversations would noticeably improve the user experience.

Any resources to support this?

Are you willing to work on this?

I definitely support this; I’m missing it as well. Depending on the bot’s response length, it can take several seconds before any result is displayed at all.