The idea is:
When using the "When chat message received" + "AI Agent" + large language model nodes in n8n for chatting, the model's response should be streamed dynamically as it is generated, like a typewriter. Currently, however, the response is only displayed all at once after generation completes. Addressing this would significantly improve the interactive chat experience.
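For illustration only, here is a minimal Python sketch (not n8n code; all names are hypothetical) contrasting the current buffered behavior with the requested incremental, typewriter-style display. A streaming LLM API typically yields the response in small chunks, and the difference is whether the UI renders each chunk as it arrives or waits for the full text:

```python
import time


def stream_response(tokens, delay=0.0):
    """Simulate a streaming LLM API: yield the reply one chunk at a time."""
    for token in tokens:
        time.sleep(delay)  # stand-in for network/generation latency
        yield token


def render_typewriter(token_iter):
    """Requested behavior: display each chunk immediately as it arrives."""
    parts = []
    for token in token_iter:
        print(token, end="", flush=True)  # incremental, typewriter-style output
        parts.append(token)
    print()
    return "".join(parts)


def render_buffered(token_iter):
    """Current behavior: collect the whole reply, then show it all at once."""
    full = "".join(token_iter)
    print(full)
    return full


if __name__ == "__main__":
    chunks = ["Hel", "lo, ", "wor", "ld!"]
    render_typewriter(stream_response(chunks))  # text appears piece by piece
    render_buffered(stream_response(chunks))    # text appears only at the end
```

Both renderers produce the same final text; only the perceived latency differs, which is what makes streaming feel more responsive to the user.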
My use case:
Using the "When chat message received" + "AI Agent" + large language model nodes in n8n to build a chat interface.
I think it would be beneficial to add this because:
Implementing a streaming output effect during conversations would noticeably enhance the user experience.