Support for Stream in HTTP Request node and OpenAI node

The idea is:

I would like to suggest adding streaming support to the HTTP Request node in n8n. Streaming support would greatly improve the experience of using n8n with APIs that stream data, such as ChatGPT.
I have tried using the ChatGPT node, but it doesn't seem to support streaming. If I use a Code node instead, the HTTP request libraries are unavailable there, so that approach doesn't work either.

My use case:

To create a ChatGPT bot that can provide responses in real time, even before the full output is complete. This avoids having to wait for the entire response before seeing any content.

I think it would be beneficial to add this because:

I hope that both the HTTP Request node and the OpenAI node can support streaming, so that we can use APIs that stream their responses and handle larger datasets.
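For context, here is roughly what consuming an OpenAI streaming response involves. With `stream: true`, the API sends Server-Sent Events where each `data:` line carries a JSON chunk with a partial `delta`, and the stream ends with `data: [DONE]`. The sketch below (not n8n code, just an illustration of the wire format a streaming-capable node would need to parse) extracts the partial content tokens from such a stream:

```javascript
// Sketch: parse the SSE format used by the OpenAI chat completions API
// when `stream: true` is set. Each event line looks like:
//   data: {"choices":[{"delta":{"content":"Hi"}}]}
// and the stream is terminated by:
//   data: [DONE]
function extractDeltas(sseText) {
  const deltas = [];
  for (const line of sseText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blank lines / comments
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break;            // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const content = chunk.choices?.[0]?.delta?.content;
    if (content) deltas.push(content);          // each partial token as it arrives
  }
  return deltas;
}
```

In a real integration, these deltas would be emitted to the chat UI as they arrive rather than collected into an array; the point is that each chunk is usable immediately, without waiting for the full completion.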

Any resources to support this?

ChatGPT

PS: English is not my native language, so please let me know if there is anything in my suggestion that is unclear.

Hello, I have exactly the same streaming problem as you. Can you tell me how you did it?

This topic has also been a major issue for me. There hasn't been any news on it for months, and I'm sure many users have shifted to other solutions on the AI side. This doesn't seem too complex for the n8n developers to implement; if that's the case, I hope they will consider it.

Did you find a workaround since?

I don’t think a workaround would be easy to find. With so many chatbot builders on the market, we need a built-in way in n8n to stream assistant responses in chat.