Support for streaming in the HTTP Request node and OpenAI node

The idea is:

I would like to suggest adding streaming support to the HTTP Request node in n8n. Streaming support would greatly improve the experience of using n8n with APIs that return data incrementally, such as the OpenAI ChatGPT API.
I have tried the OpenAI node, but it doesn't seem to support streaming. If I use a Code node instead, the HTTP request libraries I would need are not available, so that approach doesn't work either.
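For context, OpenAI's streaming endpoint (with `stream: true`) returns server-sent events, so a streaming-capable node would need to parse chunks like the ones below. This is only a minimal sketch of that parsing, assuming OpenAI's documented SSE format; `parseSseChunk` is a hypothetical helper for illustration, not part of n8n:

```javascript
// Sketch: extract partial tokens from one SSE chunk as sent by
// OpenAI's chat completions API when stream: true is set.
// parseSseChunk is a hypothetical name, not an n8n or OpenAI API.
function parseSseChunk(chunk) {
  const tokens = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;       // SSE payload lines begin with "data: "
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') break;                // OpenAI marks end of stream with [DONE]
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) tokens.push(delta);                  // each event carries a fragment of the reply
  }
  return tokens;
}
```

A streaming node could emit each fragment as soon as it arrives instead of buffering the whole response, which is exactly what the use case below needs.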

My use case:

To create a ChatGPT bot that can respond in real time, showing partial output before the full response is complete. This avoids having to wait for the entire output before seeing any content.

I think it would be beneficial to add this because:

I hope that both the HTTP Request node and the OpenAI node can support streaming, so that we can use APIs that stream their responses and handle larger outputs incrementally.

Any resources to support this?


PS: English is not my native language, so please let me know if there is anything in my suggestion that is unclear.