Send SSE (Server-Sent Events) from the Respond to Webhook node

Is there a way for the Webhook Response node to send SSE, similar to OpenAI?

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @igcorreia! Thanks for reaching out! Are you referring to the Respond to Webhook node or the Webhook node itself? We currently have the SSE Trigger node, which you can set to listen for updates sent by a server.

@Ludwig I assume that he is asking for the ability to send SSEs to clients. Since the Respond to Webhook node only triggers once per execution, it would not be able to provide periodic updates over a long-running connection.

I have not been able to find a node that can send SSEs; however, it would be great to have this functionality. The alternative is for the client to constantly poll the server, which adds unnecessary overhead and is less performant due to the delays between HTTP requests.

@Ludwig on the Respond to Webhook node. I am using the OpenAI API and I would like to send the “realtime” writing to the user. They use SSE. I would like to proxy those SSE events coming from OpenAI and send them on in the same format, or even intercept and tweak them before sending them. @BB9234movwZ4T5c is describing it correctly :slight_smile:
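
Outside of n8n, that kind of pass-through proxy would look roughly like the sketch below. Everything here is an assumption for illustration only: Node 18+ with built-in fetch, a placeholder port, a placeholder OPENAI_API_KEY environment variable, and an illustrative model name.

```typescript
// sse-proxy.ts: rough sketch of an SSE pass-through proxy (not n8n).
// Assumes Node 18+ (built-in fetch); API key, model, and port are placeholders.
import http from "node:http";

http
  .createServer(async (req, res) => {
    // SSE headers toward the browser client.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });

    // Call the OpenAI chat completions endpoint with streaming enabled.
    const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // placeholder key
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo", // illustrative model name
        stream: true,
        messages: [{ role: "user", content: "Hello" }],
      }),
    });

    // Forward each raw "data: {...}" SSE chunk as-is; this loop is also
    // where events could be inspected or tweaked before being sent on.
    if (upstream.body) {
      for await (const chunk of upstream.body) {
        res.write(chunk);
      }
    }
    res.end();
  })
  .listen(3000); // placeholder port
```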

Question title updated.

@igcorreia There are two workarounds that I can think of right now.

To track the user’s original request, you can:

  1. generate an identifier (e.g., UUID), send it with the initial request, and store the identifier in a database (e.g., Redis) along with its status; or
  2. return an identifier (e.g., the n8n workflow ID) to the user once the initial request has been received.

The client can then make one or multiple subsequent requests as follows:

  1. Send HTTP requests to a workflow at an appropriate frequency (e.g., every 2, 5, or 10 seconds). Configure your response content or response code to reflect whether the underlying workflow is still processing, has completed, or has failed. Obviously, if it has completed, return the result.
  2. Send a single HTTP request with a long timeout. The workflow will loop and use the Wait node until the underlying execution has completed. It will then use the Respond to Webhook node to return the result.

Regardless of the identifier approach that you choose, the client can send this identifier with its subsequent request(s). Your status queries can check the database to see if it has been processed or use the n8n API node to see if the workflow is still executing.
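
To illustrate the first option above, a client-side poller might look like the sketch below. The base URL and the /webhook/start and /webhook/status endpoints are made-up examples, and the 5-second interval is arbitrary.

```typescript
// poll-status.ts: sketch of workaround 1 (identifier + periodic status checks).
// BASE_URL and the /webhook/start and /webhook/status paths are made-up examples.
const BASE_URL = "https://n8n.example.com";

async function runAndPoll(prompt: string): Promise<string> {
  // Kick off the long-running workflow; its webhook returns an identifier.
  const startRes = await fetch(`${BASE_URL}/webhook/start`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { id } = await startRes.json();

  // Check the status webhook every 5 seconds until the workflow finishes.
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    const statusRes = await fetch(`${BASE_URL}/webhook/status?id=${id}`);
    const status = await statusRes.json();

    if (status.state === "completed") return status.result;
    if (status.state === "failed") throw new Error("Workflow failed");
    // Otherwise still processing; keep polling.
  }
}

runAndPoll("Summarise this document").then(console.log).catch(console.error);
```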

I appreciate the detailed explanation you provided about how Replicate and Hugging Face work. I am trying my best to avoid using them. Meanwhile, I am in the process of building an intermediate PHP API as an external service. I can complete 99% of what I need using n8n; however, there is one crucial part that I am unable to do with n8n.

As all my future projects require the OpenAI APIs, I think my small PHP API will be heavily used. :rofl:

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.