Built-in solution for async polling with long-running AI workflows?

Hi n8n community,

I’m working with a webhook-based chat interface that integrates with an AI Agent node, and I’m running into browser timeout issues. I’d love to know if n8n has any built-in patterns or nodes to handle this scenario elegantly.

Current Setup:

  • Webhook trigger receives chat messages from a frontend

  • AI Agent node processes the message (can take 30-60+ seconds)

  • “Respond to Webhook” node sends the AI response back to the frontend

The Problem:

Browsers and intermediate layers (proxies, serverless functions) enforce request timeout limits (typically 30-60 seconds), which means users often see a timeout before the AI Agent finishes, even though the workflow keeps running successfully in the background.

What I’m Looking For:

Does n8n have any built-in functionality for handling asynchronous polling patterns? Ideally something that would allow me to:

  1. Immediately return a job/request ID to the client

  2. Continue processing the AI workflow in the background

  3. Provide a separate polling endpoint to check status and retrieve results
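The three steps above can be sketched in a few lines. This is a minimal illustration, not n8n-specific code: an in-memory `Map` stands in for whatever storage would hold job state, and the 50 ms promise stands in for the 30-60 s AI Agent run.

```javascript
// In-memory job store standing in for external storage (Redis, a DB table, etc.).
const jobs = new Map();
let nextId = 0;

// Step 1: accept the request and return a job ID immediately.
function startJob(runWork) {
  const id = String(++nextId);
  jobs.set(id, { status: "processing", result: null });
  // Step 2: let the slow work continue in the background.
  runWork().then((result) => jobs.set(id, { status: "done", result }));
  return id;
}

// Step 3: a polling endpoint just reads the stored status.
function pollJob(id) {
  return jobs.get(id) ?? { status: "unknown", result: null };
}

// Example: a fake 50 ms "AI" task in place of the real agent.
const id = startJob(
  () => new Promise((resolve) => setTimeout(() => resolve("AI answer"), 50))
);
console.log(pollJob(id).status); // "processing" right away

setTimeout(() => {
  console.log(pollJob(id).status, pollJob(id).result); // "done AI answer"
}, 100);
```

The frontend would call the step-1 endpoint once, then hit the step-3 endpoint on an interval until `status` flips to `done`.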

I’ve attached a screenshot of my current workflow structure. Before I implement a custom solution with multiple webhooks and external storage, I wanted to check if there’s a more n8n-native approach I might be missing.

Has anyone else solved similar long-running webhook scenarios? Any guidance would be much appreciated!

Thanks in advance for any insights.
Luca


I’m facing the same problem, did you find any solution?

Having the same issue. Any thoughts?

Hello! Not sure if you've already solved this; if not, I ran into exactly this yesterday and managed to solve it with help from Lovable. I'm building a RAG chatbot with Lovable as the front-end and n8n as the back-end. Same setup as you described: Lovable sends a POST request to n8n's Webhook node –> Lovable receives a response back from n8n –> the response is shown in Lovable's front-end UI. Where I got stuck was exactly this browser time-out issue, which couldn't be fixed no matter how large Lovable set the timeout window (e.g. 5 minutes, or an Edge function with a 3-minute window).

What Lovable did for me was this:

Webhook callback pattern (recommended for long operations):

    • (After POST request is sent to n8n) n8n immediately responds with “processing started”

    • When done, n8n calls a separate webhook (callback URL end point) with the result

    • Requires implementing a callback endpoint and updating the UI to poll/listen for results
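The callback endpoint in the last bullet can be sketched like this. Names (`handleCallback`, `checkResult`) are illustrative, and a `Map` stands in for the database table; in Lovable this would be an edge function writing to a real table.

```javascript
// Store for finished results, keyed by conversationId.
// Stands in for the database table the callback edge function would write to.
const results = new Map();

// Handler the n8n workflow's final HTTP Request node POSTs to when done.
function handleCallback(body) {
  const { conversationId, message } = JSON.parse(body);
  results.set(conversationId, message);
  return { status: 202 }; // acknowledge receipt
}

// What the UI's poll/listen loop checks until a result appears.
function checkResult(conversationId) {
  return results.has(conversationId)
    ? { done: true, message: results.get(conversationId) }
    : { done: false };
}
```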

Lovable understood the back-end things it needed to change:

Changes Needed on Lovable Side

  1. Create chat-callback edge function to receive n8n’s result

  2. Create database table to store processing status & results

  3. Update frontend to poll/subscribe for results

  4. Update chat-proxy to pass callbackUrl and handle 202 responses
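For step 3 above, the frontend's polling side can be as simple as a bounded loop. This is a hedged sketch: `fetchStatus` is a placeholder for whatever call checks the status endpoint (e.g. a GET against the edge function), and the interval/retry numbers are arbitrary defaults.

```javascript
// Poll until the callback has stored a result for this conversation,
// or give up after maxTries attempts.
async function waitForResult(
  conversationId,
  fetchStatus, // async (id) => ({ done: boolean, message?: string })
  { intervalMs = 2000, maxTries = 60 } = {}
) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetchStatus(conversationId);
    if (res.done) return res.message;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for AI response");
}
```

The chat-proxy's 202 response would hand back the `conversationId`, and the UI calls `waitForResult` with it instead of blocking on the original fetch.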

Below are the changes I had to implement on the n8n side to make the above solution work:

  • Instead of the typical Webhook –> AI Agent (or whatever back-end processing) –> Respond to Webhook chain, you need to send a 'Respond to Webhook' response back to the source immediately (in my case, Lovable), and separately send another POST request whenever your processing is done (see how mine comes out of the AI Agent node)

This is what the ‘Respond to Webhook’ configuration looks like:

Here’s the JSON

{
  "status": "Processing",
  "conversationId": "{{ $json.body.conversationId }}",
  "message": "Your request is being processed. You'll receive a response shortly."
}

As for the 'POST AI Response to Lovable' step, that's just an HTTP Request node:

JSON body I used:

{
  "conversationId": "{{ $('Webhook').item.json.body.conversationId }}",
  "message": "{{ $json.output.replace(/\n/g, '\\n') }}"
}

Hope this helps?


Nice workaround!

P.S

n8n executes branches sequentially (top-to-bottom, left-to-right), not in parallel.
