I’m working with a webhook-based chat interface that integrates with an AI Agent node, and I’m running into browser timeout issues. I’d love to know if n8n has any built-in patterns or nodes to handle this scenario elegantly.
Current Setup:
Webhook trigger receives chat messages from a frontend
AI Agent node processes the message (can take 30-60+ seconds)
“Respond to Webhook” node sends the AI response back to the frontend
The Problem:
Browsers and intermediate proxies typically enforce request timeouts (often 30-60 seconds), which means users often see a timeout before the AI Agent completes processing, even though the workflow continues running successfully in the background.
What I’m Looking For:
Does n8n have any built-in functionality for handling asynchronous polling patterns? Ideally something that would allow me to:
Immediately return a job/request ID to the client
Continue processing the AI workflow in the background
Provide a separate polling endpoint to check status and retrieve results
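The three steps above can be sketched as a tiny in-memory job store shared between a "start" endpoint and a "status" endpoint. Everything here (the function names, the `status`/`result` fields) is illustrative, not an n8n API; in a real setup the store would be a database or cache that both workflows can reach.

```typescript
// Minimal in-memory job store illustrating the async polling pattern.

type JobStatus = "processing" | "done";

interface Job {
  id: string;
  status: JobStatus;
  result?: string;
}

const jobs = new Map<string, Job>();

// Called by the "start" endpoint: register the job and return its ID
// immediately, before the slow AI work begins.
function createJob(): string {
  const id = Math.random().toString(36).slice(2, 10);
  jobs.set(id, { id, status: "processing" });
  return id;
}

// Called by the background workflow once the AI Agent finishes.
function completeJob(id: string, result: string): void {
  const job = jobs.get(id);
  if (job) {
    job.status = "done";
    job.result = result;
  }
}

// Called by the polling endpoint: report status (and result when ready).
function getJob(id: string): Job | undefined {
  return jobs.get(id);
}
```

The client keeps the ID from step 1 and hits the status endpoint until `status` flips to `done`.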
I’ve attached a screenshot of my current workflow structure. Before I implement a custom solution with multiple webhooks and external storage, I wanted to check if there’s a more n8n-native approach I might be missing.
Has anyone else solved similar long-running webhook scenarios? Any guidance would be much appreciated!
Hello! Not sure if you have already solved this. If you haven’t: I literally encountered this yesterday and managed to solve it with help from Lovable. I am building a RAG chatbot with Lovable as the front-end and n8n as the back-end, the same construct as what you described: Lovable sends a POST request to n8n’s Webhook node –> Lovable receives a response back from n8n –> the response is shown on Lovable’s front-end UI. However, where I got stuck was exactly the browser time-out issue, which couldn’t be solved no matter how large Lovable set the timeout window (e.g. 5 minutes, or an Edge Function with a 3-minute window).
What Lovable did for me was this:
Webhook callback pattern (recommended for long operations):
(After the POST request is sent to n8n) n8n immediately responds with “processing started”
When done, n8n calls a separate webhook (callback URL end point) with the result
Requires implementing a callback endpoint and updating the UI to poll/listen for results
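From the client’s side, the handshake above looks like the sketch below. The `sendFn` parameter stands in for `fetch` so the flow can be exercised without a network, and the field names (`callbackUrl`, `jobId`) and the 202 status are assumptions, not something n8n mandates:

```typescript
// Sketch of the callback pattern from the client's point of view:
// POST the request plus a callback URL, expect an immediate 202,
// and keep only a job ID — the real result arrives later at callbackUrl.

interface StartResponse {
  status: number; // 202 = accepted, processing started
  jobId: string;
}

async function startLongOperation(
  message: string,
  callbackUrl: string,
  sendFn: (body: object) => Promise<StartResponse>
): Promise<string> {
  const res = await sendFn({ message, callbackUrl });
  if (res.status !== 202) {
    throw new Error(`expected 202 Accepted, got ${res.status}`);
  }
  return res.jobId;
}
```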
Lovable worked out the back-end changes it needed to make:
Changes Needed on Lovable Side
Create chat-callback edge function to receive n8n’s result
Create database table to store processing status & results
Update frontend to poll/subscribe for results
Update chat-proxy to pass callbackUrl and handle 202 responses
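The “poll/subscribe for results” step can be sketched as a simple retry loop. Here `checkFn` stands in for a fetch to the status endpoint, and the `{ status, result }` shape is an assumption to match the store the callback writes into:

```typescript
// Hypothetical frontend polling loop: keep checking job status until
// it's done or we give up.

async function pollForResult(
  jobId: string,
  checkFn: (id: string) => Promise<{ status: string; result?: string }>,
  maxAttempts = 30,
  intervalMs = 2000
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status, result } = await checkFn(jobId);
    if (status === "done" && result !== undefined) return result;
    // Wait before the next poll so we don't hammer the endpoint.
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`job ${jobId} did not finish after ${maxAttempts} polls`);
}
```

A realtime subscription (e.g. on the database table holding the results) avoids the polling interval entirely, but the loop above is the simplest thing that works.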
Below are the changes I had to implement on the n8n side to make the above solution work:
Instead of the typical Webhook –> AI Agent / whatever back-end processing –> Respond to Webhook chain, you need to send a ‘Respond to Webhook’ response back to the source immediately (in my case, Lovable), and then make a separate POST request whenever your processing is done (mine comes out of the AI Agent).
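The body that the final POST (an HTTP Request node after the AI Agent) sends back to the callback URL can be as small as this; the field names are assumptions, so match them to whatever your callback endpoint expects:

```typescript
// Sketch of the payload n8n POSTs to the callback URL once the
// AI Agent finishes. jobId lets the callback endpoint match the
// result to the original request.

interface CallbackPayload {
  jobId: string;
  status: "done";
  result: string;
}

function buildCallbackPayload(jobId: string, aiOutput: string): CallbackPayload {
  return { jobId, status: "done", result: aiOutput };
}
```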