Built-in solution for async polling with long-running AI workflows?

Hi n8n community,

I’m working with a webhook-based chat interface that integrates with an AI Agent node, and I’m running into browser timeout issues. I’d love to know if n8n has any built-in patterns or nodes to handle this scenario elegantly.

Current Setup:

  • Webhook trigger receives chat messages from a frontend

  • AI Agent node processes the message (can take 30-60+ seconds)

  • “Respond to Webhook” node sends the AI response back to the frontend

The Problem:

Long-running requests routinely hit client-side timeout limits (typically 30-60 seconds, whether enforced by the browser, the frontend's own fetch handling, or a proxy in between), which means users often see a timeout before the AI Agent completes processing, even though the workflow continues running successfully in the background.
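For reference, this is roughly what the failing call looks like from the frontend, as a minimal sketch; the webhook URL, the 30-second limit, and the `reply` field are placeholders, not my actual setup:

```typescript
// Minimal sketch of the current single-request flow (placeholder URL/fields).
// The request has to stay open for the entire AI Agent run, so it aborts if
// the client-side timeout fires first.
async function sendChatMessage(message: string): Promise<string> {
  const response = await fetch("https://my-n8n.example.com/webhook/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
    signal: AbortSignal.timeout(30_000), // typical client-side limit
  });
  if (!response.ok) throw new Error(`Webhook returned ${response.status}`);
  const data = await response.json();
  return data.reply; // whatever the "Respond to Webhook" node sends back
}
```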

What I’m Looking For:

Does n8n have any built-in functionality for handling asynchronous polling patterns? Ideally something that would allow me to:

  1. Immediately return a job/request ID to the client

  2. Continue processing the AI workflow in the background

  3. Provide a separate polling endpoint to check status and retrieve results (see the sketch just below this list)
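
To make the pattern concrete, here is a rough client-side sketch of what I'd like to be able to do. Both webhook paths, the `jobId` field, and the status values are assumptions for illustration, not existing n8n endpoints:

```typescript
// Sketch of the start-then-poll pattern from the frontend's point of view.
// URLs, field names, and status values are invented for illustration.
interface JobStatus {
  status: "pending" | "done" | "error";
  result?: string;
}

// Step 1: kick off the workflow and get a job ID back immediately.
async function startJob(message: string): Promise<string> {
  const res = await fetch("https://my-n8n.example.com/webhook/chat-start", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  const { jobId } = await res.json();
  return jobId;
}

// Step 3: poll a separate endpoint until the background run (step 2) finishes.
async function pollResult(jobId: string, intervalMs = 2000): Promise<string> {
  for (;;) {
    const res = await fetch(
      `https://my-n8n.example.com/webhook/chat-status?jobId=${jobId}`
    );
    const job: JobStatus = await res.json();
    if (job.status === "done") return job.result ?? "";
    if (job.status === "error") throw new Error("AI workflow failed");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

On the n8n side this would presumably mean one webhook that responds immediately with the job ID, the AI Agent running afterwards, and a second webhook that reads the job's state from some shared storage, which is exactly the custom plumbing I'm hoping to avoid.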

I’ve attached a screenshot of my current workflow structure. Before I implement a custom solution with multiple webhooks and external storage, I wanted to check if there’s a more n8n-native approach I might be missing.

Has anyone else solved similar long-running webhook scenarios? Any guidance would be much appreciated!

Thanks in advance for any insights.
Luca

I’m facing the same problem. Did you find a solution?