Hello everyone,
I have an n8n workflow with the following structure:
- It gets triggered by a webhook.
- It processes the data using the Gemini chat model.
- It sends a response back using the “Respond to Webhook” node.
The workflow generally completes in under a minute. However, the Gemini model sometimes gets stuck and fails to return a response or an error. This causes the workflow to run until it hits the 2-minute timeout limit.
I am looking for a way to handle this specific timeout scenario by sending a custom response to the original webhook.
Here are my constraints:
- I cannot change the AI model (I must use Gemini).
- Increasing the 2-minute timeout is not a viable option, as it is already more than enough time for a successful execution.
I have considered using an “Error Workflow,” but I am unsure if a timeout is considered a trigger for it.
Could someone guide me on the best way to achieve this?
Thank you
Hey @binesh_next !
I think that n8n treats all such conditions as execution failures…
So if your workflow hits the timeout limit in Workflow Settings, that failed execution should trigger your Error Workflow, just like other failures.
Cheers!
Hey @Parintele_Damaskin,
Thank you so much for your fast reply!
I tested the Error Workflow solution, but unfortunately, it doesn’t seem to catch this specific timeout scenario.
It appears that n8n’s built-in execution timeout triggers first, immediately halting the process. Instead of activating the Error Workflow, it sends its own default response with a 500 status code, like this:
{
  "code": 0,
  "message": "The execution was cancelled because it timed out"
}
However, my goal is to intercept this event and respond with our project’s standard response format, which uses a 200 status code:
{
  "code": 408,
  "msg": "TIMEOUT_ERROR",
  "data": null
}
Is there a way to override n8n’s default timeout response or force it to trigger the configured Error Workflow instead of sending its own message?
I’d appreciate your help once more!
I tried a few more things that came to my mind:
1. Create a custom timeout with wait node
I tried creating two parallel branches: one with my AI Agent node and one with a Wait node acting as a timeout. My plan was that both branches would run at the same time, and whichever finished first would send the response.
This failed because n8n executed the branches sequentially: it ran Branch A to completion before starting Branch B.
2. Sub-Workflow Parallelism
After some research, I put the AI Agent node and the Wait node into two separate sub-workflows and called them concurrently from the main workflow.
This did run them in parallel, but both sub-workflows complete in the background, so the “respond with whichever finishes first” logic fails here. I also found no way to cancel the AI Agent sub-workflow if the Wait sub-workflow “wins” the race; the Gemini process would just keep running uselessly in the background.
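For anyone reading along, the race described in the two attempts above can be sketched outside n8n in plain Python. This is a conceptual sketch only; `slow_gemini_call` is a hypothetical stand-in for the real model call, and it reproduces the same limitation the thread describes (the losing side cannot be cancelled, it just keeps running in the background):

```python
import concurrent.futures
import time

# The project's standard timeout payload from the post above.
TIMEOUT_RESPONSE = {"code": 408, "msg": "TIMEOUT_ERROR", "data": None}

def call_with_timeout(fn, timeout_s, fallback):
    """Run fn in a worker thread; return its result, or fallback on timeout.

    Note: on timeout the worker thread keeps running in the background,
    which is exactly the 'cannot cancel Gemini' problem described above.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return fallback
    finally:
        # Don't block waiting for the (possibly still running) worker.
        pool.shutdown(wait=False)

def slow_gemini_call():
    # Hypothetical stand-in for a hanging model call.
    time.sleep(2)
    return {"code": 200, "msg": "OK", "data": "model output"}

fast = call_with_timeout(lambda: {"code": 200, "msg": "OK", "data": "x"}, 1.0, TIMEOUT_RESPONSE)
slow = call_with_timeout(slow_gemini_call, 0.2, TIMEOUT_RESPONSE)
```

Here `fast` gets the real result while `slow` falls back to the 408 payload, even though the slow call is still finishing in a background thread.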
Is there a per-node timeout setting (especially for agent/API nodes) that I’m completely missing? This seems like the simplest solution if it exists.
Or is there any way to override n8n’s default timeout error response on a per-workflow basis?
I feel like I’m very close but missing one key piece of the puzzle. Any guidance or pointers would be massively appreciated!
Thanks in advance.
I am afraid your case has hit a wall…
There is no per-node timeout in the docs for the AI Agent / Gemini nodes…
There is no way for your Respond to Webhook node to “win a race” and cancel Gemini…
And there is no way to override the 500 timeout response or route it through Respond to Webhook…
Maybe, if I am not wrong, you could rely on node-level timeouts where available (HTTP Request) + On Error / Continue On Fail + Stop and Error to deliberately fail and trigger your own error logic.
Cheers!
Yes, it seems I really did hit a wall with the native n8n features!
Just to close the loop for anyone who finds this thread in the future, I did find one functional, non-n8n workaround: using an Nginx reverse proxy.
By setting a proxy_read_timeout , I was able to intercept the hanging request and return our custom response.
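For reference, here is a minimal sketch of that Nginx approach. The listen address, port, and location path are placeholders, not the original setup; the key directives are proxy_read_timeout (to cut the hanging request before n8n’s 2-minute limit) and error_page (to replace the resulting 504 with the custom body):

```nginx
# Proxy in front of the n8n webhook endpoint with a hard 90s read timeout.
location /webhook/ {
    proxy_pass http://127.0.0.1:5678;   # n8n instance (placeholder address)
    proxy_read_timeout 90s;             # fail before n8n's own 2-minute timeout

    # When nginx generates a 504 for the upstream timeout,
    # serve the project's standard response instead.
    error_page 504 = @timeout_fallback;
}

location @timeout_fallback {
    default_type application/json;
    return 200 '{"code": 408, "msg": "TIMEOUT_ERROR", "data": null}';
}
```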
While this solution worked perfectly, my team ultimately decided against it. So, our final decision was a compromise: let the front-end handle this, even though it wasn’t our original preference.
And you’re right about the HTTP Request node’s timeout options, it’s a shame the AI nodes don’t have that feature yet. That would have solved this instantly.
Appreciate your help. Cheers!
@binesh_next, instead of letting the front end solve the issue, you could use the HTTP Request node and send a custom API call to Gemini instead of using the Gemini node. Then you’d be able to use its custom timeout and error-handling features. I think this is what @Parintele_Damaskin meant by mentioning it. I would assume this is an overall cleaner solution for your specific case.
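To illustrate the idea outside n8n, here is a rough Python sketch of calling Gemini directly with a hard client-side timeout. The endpoint, model name, and API-key handling are assumptions based on Google’s public REST API, not taken from the original workflow:

```python
import json
import os
import socket
import urllib.error
import urllib.request

# The project's standard timeout payload from earlier in the thread.
TIMEOUT_RESPONSE = {"code": 408, "msg": "TIMEOUT_ERROR", "data": None}

def ask_gemini(prompt: str, timeout_s: float = 90.0) -> dict:
    """Call the Gemini REST API with a hard client-side timeout.

    On timeout, return the standard 408 payload instead of letting the
    request hang until the workflow-level 2-minute limit.
    """
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        "gemini-1.5-flash:generateContent"  # model name is an assumption
    )
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": os.environ.get("GEMINI_API_KEY", ""),
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout_s) as resp:
            return {"code": 200, "msg": "OK", "data": json.load(resp)}
    except socket.timeout:
        return TIMEOUT_RESPONSE
    except urllib.error.URLError as exc:
        # urllib wraps socket timeouts in URLError on some code paths.
        if isinstance(exc.reason, socket.timeout):
            return TIMEOUT_RESPONSE
        raise
```

The same pattern applies inside the HTTP Request node, which exposes a per-request timeout option and On Error behavior, unlike the AI Agent node.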
@salmansolutions Thank you for clarifying.
We rely heavily on the high-level features of the AI Agent node, specifically its built-in prompt management and the structured output parser. Replicating all that functionality manually within an HTTP Request node would be a significant refactor for us at this stage.
However, this is definitely something I’ll bring up with the team as a long-term solution if our current front-end workaround becomes problematic.
Thank you both for the fantastic advice and for helping me think through this problem!
Hey @binesh_next, as @salmansolutions already explained from my post… there isn’t a smaller, “one-node” change that would give you a hard 90-second cutoff on the AI Agent while preserving all its features, so unfortunately your current plan (stick with the Agent + parser, use a front-end/async workaround for now) is the way to go…
Other, more complex workarounds (if there are any) are out of the question…
Cheers!