Claude 4.5 on n8n Cloud fails with a 300000 ms (5 min) timeout — how to increase execution time?

Hey everyone :waving_hand:
I’m using n8n Cloud (v1.115.3) and running a workflow with the Anthropic “Message a Model” node (@n8n/n8n-nodes-langchain.anthropic).
My runs are large text-processing jobs that take a while to complete. They previously worked fine, but now I consistently get this error after 90 seconds, 3 minutes, or 5 minutes:
```json
{
  "errorMessage": "The connection was aborted, perhaps the server is offline",
  "errorDetails": {
    "rawErrorMessage": [
      "timeout of 300000ms exceeded"
    ]
  },
  "n8nDetails": {
    "nodeName": "Message a model1",
    "nodeType": "@n8n/n8n-nodes-langchain.anthropic",
    "nodeVersion": 1,
    "resource": "text",
    "operation": "message",
    "itemIndex": 0,
    "time": "28/10/2025, 17:11:35",
    "n8nVersion": "1.115.3 (Cloud)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeOperationError: The connection was aborted, perhaps the server is offline",
      "    at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_fc553bfe732254ec5207074cf9e2ceb7/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/Anthropic/actions/router.ts:56:10)",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_fc553bfe732254ec5207074cf9e2ceb7/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/Anthropic/Anthropic.node.ts:15:10)",
      "    at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1093:8)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1274:11)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1676:27",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2292:11"
    ]
  }
}
```
I know Anthropic models (like Claude 4.5 Sonnet) support long generations, and this used to run longer before timing out, so I suspect n8n's HTTP request timeout (300,000 ms) is cutting it off.
However, I can’t find any setting in the Message a Model node or workflow configuration to adjust this or enable streaming.
Here are the available options in the node:

  • Include Merged Response
  • Code Execution
  • Web Search
  • Web Search Max Uses
  • Web Search Allowed Domains
  • Web Search Blocked Domains
  • Output Randomness (Temperature, Top P, Top K)
  • Max Tool Calls Iterations
No mention of streaming or timeout control.
Questions:

  • Is there any way on n8n Cloud to increase or bypass the 300 s timeout for Anthropic nodes?
  • Is streaming supported in the @n8n/n8n-nodes-langchain.anthropic implementation, and if so, how can I enable it?
  • If not, is there a way to wrap this call (e.g., via a Function or HTTP Request node) to keep the socket alive longer?
Thanks in advance :folded_hands:
— Chris


On n8n Cloud, long-running jobs (over 1-2 minutes) are likely to fail due to platform timeouts. The best practice is to use an asynchronous pattern: trigger the job, return immediately, and poll for completion.

If your workflow does not respond within that window, the request will fail with a 524 (Cloudflare timeout) error.

Or…

If you need to process very large jobs synchronously, consider self-hosting n8n, where you can control and increase the execution and request timeouts.
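The trigger-and-poll pattern above can be sketched in plain JavaScript (the language of n8n Code nodes). This is a minimal illustration, not an n8n-specific API: `startJob` and `checkJob` are hypothetical stand-ins for whatever asynchronous backend you call (for example, Anthropic's Message Batches endpoints), simulated here so the pattern itself is visible. In practice on n8n Cloud, the polling step would live in a separate, scheduled workflow so that no single execution runs long enough to hit the platform timeout.

```javascript
// Simulated async backend: the job reports "completed" on the third check.
// In a real setup these would be HTTP calls to your long-running service.
function makeFakeBackend() {
  let checks = 0;
  return {
    // Kicks off the job and returns immediately with an id.
    startJob: async () => ({ jobId: 'job-123' }),
    // Reports current status; completes after three status checks.
    checkJob: async (jobId) => {
      checks += 1;
      return checks >= 3
        ? { status: 'completed', result: 'full dossier text' }
        : { status: 'in_progress' };
    },
  };
}

// Generic poller: start the job, then repeatedly check its status
// until it completes (or give up after maxAttempts polls).
async function runWithPolling(backend, { intervalMs = 5000, maxAttempts = 50 } = {}) {
  const { jobId } = await backend.startJob();
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const state = await backend.checkJob(jobId);
    if (state.status === 'completed') return state.result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish within ${maxAttempts} polls`);
}

// Usage:
// const result = await runWithPolling(makeFakeBackend(), { intervalMs: 1000 });
```

The key point is that `startJob` returns right away; only the cheap, fast status checks happen inside each workflow execution, so the slow generation never blocks a single request.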

Hey, thanks a lot for the clarification :folded_hands:

That actually makes sense now. I’ve been hitting the same 300-second ceiling pretty consistently.
Unfortunately, self-hosting isn’t really an option for me. I tried running n8n on Hostinger with Docker, but every time I updated or restarted the container it broke the environment. I’m not a developer, so keeping that setup stable just isn’t realistic for me long-term.

That’s why I was hopeful when the AI Agent node added a “streaming” option — I’ve got it enabled in my agent, but it doesn’t seem to actually stream. The run still waits for the full completion and then times out if the generation takes too long (around 10–15 k tokens).

So I’m wondering:

  • Has anyone managed to get streaming to actually work with the AI Agent node?

  • Does anyone know how to get this on n8n’s radar or flag it so the Cloud team might take a look?

This workflow is a big part of my project — it’s generating full legal dossiers — so even a temporary workaround or a Cloud-specific fix would be massively helpful.

I get that most of you self-host and Cloud probably isn’t your main focus — but if anyone knows how to surface this to the Cloud folks, that’d mean a lot.

Thanks again for the help :folded_hands:

Really appreciate any tips or ways to bump this where it’ll be seen.

Ah, I just realised what you meant by this part:

“On n8n Cloud, long-running jobs (over 1–2 minutes) are likely to fail due to platform timeouts. The best practice is to use an asynchronous pattern: trigger the job, return immediately, and poll for completion.”

Looking into that now — thanks for pointing it out.
Just to make sure I understand: are we talking about using a Webhook node here?

How would that work in practice?
Would the Webhook node need to sit before the AI Agent (to trigger it asynchronously), or after it somehow?
After doesn’t quite make sense to me since the agent only sends data once it’s done, right?

Thanks, mate :folded_hands:
I am already on a version that supports streaming. I’m able to enable it. Hmm….


OK, if you have any other questions, feel free to create a new topic.

If any of the answers fits your needs, please mark it as solved.

:slight_smile: