Server goes down while running complex workflow

Describe the problem/error/question

When I run my workflow, which handles a lot of data (about 200 PDF files, over 10 OpenAI API calls, and more), it takes about 2 hours to complete, since each OpenAI execution takes about 10 minutes given the split I have. I keep getting "Connection lost" and unknown errors while executing my workflow.

What is the error message (if any)?

The connection was aborted, perhaps the server is offline [item 4]

Please share your workflow

It wouldn't let me share it (it's over 34,000 characters), but this is the problem area.

Share the output returned by the last node

The connection was aborted, perhaps the server is offline [item 4]

Here it is after 68 minutes: [Imgur screenshot]

Information on your n8n setup

  • n8n version: 1.80 (I just upgraded to 1.83)
  • Running n8n via: n8n cloud

Based on the error message, it seems the OpenAI server returned an error, not your workflow. Do you mind sharing a screenshot of the actual error?

Below the error details:

Error details

From OpenAI
Error code: ECONNABORTED
Full message: timeout of 300000ms exceeded

Other info
Item Index: 7
Node type: @n8n/n8n-nodes-langchain.openAi
Node version: 1.8 (Latest)
n8n version: 1.83.2 (Cloud)
Time: 3/19/2025, 12:59:46 PM

Stack trace

NodeApiError: The connection was aborted, perhaps the server is offline
    at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/utils/request-helper-functions.js:991:19)
    at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/node-execution-context/utils/request-helper-functions.js:1147:20)
    at ExecuteContext.apiRequest (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vendors/OpenAi/transport/index.js:22:12)
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vendors/OpenAi/actions/text/message.operation.js:230:21)
    at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vendors/OpenAi/actions/router.js:75:34)
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/vendors/OpenAi/OpenAi.node.js:16:16)
    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:681:27)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:913:51
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:1246:20

Based on that, it seems OpenAI failed to respond within 300 seconds (300000 ms, i.e. 5 minutes). My guess would be that it detected too many requests coming from your n8n instance's IP and throttled you for some time.
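If throttling or transient timeouts are indeed the cause, one general mitigation (outside n8n's built-in retry option) is to retry the call with exponential backoff. Here's a minimal, generic sketch in Python; `flaky_call` and `with_backoff` are illustrative names I made up, not anything from n8n or the OpenAI SDK:

```python
# Generic retry-with-exponential-backoff wrapper (illustrative sketch).
import time

def with_backoff(fn, retries=5, base_delay=1.0):
    """Call fn(); on a timeout-like failure, wait and retry,
    doubling the delay after each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky call that times out twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("timeout of 300000ms exceeded")
    return "ok"

print(with_backoff(flaky_call, base_delay=0.01))  # prints "ok" on the 3rd attempt
```

In n8n itself, the equivalent lever is the node's "Retry On Fail" setting plus splitting the input into smaller batches so each request stays well under the timeout.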

Any solutions ?

I’d recommend first confirming with OpenAI support whether that’s indeed the real issue. What I shared above is a guess; I cannot see why their server returned a timeout, so only they can answer that. Once it’s confirmed to be the same issue, we can work towards a solution. If OpenAI says it’s a different issue, we’ll adapt accordingly.

Basically, I solved this issue by hosting n8n myself. In n8n Cloud, I think they enforce some kind of timeout, which you can deactivate in the self-hosted version.
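For anyone going the self-hosted route: a minimal sketch of such a setup with Docker, using n8n's documented execution-timeout environment variables. The specific values here are illustrative assumptions, not a recommendation:

```shell
# Self-hosted n8n via Docker (config sketch, values are illustrative).
# EXECUTIONS_TIMEOUT=-1 disables the global execution timeout;
# EXECUTIONS_TIMEOUT_MAX caps what individual workflows may set (seconds).
docker run -d --name n8n \
  -p 5678:5678 \
  -e EXECUTIONS_TIMEOUT=-1 \
  -e EXECUTIONS_TIMEOUT_MAX=14400 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Note this governs n8n's own workflow timeout; the 300000 ms request timeout in the error above comes from the HTTP request to OpenAI, so long-running calls may still need batching or retries.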

