Intermittent ECONNRESET / socket hang up errors in HTTP Request node on Google Cloud Run (Docker)

Hi team,

I’m facing intermittent ECONNRESET / socket hang up errors when using the HTTP Request node.

Problem description

I am seeing frequent HTTP Request node failures that:

  • Occur intermittently (not on every execution)

  • Happen across multiple workflows

  • Affect different HTTP Request nodes each time (not always the same node)

  • Occur roughly once per day per workflow

The errors mostly happen in HTTP nodes where I call service APIs, for example:

  • Uploading files to Google Cloud Storage

  • Calling my own Cloud Run services via HTTP

Below is the error message:

The connection to the server was closed unexpectedly, perhaps it is offline. You can retry the request immediately or wait and retry later.

{
  "errorMessage": "The connection to the server was closed unexpectedly, perhaps it is offline. You can retry the request immediately or wait and retry later.",
  "errorDetails": {
    "rawErrorMessage": [
      "socket hang up",
      "socket hang up"
    ],
    "httpCode": "ECONNRESET"
  },
  "n8nDetails": {
    "nodeName": "Upload Renamed PDF to Same Folder",
    "nodeType": "n8n-nodes-base.httpRequest",
    "nodeVersion": 4.2,
    "itemIndex": 0,
    "time": "12/17/2025, 9:33:22 AM",
    "n8nVersion": "1.50.1 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: The connection to the server was closed unexpectedly, perhaps it is offline. You can retry the request immediately or wait and retry later.",
      "    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:1641:33)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:725:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:673:51",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1085:20"
    ]
  }
}

Below is one representative HTTP node where this error occurred (similar errors happen in other HTTP nodes as well):

This node uploads a PDF file to a GCS bucket using the JSON API.

Snapshot of what it looks like: [screenshot of the HTTP Request node configuration]
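For context, the upload the node performs corresponds roughly to the following request against the GCS JSON API media upload endpoint. This is only a sketch of the request shape; the bucket and object names below are placeholders, not the actual ones from my workflow:

```javascript
// Placeholder bucket/object names; the media upload endpoint itself is the
// real GCS JSON API one (uploadType=media sends the raw file as the body).
const bucket = 'my-bucket';
const objectName = 'renamed.pdf';

const url = `https://storage.googleapis.com/upload/storage/v1/b/${bucket}/o` +
            `?uploadType=media&name=${encodeURIComponent(objectName)}`;

// The HTTP Request node sends the binary PDF as the request body with
// headers along these lines (token comes from the Google credential):
const headers = {
  Authorization: 'Bearer <access-token>',
  'Content-Type': 'application/pdf',
};

console.log(url);
```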

n8n setup

  • n8n version: 1.50.1

  • Database: PostgreSQL (Cloud SQL)

  • N8N_EXECUTIONS_PROCESS: own (default)

  • Operating system: Linux (Google Cloud Run)

Hosting & networking setup

  • Hosting: Google Cloud Run

  • Region: us-central1

  • Running via: Docker

  • Serverless VPC Access: Enabled

  • VPC connector: run-to-sql-connector

  • All egress traffic routed through VPC

  • Ingress: All (public HTTPS endpoint)

Additional notes

  • No explicit proxy is configured in n8n

  • HTTPS is used for all HTTP requests

  • Retrying usually succeeds

  • Error appears unrelated to request payload size or specific API

Any guidance would be appreciated. Thanks in advance!

Reg.

Hi @devuser2

This is a common issue in serverless environments like Cloud Run when using a VPC Connector. It usually happens because the network layer silently closes idle TCP connections while n8n/Node.js still tries to reuse them.

Recommended Solutions:

  • Enable “Retry on Fail” (best fix): Since you mentioned retries usually succeed, the most effective solution is to open the Settings tab of your HTTP Request node and enable Retry on Fail. Set it to 2 or 3 attempts with a 5-second interval.

  • VPC Connector Limits: Check if your VPC Connector is hitting its throughput limits. If it scales up or reaches max capacity, it can drop existing sockets, causing ECONNRESET.

  • Increase node timeout: Under the node’s Options, add the Timeout parameter and set it to a higher value (e.g., 60,000 ms) to prevent the local side from closing the socket too early.
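The retry behavior amounts to the pattern below. This is a sketch of the semantics, not n8n internals; the `withRetry` name is illustrative:

```javascript
// Retry-on-fail sketch: up to maxTries attempts with a fixed wait between
// them, mirroring the node settings (e.g. 3 attempts, 5000 ms interval).
async function withRetry(fn, maxTries = 3, waitMs = 5000) {
  let lastError;
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      return await fn(); // success: return immediately
    } catch (err) {
      lastError = err;
      if (attempt < maxTries) {
        // wait before the next attempt
        await new Promise((resolve) => setTimeout(resolve, waitMs));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

A transient ECONNRESET on the first attempt then simply succeeds on the second, which matches the behavior you observed when retrying manually.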

Because this is intermittent and happens across different services, it is likely a networking “blip” between the VPC and Cloud Run. Implementing the Retry logic is the standard way to make these workflows resilient.

If you found my answer helpful, please like and accept it.


Hi @Websensepro

I have added the retries and increased the node timeout. I’ve been monitoring for a few days and it seems to be working fine so far, though I’ll keep an eye on it. Apart from that, the solution you shared seems to be the only viable option, so I will accept it. Thank you for your reply!

Reg.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.