HTTP Request with proxy returning "Socket Hang Up" sometimes, but only in N8N

I have an HTTP request node that returns “Error - Socket Hang Up” about 1/3 of the time it runs (it runs every minute). It’s using a (paid) proxy service.

I’ve checked the logs with the proxy provider, and they say these requests are being fulfilled without error.

I read around the forums here a bit and tried sending the request (through the proxy) from my server with curl to see if it works that way, which it does. I’ve sent it repeatedly with curl to try to replicate the “Socket Hang Up”, but it never produces an error or fails to complete. So the proxies appear to be working correctly, as the proxy provider reports, and I only encounter the error within n8n.
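For reference, the manual test looked roughly like this (the proxy address, credentials, and target URL here are placeholders, not the real values):

    curl -v \
      --proxy "http://USER:PASS@proxy.example.com:8080" \
      "https://api.example.com/endpoint"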

It only errors some of the time, so I am confident the request and proxy are set up correctly in n8n; otherwise it would error every time.

The other odd thing is that I don’t get this error when making the same HTTP request without the proxy. That points to a proxy problem, but since my manual curl tests through the proxy on the same server never error, I don’t think it is one.

What else could I do to try and debug this?

What is the error message (if any)?

ERROR: socket hang up

Details

Time

3/20/2024, 12:00:28 PM

Item Index: 55

HTTP Code

rejected

Stack

NodeApiError: socket hang up
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:1564:27)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:730:19)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:660:53
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1062:20

Please share your workflow

Share the output returned by the last node

The error posted above.

Information on your n8n setup

  • n8n version: 1.32.2
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux

hello @mmac

I think the HTTP Request node isn’t receiving the answer from the server in a timely manner (i.e. it takes too long to get the response). The HTTP Request node sends the request and then waits for the response, so the issue may be with the proxy itself. It may be helpful to check for ECONNRESET errors on the proxy server.

I don’t think the response is taking too long, as:

(1) When testing the request through the proxy manually with curl on the same server, the response comes back in less than a second; it’s effectively instant. I’ve run it 50+ times trying to replicate the problem (a rough version of that loop is shown below) and have never experienced anything other than an instant response.

(2) As I indicated, there are no such errors in the proxy server’s logs.
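The loop from point (1) looked roughly like this, again with placeholder proxy and URL, using curl’s built-in timing output:

    for i in $(seq 1 50); do
      curl -s -o /dev/null \
        --proxy "http://USER:PASS@proxy.example.com:8080" \
        -w "run $i: HTTP %{http_code} in %{time_total}s\n" \
        "https://api.example.com/endpoint"
    done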

Hm… does the issue happen early on, or only after some number of executions?

Technically, if you have a lot of open connections, you may have trouble opening a new one, but I would expect that to produce a different error.
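If you want to rule that out, you could check how many TCP connections are open on the n8n host while the workflow runs, for example with ss (assuming a Linux host with iproute2 available):

    # Overall socket summary
    ss -s

    # Count established and TIME-WAIT TCP connections
    # (drop the header line before counting)
    ss -tan state established | tail -n +2 | wc -l
    ss -tan state time-wait   | tail -n +2 | wc -l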

This kind of thing is quite hard to reproduce.

Sadly, I do not think your testing and your n8n usage are comparable. Looking at your post, it says it failed on item 55 (of how many, I do not know). So n8n makes a lot of requests in parallel (as the batch size is not defined, it uses the default value of 50) and shortly after that (3 seconds later) starts with the next 50.

So the first 50 go through without a problem, then another 5 as well (as the index is 0-based), and the 6th of the second batch (the 56th overall) then fails.


OK, thank you, this is helpful. When I get the “Socket Hang Up” error it looked to me as though none of the requests were going through, since that is the only output in the UI; it appears as if everything failed from the very first one. I will set up a script that sends the whole thing through with curl every minute and see what happens.
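Something like this is what I have in mind for the test script, mimicking the batching pattern described above (50 requests in parallel, a short pause, then the rest). The proxy, credentials, and URL are placeholders, and the item count is a guess:

    #!/usr/bin/env bash
    # Fire requests in batches of 50, 3 seconds apart,
    # roughly like n8n's default batching behaviour
    TOTAL=100     # adjust to the real number of items per run
    BATCH=50
    PAUSE=3

    for ((i = 1; i <= TOTAL; i++)); do
      curl -s -o /dev/null \
        --proxy "http://USER:PASS@proxy.example.com:8080" \
        -w "request $i: HTTP %{http_code} in %{time_total}s\n" \
        "https://api.example.com/endpoint" &
      # after each full batch, wait for it to finish, then pause
      if (( i % BATCH == 0 )); then
        wait
        sleep "$PAUSE"
      fi
    done
    wait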

You can also use timers to slow the requests down, so there is time to properly close each connection without overwhelming the network, with something like this.

But the timing should be tuned so that the next execution won’t overlap with the current one.
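As a rough sanity check on the overlap (the item count here is an assumption; plug in the real one):

    # Rough overlap check with assumed numbers
    items=100      # items processed per run
    batch=50       # n8n default batch size
    interval=3     # seconds between batches
    batches=$(( (items + batch - 1) / batch ))
    echo "pause per run: ~$(( (batches - 1) * interval ))s plus request time (must stay well under the 60s schedule)"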
