S3 socket exhaustion

I’m posting here about an issue I already reported on GitHub (n8n-io/n8n#26968, “S3 binary data: socket pool exhaustion causes workflow hangs after N iterations”), because I’m curious whether other people have encountered it.

Describe the problem/error/question

The problem involves using AWS S3 for binary data storage. I’m working with a client on a workflow that uses the Extract From CSV node, stopping the read early with a max-number-of-lines parameter. We encountered a bizarre bug: after 50 passes through a worker in queue mode, the worker would halt. No error, no crash, nothing.

After a lot of debugging (including reproducing the S3 setup in a local Docker instance of n8n with MinIO), I discovered the issue is S3 socket exhaustion. Several n8n nodes call destroy() on an S3 stream to terminate it early, but this is incorrect: the response body is never fully consumed, so the underlying socket is not cleanly closed and returned to the pool. With a maximum socket count configured (the default is 50), this eventually exhausts the pool, and subsequent S3 calls simply hang waiting for a socket to become available. This is exacerbated by the fact that n8n sets no timeout on S3 operations, so the only recovery is restarting the worker.

The socket exhaustion is a known issue on the AWS side (aws/aws-sdk-js-v3#6691, “S3 GetObjectCommand leaks sockets if the body is never read / add doc links for streaming responses re: socket exhaustion”), and calling destroy() is considered incorrect usage, so it won’t be fixed there.

I am curious if other people have encountered this issue. It manifests as an inexplicable hang in an S3 operation.

(With help from Claude, I have a prototype solution involving wrapping S3 Readable streams with a PassThrough to intercept closure operations and handle them correctly, but I’m not an expert on the n8n codebase, so I have not submitted it.)

Hit the same pattern running high-frequency S3 reads in queue mode. Your diagnosis is correct: destroy() on the S3 Readable doesn’t release the socket back to the pool, and with the default maxSockets: 50 in the AWS SDK HTTP agent, it silently deadlocks. The PassThrough wrapper approach works. You can also bump maxSockets on the S3 client’s HTTP agent as a short-term workaround, or set requestTimeout on the S3 client config so at least the hang becomes a visible error instead of silent. Your GitHub issue is the right path for a proper fix though, since the destroy-vs-consume pattern needs to be addressed in n8n’s core S3 handling.

This is a known pain point and your diagnosis is spot on.

A few things that might help while waiting for a proper fix in n8n core:

1. Increase the max sockets limit - Not a fix, but buys you headroom. You can configure a custom Agent with a higher maxSockets value when initialising the S3 client. Pushes the exhaustion threshold further out.

2. Set a socket timeout - Since n8n doesn’t set one by default, hung sockets wait indefinitely. Adding a requestTimeout and connectionTimeout to your S3 client config means exhausted sockets eventually clean themselves up rather than hanging forever.

3. Your PassThrough wrapper approach - Honestly this sounds like the right fix. Wrapping the S3 Readable in a PassThrough so that closure is handled by fully consuming (or aborting) the response is in line with the AWS guidance that a streaming body must be read to completion before its socket can be reused. Worth submitting as a PR — the n8n team is pretty responsive to well-documented bug fixes.
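To make points 1 and 2 concrete, here is a sketch of what that client configuration can look like with aws-sdk-js-v3. The timeout values are illustrative only, and note that n8n’s built-in S3 binary-data client doesn’t expose these settings, so this applies to custom nodes or a patched build:

```javascript
const { S3Client } = require('@aws-sdk/client-s3');
const { NodeHttpHandler } = require('@smithy/node-http-handler');
const { Agent } = require('node:https');

// Illustrative values only. A larger pool delays exhaustion; the
// timeouts turn a silent hang into a visible, retryable error.
const s3 = new S3Client({
  requestHandler: new NodeHttpHandler({
    httpsAgent: new Agent({ keepAlive: true, maxSockets: 200 }),
    connectionTimeout: 5000, // ms allowed to establish the connection
    requestTimeout: 30000,   // ms of socket inactivity before erroring
  }),
});
```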


On a separate note - if the underlying goal is just getting a hosted URL for files in your workflow, I built a verified community node called Upload to URL that sidesteps S3 entirely.

Binary in, public CDN URL out. Might be worth considering if the S3 complexity is more trouble than it’s worth for your use case.

Happy to help either way.

Great diagnosis. You’re right that the destroy() pattern on S3 Readables doesn’t return sockets to the pool. The PassThrough wrapper approach is solid, and honestly more future-proof than bumping maxSockets or requestTimeout.

If you do submit a PR, consider also auditing how n8n handles stream cleanup elsewhere in the codebase; the same destroy-without-consuming pattern may need fixing in multiple places.

The PassThrough approach (intercepting destroy()) matches how the AWS SDK maintainers recommend handling unread response bodies. You’ve got the fix before n8n does; worth documenting it in the GitHub issue.