HTTP Request with n8n Binary File Keeps Timing Out

Describe the problem/error/question

My HTTP Request node, which POSTs a file and a password, always times out when sending a multipart form request to my FastAPI server. I have tested the API endpoint with curl (curl -X POST http://localhost:8000/unlock-pdf -F "file=@$(pwd)/main.py" -F "password=yourpassword" -o unlocked.pdf) and it works as expected. However, when n8n sends the same request, it times out every time. I exported this workflow from my previous Docker Compose instance, but since I installed the self-hosted version in my homelab’s LXC container, it just fails. ChatGPT says: “n8n is sending a paused stream as the file body, and FastAPI is waiting forever for the stream to finish. That’s why you get a timeout instead of an error.”

Information on your n8n setup

  • n8n version: 2.11.4
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Linux my-n8n-instance 6.17.13-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.13-1 (2026-02-10T14:06Z) x86_64 x86_64 x86_64 GNU/Linux
Hi @Ryan_Teh Welcome!
If the file you are pulling can be big, I recommend setting N8N_DEFAULT_BINARY_DATA_MODE=filesystem. In your HTTP Request node, since you are probably expecting a file back, go to Add Option → Response → set Response Format to File. And of course you can always increase the timeout, e.g. to 120000 ms, so that it waits up to 120 seconds. Let me know if this helped.

Hi @Anshul_Namdev, it’s a small file, about 200 KB. I will give N8N_DEFAULT_BINARY_DATA_MODE a try.

Understood, give that a try, and also the HTTP response mode.

I tested N8N_DEFAULT_BINARY_DATA_MODE=filesystem, and it’s still the same. Any suggestions on what else I could look at to figure out the issue?

I have tried adding a debug endpoint in my FastAPI app: the request hangs only when it tries to read the form data. When the API does not read the form body, it works. Also, I must emphasize that the same API code worked previously when I was running n8n with Docker Compose.

The switch from Docker Compose to LXC with npm is key: on LXC, localhost can resolve to ::1 (IPv6) while FastAPI binds on IPv4 only, which causes the TCP connection to open but the binary stream to stall waiting to flush. Try replacing localhost with 127.0.0.1 in your URL first.
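A quick, stdlib-only way to check what localhost actually resolves to on the host (the port here is just an example):

```python
import socket

# List every address "localhost" resolves to for TCP. If an IPv6 entry
# (::1) comes back first while the server binds only 0.0.0.0 (IPv4),
# clients that try IPv6 first can hang or misbehave.
for family, _, _, _, sockaddr in socket.getaddrinfo(
    "localhost", 8000, proto=socket.IPPROTO_TCP
):
    print(socket.AddressFamily(family).name, sockaddr)
```

Run this inside the LXC container where n8n lives; if `AF_INET6` appears before `AF_INET`, that supports the IPv6-first theory.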

To confirm whether it’s the stream itself: add a quick /debug endpoint in FastAPI that does body = await request.body(); return {"size": len(body)} without touching form parsing. If that also hangs, n8n’s stream pipe isn’t closing properly on LXC. If it returns fine, the issue is Starlette’s multipart parser waiting for stream EOF.

@Benjamin_Behrens The request URL uses an IP address instead of localhost, so we know it isn’t a DNS resolution problem.
This is my health endpoint’s code:

import logging

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
app = FastAPI()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    logger.info(f"Incoming request: {request.method} {request.url}")
    response = await call_next(request)
    logger.info(f"Response status: {response.status_code}")
    return response

@app.api_route("/health", methods=["GET", "POST"])
async def debug(request: Request):
    logger.info(f"Health check received: {request.method} {request.url}")
    content_type = request.headers.get("content-type")
    
    try:
        logger.info(f"Logging request body")
        body = await request.body()
        logger.info(f"Request body length: {len(body)}")
    except Exception as e:
        logger.error(f"Error reading request body: {e}")
        return JSONResponse(status_code=400, content={"detail": f"Error reading request body: {e}"})

    # Check if request is form data (only relevant for POST requests with form data)
    if content_type and "form" in content_type:
        try:
            logger.info(f"Logging form keys")
            form_data = await request.form()
            form_keys = list(form_data.keys())
            logger.info(f"Form keys: {form_keys}")
        except Exception as e:
            logger.error(f"Error reading form data: {e}")
            return JSONResponse(status_code=400, content={"detail": f"Error reading form data: {e}"})
    
    response = {
        "headers": dict(request.headers),
        "content_type": content_type
    }
    logger.info(f"Health check response: {response}")
    return response

These are the deployed docker container’s log:

INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:__main__:Incoming request: POST http://192.168.0.4:8000/health
INFO:__main__:Health check received: POST http://192.168.0.4:8000/health
INFO:__main__:Logging request body
ERROR:__main__:Error reading request body: 
INFO:__main__:Response status: 400

It hangs on the “Logging request body” line. So, like you said, most likely n8n’s stream pipe isn’t closing properly on LXC. Are there any solutions for this?

Hi @Ryan_Teh

Could you share your Node.js version? Since you’re running n8n via npm, it uses whatever Node.js is on your system, unlike Docker, which bundles a tested version. A mismatched Node.js version can cause issues with how n8n streams binary data in HTTP requests, which would explain what you’re seeing.

There’s a documented pattern of n8n HTTP requests failing in Proxmox LXCs running Node 18, while the same requests work via curl. Since n8n now requires Node 20.19+, and many Proxmox LXC scripts install Node 18 by default, that mismatch might be the cause.

I ran into something similar with file uploads. n8n streams binary data, and if the receiving endpoint isn’t properly reading from the stream, it times out waiting. A couple of things to try: make sure the backend is actually consuming the stream (not just holding it), try adding the Content-Length header manually if possible, and check if bumping the timeout helps (sometimes the HTTP node default is too aggressive). It’s also worth testing whether reducing the file size helps, to narrow down whether it’s a stream-handling issue or just pure payload size. Let me know what you find.

@houda_ben I am running node v24.14.0

@Benjamin_Behrens, adding Content-Length didn’t really help. I don’t think file size is the issue here; the file is only 200 KB.

I created a Flask app with the same API endpoint to test it out. These are the logs produced:

INFO:main:Incoming request: POST http://192.168.0.4:8000/unlock-pdf
INFO:main:Start unlocking PDF
ERROR:main:Exception on /unlock-pdf [POST]
Traceback (most recent call last):
  File "/app/venv/lib/python3.11/site-packages/werkzeug/serving.py", line 110, in read_chunk_len
    _len = int(line.strip(), 16)
           ^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 16: ''

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/venv/lib/python3.11/site-packages/flask/app.py", line 1511, in wsgi_app
    response = self.full_dispatch_request()
  File "/app/venv/lib/python3.11/site-packages/flask/app.py", line 919, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/app/venv/lib/python3.11/site-packages/flask/app.py", line 917, in full_dispatch_request
    rv = self.dispatch_request()
  File "/app/venv/lib/python3.11/site-packages/flask/app.py", line 902, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/app/./main.py", line 47, in unlock_pdf
    upload = request.files.get("file")
  File "/app/venv/lib/python3.11/site-packages/werkzeug/utils.py", line 100, in __get__
    value = self.fget(obj)  # type: ignore
  File "/app/venv/lib/python3.11/site-packages/werkzeug/wrappers/request.py", line 497, in files
    self._load_form_data()
  File "/app/venv/lib/python3.11/site-packages/flask/wrappers.py", line 198, in _load_form_data
    super()._load_form_data()
  File "/app/venv/lib/python3.11/site-packages/werkzeug/wrappers/request.py", line 271, in _load_form_data
    data = parser.parse(
  File "/app/venv/lib/python3.11/site-packages/werkzeug/formparser.py", line 242, in parse
    return parse_func(stream, mimetype, content_length, options)
  File "/app/venv/lib/python3.11/site-packages/werkzeug/formparser.py", line 267, in _parse_multipart
    form, files = parser.parse(stream, boundary, content_length)
  File "/app/venv/lib/python3.11/site-packages/werkzeug/formparser.py", line 368, in parse
    for data in _chunk_iter(stream.read, self.buffer_size):
  File "/app/venv/lib/python3.11/site-packages/werkzeug/formparser.py", line 423, in _chunk_iter
    data = read(size)
  File "/app/venv/lib/python3.11/site-packages/werkzeug/serving.py", line 123, in readinto
    self._len = self.read_chunk_len()
  File "/app/venv/lib/python3.11/site-packages/werkzeug/serving.py", line 112, in read_chunk_len
    raise OSError("Invalid chunk header") from e
OSError: Invalid chunk header
INFO:main:Response status: 500

Does this prove the API did not receive an EOF from n8n?
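For context on that traceback: chunked transfer encoding prefixes each chunk with its size as a hexadecimal line, and a well-formed stream ends with a final size-zero chunk. The size-line parsing can be sketched roughly like this (a simplified illustration, not Werkzeug’s actual code); an empty line where a chunk header should be, i.e. a stream cut off before the terminating chunk, produces exactly the ValueError in your log:

```python
def read_chunk_len(line: str) -> int:
    # A chunked-encoding size line holds the chunk length in hex:
    # "c8\r\n" announces a 200-byte chunk, and "0\r\n" marks the
    # terminating chunk of a well-formed stream.
    return int(line.strip(), 16)

print(read_chunk_len("c8\r\n"))  # 200
print(read_chunk_len("0\r\n"))   # 0 -> the stream terminator
try:
    read_chunk_len("")  # stream ended early: no size line at all
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 16: ''
```

So yes, the empty size line strongly suggests the server never saw a proper end of the chunked body from n8n.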

I gave up on the npm version, uninstalled it, and used the Docker version. The Docker version worked as expected with the exact same API.

@Ryan_Teh — the fact that the Docker version works fine with identical code confirms this is an npm + LXC combo issue. The “Invalid chunk header” error from Flask proves n8n’s stream wasn’t closing properly, exactly what we suspected. Thanks for testing both and narrowing it down; this will help others running npm n8n on Proxmox/LXC realize they need the bundled Docker version for file streaming to work reliably.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.