Dockerized n8n times out when calling the savatar101/marker-api:0.3 API, even though the Docker container keeps processing the task

I'm using savatar101/marker-api:0.3 to convert PDFs to Markdown. Deployed in a Docker container, the API handles a single 10-page PDF without issue, but when I process 10 such PDFs concurrently it fails with the following error:
The connection was aborted, perhaps the server is offline [item 0]

Error details (from HTTP Request node):

Error code: ECONNABORTED
Full message: timeout of 300000ms exceeded

Request:

{
  "headers": {
    "accept": "application/json,text/html,application/xhtml+xml,application/xml,text/*;q=0.9, image/*;q=0.8, */*;q=0.7",
    "content-type": "multipart/form-data; boundary=--------------------------823825799420664056145212"
  },
  "method": "POST",
  "uri": "http://host.docker.internal:8000/convert",
  "gzip": true,
  "rejectUnauthorized": true,
  "followRedirect": true,
  "resolveWithFullResponse": true,
  "followAllRedirects": true,
  "timeout": 300000,
  "formData": {
    "_overheadLength": 315,
    "_valueLength": 234315,
    "_valuesToMeasure": [],
    "writable": false,
    "readable": true,
    "dataSize": 0,
    "maxDataSize": 2097152,
    "pauseStreams": true,
    "_released": true,
    "_streams": [],
    "_currentStream": null,
    "_insideLoop": false,
    "_pendingNext": false,
    "_boundary": "--------------------------823825799420664056145212",
    "_events": {},
    "_eventsCount": 3
  },
  "encoding": null,
  "json": false,
  "useStream": true
}

Even after n8n is stopped, the Docker server keeps processing files, sitting at 80% CPU utilization and 35 GB of RAM consumed out of 64 GB.

I appreciate any help you can give
Information on your n8n setup

  • n8n version: 1.74.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default / I don't know
  • Running n8n via: Docker
  • Operating system: Windows 11 24H2

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @Alessio_Lanzillotta, the HTTP-based nodes (which use Axios for transport) expect a response within 5 minutes by default. If the service you contacted does not respond within that timeframe, the request times out.

Thanks! I raised the timeout to 3,000,000 ms and it works now :wink:

It looks like this is a combination of timeouts and resource exhaustion in Docker. Since a single 10-page PDF works fine but 10 at once causes failure, the API is likely struggling with concurrency.

A few things to check:

Batch processing – Instead of sending all 10 PDFs at once, try processing 2-3 at a time. If the API isn’t built for high concurrency, this should prevent overload.
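
For example, outside n8n you could drive the API with a small shell loop that caps concurrency (inside n8n, the Loop Over Items node with a small batch size achieves the same effect). This is a hedged sketch: the ./pdfs directory and the "pdf_file" form field name are assumptions, so check the marker-api docs for the actual field name:

# Send PDFs to the API at most 2 at a time instead of all 10 at once.
ls ./pdfs/*.pdf | xargs -P 2 -I {} \
  curl -sS -X POST "http://localhost:8000/convert" \
    -F "pdf_file=@{}" -o "{}.md"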

Check running Docker containers – Even if n8n is stopped, the API container might still be active, causing high CPU/RAM usage. Run:

docker ps -a

If it’s still running, stop it:

docker stop <container_id>
docker rm <container_id>
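
If several containers are listed, you can narrow docker ps down to containers started from the marker-api image with the standard ancestor filter:

docker ps -a --filter "ancestor=savatar101/marker-api:0.3"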

Lower Docker resource limits – 80% CPU and 35 GB of RAM suggest over-allocation. In Docker Desktop → Settings → Resources, reduce the CPU/RAM available to Docker to prevent excessive load.
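
Alternatively, cap the API container itself when starting it, using the standard --cpus and --memory flags of docker run. A hedged sketch, assuming the container publishes port 8000 as in your setup and needs no extra flags:

# Limit the marker-api container to 4 CPUs and 16 GB of RAM.
docker run -d --name marker-api \
  --cpus="4" --memory="16g" \
  -p 8000:8000 \
  savatar101/marker-api:0.3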

Increase timeout (if needed) – The error shows timeout of 300000ms exceeded, meaning the request took longer than 5 minutes. If the API genuinely needs more time, increase the timeout, but ideally optimize processing instead.
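
Before raising it further, it is worth measuring how long a single conversion actually takes outside n8n. A hedged curl sketch (again, the "pdf_file" field name is an assumption):

# Time one conversion with a generous 30-minute client-side timeout.
time curl --max-time 1800 -X POST "http://localhost:8000/convert" \
  -F "pdf_file=@sample.pdf" -o sample.md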

Check API logs for bottlenecks – Run:

docker logs <container_id>

This should show if the API is choking on large requests, running out of memory, or hitting concurrency limits.
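
Alongside the logs, docker stats shows live per-container CPU and memory usage, which makes it easy to confirm whether this container is the one consuming the 35 GB:

docker stats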

Change n8n execution mode – Right now, it’s using the default (own, main). If workflows are getting stuck, try setting:

EXECUTIONS_PROCESS=own

This runs each workflow in a separate process, preventing n8n from blocking resources.
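
If n8n is started with docker run, the variable can be passed with -e (with Docker Compose it goes under environment:). A minimal sketch using the standard image and port; note that recent n8n v1 releases may have deprecated this setting, so verify against the current docs:

# Start n8n with the execution-mode variable set.
docker run -d --name n8n \
  -e EXECUTIONS_PROCESS=own \
  -p 5678:5678 \
  n8nio/n8n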

Let me know what you find.