Resume PDF with Ollama and n8n

Good afternoon, how are you? Here’s the thing: I’m trying to create a workflow in n8n in which I send a PDF link through Insomnia and it returns a summary of the PDF, but I’m having problems with Ollama. Any suggestions?

The version is the most recent one, and it’s running on localhost.

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @Duarte_Palha
if you’re on docker, swap http://localhost:11434 for http://host.docker.internal:11434 in your ollama node. this tells n8n to look at your host machine instead of inside its own container.

sometimes localhost resolves to ::1 (ipv6) but ollama is only listening on 127.0.0.1 (ipv4). i usually just use http://127.0.0.1:11434 to be safe and avoid that “ECONNREFUSED” headache.

ollama defaults to only allowing local connections. might be worth setting OLLAMA_HOST=0.0.0.0 in your system environment variables and restarting ollama so it actually listens on the docker network.
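
if you’re not sure how to set that, here’s roughly what i do (based on the ollama FAQ, adjust to your setup — the systemd service name assumes the standard linux install):

# quick sanity check: does ollama answer locally? /api/tags lists your pulled models
curl http://127.0.0.1:11434/api/tags

# linux (systemd install): add an override so ollama listens on all interfaces
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# macos (desktop app): set the variable for gui apps, then quit and reopen ollama
launchctl setenv OLLAMA_HOST "0.0.0.0"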

your workflow looks okay, but you’ll need to make sure that “code in javascript” node is actually passing the full text chunk to ollama. if the text is too huge, ollama might time out or reject the request entirely.
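
something like this in the code node usually does the trick — just a rough sketch, and the text field name is an assumption, so point it at whatever your pdf-extraction step actually outputs:

// n8n Code node, "Run Once for All Items" mode
// assumes the previous node put the extracted pdf text in item.json.text
const MAX_CHARS = 12000; // rough cap so the prompt fits the model's context window

return $input.all().map(item => {
  const fullText = item.json.text || '';
  return {
    json: {
      text: fullText.slice(0, MAX_CHARS), // truncate so ollama doesn't choke on a huge prompt
      truncated: fullText.length > MAX_CHARS,
    },
  };
});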

Hi @Duarte_Palha, welcome!
Consider using the built-in Ollama node instead of the HTTP node, together with an AI Agent, and set up the Ollama node credentials with this URL: http://host.docker.internal:11434. If you still get an ECONNREFUSED-type error after setting that up, use http://127.0.0.1:11434 instead. I also recommend watching this video:

Also read this if you want more information:

Hey @Duarte_Palha, a few things that might help:

1. Ollama can’t read PDF binary data directly

The most common issue here is sending the raw PDF to Ollama. You need to extract the text first. If you’re using n8n v1.30+, there’s an Extract from File node that handles PDF text extraction natively — add it between your HTTP Request (that downloads the PDF) and the Ollama node.

2. Workflow structure that works

Webhook (receives PDF URL via Insomnia)
  → HTTP Request (downloads PDF, set Response Format to "File")
  → Extract from File (extracts text from PDF binary)
  → Basic LLM Chain + Ollama Chat Model (summarizes)
  → Respond to Webhook (returns result)

3. Use the built-in Ollama nodes instead of raw HTTP

Instead of calling http://localhost:11434/api/generate manually, use the Ollama Chat Model node (under AI > Language Models). It handles streaming, timeouts, and response parsing automatically. Connect it as a sub-node to a Basic LLM Chain node.

For the prompt in the Basic LLM Chain:

Summarize this resume. List key skills, years of experience, and education:

{{ $json.text }}
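
If you do decide to stick with a raw HTTP Request node instead, the body it needs to POST to http://host.docker.internal:11434/api/generate looks roughly like this (the model name is just an example — use whatever you’ve pulled with ollama pull, and the prompt expression assumes the extracted text ends up in $json.text):

{
  "model": "llama3",
  "prompt": "Summarize this resume. List key skills, years of experience, and education:\n\n{{ $json.text }}",
  "stream": false
}

With "stream": false Ollama returns the whole summary at once in the response field, which is easier to handle in a single HTTP Request node than the line-by-line streaming output.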

4. Docker networking

If n8n runs in Docker and Ollama on the host, make sure you started n8n with --add-host=host.docker.internal:host-gateway and set the Ollama credentials URL to http://host.docker.internal:11434 instead of localhost.
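
For reference, a minimal sketch of starting n8n with that flag (port and volume name are just the defaults from the n8n Docker docs — adjust to your setup):

docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  --add-host=host.docker.internal:host-gateway \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n

If you run n8n via docker-compose instead, the equivalent is an extra_hosts entry with "host.docker.internal:host-gateway" on the n8n service.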

5. Timeout settings

Ollama on CPU can be slow for the first request (model loading). If you get timeout errors, go to the Ollama Chat Model node settings and increase the timeout to 120 seconds.

What specific error are you seeing? The screenshot shows “The service refused the connection” which usually means either Ollama isn’t running or the URL/port is wrong from inside Docker.