Fetch Failed Error When Using Ollama Chat Model on Localhost (n8n via npm)

I’m trying to connect n8n (installed via npm) with my local Ollama server running at 127.0.0.1:11434, but I keep getting this error when using the Ollama Chat Model node:

Error: Fetch failed

My setup:

  • n8n installation: npm (running locally)

  • Ollama server: running locally on 127.0.0.1:11434

  • Ollama model: e.g. deepseek-r1:7b

  • Operating system: Windows 11

  • Node.js version: v22.19.0

  • n8n version: 1.112.5 (self-hosted)

Questions:

  • Is there any specific configuration needed when using Ollama locally with n8n installed via npm?

  • Should I explicitly set the Ollama base URL somewhere in n8n’s environment variables? (See the quick check below.)

  • Are there any known limitations or fixes for the Fetch failed error in this setup?
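
Not an n8n-specific setting, but a quick way to narrow this down (a sketch based on the defaults, not an official fix): for a local server the Ollama Chat Model node only needs a Base URL in its Ollama credentials, and you can confirm Ollama itself is reachable before touching n8n at all. From a terminal on the same machine:

curl http://127.0.0.1:11434/api/tags

That should return a JSON list of the models you have pulled. If it fails, the problem is Ollama itself (not running, or bound to another address), not n8n. If it works, try entering http://127.0.0.1:11434 rather than http://localhost:11434 as the Base URL in the credentials and leave the API key empty.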
    

Did you create credentials for it?

I just created an Ollama account and added my API key, but it’s still not working; I’m still getting the fetch failed error. I’m running the Ollama server on Windows.


I don’t use an API key, since both Ollama and n8n are on localhost.

I’ve tried running only my Ollama server and n8n together, but it’s still not working.

After several debugging attempts, I suspect this might be a connection issue.

Could it be that Ollama can’t connect to the server address shown in my terminal when I start n8n?
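
One way to test that suspicion (just a sketch, assuming the default port and the deepseek-r1:7b model from your setup): check that something is actually listening on Ollama’s port, then call Ollama directly, bypassing n8n. From a Windows Command Prompt:

netstat -ano | findstr 11434

curl http://127.0.0.1:11434/api/generate -d "{\"model\": \"deepseek-r1:7b\", \"prompt\": \"hi\", \"stream\": false}"

If netstat shows nothing, Ollama isn’t running or is bound to a different address; if the curl call also fails, the issue is with Ollama or a local firewall rather than the n8n node.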

How did you start Ollama, and how did you download the model?
ollama serve - should start the server
ollama pull <model name here> (e.g. qwen3:1.7b) etc.
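
Two more standard Ollama commands worth running before testing in n8n (nothing n8n-specific here):
ollama list - shows the models you have actually pulled; the model name in the n8n node must match exactly, including the tag
ollama run deepseek-r1:7b - quick interactive test; if the model answers here, the model itself is fine and the problem is the n8n-to-Ollama connection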

Hello there! Nice to meet you. I’m facing the same problem as you. Did you find any solution yet?

Hello there! I have the same setup as yours, but I’m running into the same issue: n8n throws a “fetch failed” error when running the Ollama Chat Model (through the Basic LLM Chain node). Did you face the same issue? If you managed to get past it, could you help me get rid of this error, please? Your response will be much appreciated.

I am still having the same issue; I’m just trying other open-source LLMs while looking for possible solutions. But if you find a way around it, please walk me through it.

Hey @Aptech and @Ayaan_Rahman!

First of all, how do you start Ollama on your localhost? (I simply do ollama run MY_MODEL.)

Then I don’t use any API key since I use it locally.

And in some cases you can use 127.0.0.1:11434 instead of localhost:11434.

Cheers!
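
A possible explanation for the localhost vs. 127.0.0.1 tip (my assumption, not something confirmed in this thread): recent Node.js versions can resolve localhost to the IPv6 address ::1 first, while Ollama listens on IPv4 127.0.0.1, so n8n’s fetch reaches nothing and reports Fetch failed. You can see what your Node.js resolves localhost to with:

node -e "require('dns').lookup('localhost', (err, address) => console.log(address))"

If that prints ::1, setting the Base URL in the Ollama credentials to http://127.0.0.1:11434 sidesteps the problem.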