I’m trying to connect n8n (installed via npm) to my local Ollama server running at 127.0.0.1:11434, but I keep getting this error when using the Ollama Chat Model node:
Error: Fetch failed
My setup:
n8n installation: npm (running locally)
Ollama server: running locally on 127.0.0.1:11434
Ollama model: e.g. deepseek-r1:7b
Operating system: Windows 11
Node.js version: v22.19.0
n8n version: 1.112.5 (self-hosted)
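To rule out a plain networking problem before anything else, this is the small standalone check I plan to run against Ollama’s /api/tags endpoint, using the same built-in fetch that n8n relies on under the hood. The base URL matches my setup above; the file name is just my own, and it should run with something like npx tsx check-ollama.ts:

```typescript
// check-ollama.ts - quick reachability test for the local Ollama server.
// Requires Node 18+ (global fetch), which n8n already needs anyway.

const OLLAMA_BASE_URL = "http://127.0.0.1:11434"; // same value I put in the n8n credential

async function main(): Promise<void> {
  // /api/tags lists the locally pulled models; if this call fails,
  // the fetch inside the Ollama Chat Model node will fail too.
  const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama responded with HTTP ${res.status}`);
  }
  const body = (await res.json()) as { models: Array<{ name: string }> };
  console.log("Ollama is reachable. Models:", body.models.map((m) => m.name));
}

main().catch((err) => {
  // A "TypeError: fetch failed" here usually means nothing is listening on
  // that address/port, or a proxy/firewall is blocking the connection.
  console.error(err);
  process.exit(1);
});
```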
Questions:
Is there any specific configuration needed when using Ollama locally with n8n installed via npm?
Should I explicitly set the Ollama base URL somewhere in n8n’s environment variables?
Are there any known limitations or fixes for the “fetch failed” error in this setup? (I’ve sketched the kind of request I’d test directly against the server right after this list.)
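To make the base-URL question concrete, this is roughly what I’d expect the Ollama Chat Model node to end up sending, and what I intend to test directly against the server outside n8n. The file name, helper name, and prompt are just placeholders of mine; only the base URL and model come from my setup above:

```typescript
// chat-check.ts - minimal non-streaming chat request straight to Ollama,
// approximating what I assume the Ollama Chat Model node does internally.

const OLLAMA_BASE_URL = "http://127.0.0.1:11434"; // adjust if your server differs
const MODEL = "deepseek-r1:7b";                   // the model I pulled locally

async function chatOnce(prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_BASE_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      stream: false, // single JSON response instead of a stream of chunks
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  }
  const data = (await res.json()) as { message: { content: string } };
  return data.message.content;
}

chatOnce("Say hello in one short sentence.")
  .then((answer) => console.log("Model replied:", answer))
  .catch((err) => console.error("Request failed:", err));
```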
I just created an Ollama account and added my API key, but it’s still not working — I’m still getting the “fetch failed” error. I’m running the Ollama server on Windows.
Hello there! I have the same setup as yours, and I’m running into the same issue: n8n throws a “fetch failed” error when running the Ollama Chat Model (through the Basic LLM Chain node). Did you run into this as well? If you’ve found a way around it, could you help me get rid of this error, please? Your response will be much appreciated.
I’m still having the same issue. For now I’m just trying other open-source LLMs while looking for possible solutions… but if you find a way around it, please walk me through it.