Error in Ollama Embedding

Describe the problem/error/question

Hi,
I'm receiving a "fetch failed" error with Ollama embeddings. This question has been asked before, and the suggested solution was to run Ollama in Docker instead of locally, as in my setup. The issue is that Ollama in Docker will not utilize the MacBook GPU and will fall back to the CPU.

Therefore I'm looking for a solution that lets me keep running Ollama locally so it can benefit from the GPU.
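For context, this is the kind of configuration I would expect to work, assuming Docker Desktop's `host.docker.internal` alias and Ollama's default port 11434 (the container name `n8n` below is just a placeholder for my actual container):

```shell
# Assumption: n8n runs in Docker Desktop on macOS, Ollama runs natively on the host.

# 1. Make Ollama listen on all interfaces so the container can reach it
#    (by default it binds to 127.0.0.1 only):
OLLAMA_HOST=0.0.0.0 ollama serve

# 2. From inside the n8n container, check that the host is reachable via
#    Docker Desktop's host.docker.internal alias:
docker exec -it n8n sh -c "wget -qO- http://host.docker.internal:11434/api/tags"

# 3. In the n8n Ollama credentials, set the Base URL to:
#    http://host.docker.internal:11434
```

With this I would hope the n8n container talks to the native Ollama process, which should still use the Mac's GPU, but I still get "fetch failed".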

What is the error message (if any)?

fetch failed

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.86.1
  • Database (default: SQLite): Supabase
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS Sequoia