Error with Ollama embedding model

Describe the problem/error/question

I am currently having a problem with the embedding portion of my workflow. I am getting only a generic 'fetch failed' error, with no additional supporting details.

What is the error message (if any)?

Digging into the container logs, I retrieved the following error message, but it offers little clarity without reading the code.

2024-10-29 18:42:18 fetch failed
2024-10-29 18:42:18 TypeError: fetch failed
2024-10-29 18:42:18 at node:internal/deps/undici/undici:13185:13
2024-10-29 18:42:18 at post (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.9c897541.cjs:114:20)
2024-10-29 18:42:18 at Ollama.embed (/usr/local/lib/node_modules/n8n/node_modules/ollama/dist/shared/ollama.9c897541.cjs:396:22)
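
For reference, undici's generic 'fetch failed' TypeError usually wraps the real network error (ECONNREFUSED, a DNS failure, etc.) in its cause property, which the n8n UI does not surface. Below is a minimal TypeScript sketch to reproduce the embed call outside n8n and log that cause; the host and model name are placeholders, not my exact settings:

import { Ollama } from "ollama";

// Assumed base URL: Ollama's default port 11434, reached from a Docker
// Desktop container via the host.docker.internal alias. Substitute the
// base URL your n8n credentials actually use.
const ollama = new Ollama({ host: "http://host.docker.internal:11434" });

try {
  const res = await ollama.embed({
    model: "nomic-embed-text", // placeholder model name; use your own
    input: "A short test string.",
  });
  console.log("embedding length:", res.embeddings[0].length);
} catch (err) {
  // The generic TypeError hides the underlying network error in `cause`.
  console.error("cause:", (err as Error & { cause?: unknown }).cause);
}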

Please share your workflow

The workflow is as follows:

I am using the same embedding node in another segment of the workflow, which interfaces with OpenWebUI to query the Qdrant vector DB. That segment returns with no error, though there is no content to return.
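
To sanity-check whether the insert side ever wrote anything, Qdrant's REST API can report a collection's point count directly. A minimal sketch, assuming Qdrant's default port 6333 and a placeholder collection name:

// GET /collections/{name} returns collection info, including points_count.
// "books" is a placeholder; use the collection your workflow targets.
const res = await fetch("http://localhost:6333/collections/books");
const body = await res.json();
console.log("points in collection:", body.result?.points_count);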

Additional notes:
I am currently embedding the text of 1984 by George Orwell, which is roughly 585 KB as a text file.
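
If payload size matters here, note that the document loader normally splits a file like this into chunks before embedding, so Ollama should only ever see chunk-sized requests. A rough TypeScript sketch of that splitting, with assumed sizes, just to illustrate the order of magnitude:

// Naive character splitter with overlap, loosely mirroring what a
// recursive character text splitter produces (sizes are assumptions).
function splitText(text: string, chunkSize = 1000, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// A ~585 KB file at ~1,000 characters per chunk is roughly 650 chunks,
// each of which becomes one embed request against Ollama.
console.log(splitText("x".repeat(585_000)).length);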

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.64.2 (yes, I know it is out of date, but only slightly)
  • Database: Qdrant is the vector DB in question; the Postgres connection for chat history is working flawlessly.
  • File changes in a directory are detected, which triggers the above workflow.
  • Running n8n via Docker Desktop; once development is complete, the workflow will be migrated to an Unraid server, also running under Docker (see the connectivity sketch after this list).
  • Operating system: Windows 11, although everything runs inside Docker.
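
A common cause of exactly this kind of 'fetch failed' in a Docker Desktop setup is that localhost inside the n8n container refers to the container itself, not the Windows host, so an Ollama base URL of http://localhost:11434 is unreachable. Here is a quick connectivity check (runnable with Node 18+, where fetch is built in); host.docker.internal is Docker Desktop's alias for the host machine, and the port is Ollama's default:

// GET /api/tags is a lightweight Ollama endpoint that lists installed models.
// Both hosts below are assumptions; test whichever base URLs are plausible.
for (const host of ["http://localhost:11434", "http://host.docker.internal:11434"]) {
  try {
    const res = await fetch(host + "/api/tags");
    console.log(host, "->", res.status);
  } catch (err) {
    console.log(host, "-> unreachable:", (err as Error & { cause?: unknown }).cause);
  }
}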

Please advise as to any additional information I can provide to help diagnose this issue. I have been banging my head against the wall for days.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @Spazmagi

Thanks for posting here and welcome to the community! :tada:

How did you configure your Ollama credentials? Also, can you check that your Ollama server is running in the same container as n8n?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.