Ollama embeddings node does not accept the model, but the same request works via HTTP Request

First, this is closely related to "Error with ollama embedding model", but a bit different.

Describe the problem/error/question

When I set up the Ollama embeddings node, it will not "accept" the model I have loaded for embedding, but when I generate the same query using the HTTP Request node, it works and returns a good reply with the vectors.
Update: for some reason it suddenly accepted the model, so I tested, but it is still not working.

What is the error message (if any)?

Error in sub-node ‘Embeddings Ollama1’

model " nomic-embed-text:latest" not found, try pulling it first

Please share your workflow

Not much of a workflow yet. I'm building out my moving parts, like queries, before going on to more complex stuff. It's just the isolated Ollama embeddings node failing while the HTTP Request node works.

URL:
http://host.docker.internal:11434/api/embed
Headers:

{
  "Content-Type": "application/json",
  "Accept": "*/*",
  "User-Agent": "python-requests/2.32.3"
}

Body (JSON):

{
  "model": "nomic-embed-text:latest",
  "input": "This is the text I want to embed"
}
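For reference, the working HTTP Request node above can be reproduced outside n8n with a few lines of Python. This is a minimal sketch, assuming Ollama's `/api/embed` endpoint is reachable at `host.docker.internal:11434` and that the model has already been pulled (`ollama pull nomic-embed-text`); `build_embed_request` is a hypothetical helper, not part of n8n or Ollama.

```python
import json
import urllib.request

# Assumed Ollama endpoint, matching the HTTP Request node above.
OLLAMA_URL = "http://host.docker.internal:11434/api/embed"


def build_embed_request(model: str, text: str) -> urllib.request.Request:
    """Build the same POST the HTTP Request node sends to Ollama."""
    body = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json", "Accept": "*/*"},
        method="POST",
    )


req = build_embed_request(
    "nomic-embed-text:latest",
    "This is the text I want to embed",
)
# Uncomment to actually call Ollama; the JSON response should contain
# an "embeddings" field with the vectors.
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Comparing the embeddings node's behaviour against this raw request helps confirm the problem is in the node's model selection rather than in Ollama itself.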

Share the output returned by the last node

The node won’t accept the model, so I cannot even test.

Information on your n8n setup

  • n8n version: 1.70.3
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): No idea
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Win11 host, Ubuntu WSL to create containers

Welcome to the community @lamachine !

Tip for sharing information

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it in a code block, that is, between a pair of triple backticks. You can also do this by clicking </> (preformatted text) in the editor and pasting in your workflow.

```
<your workflow>
```

The same applies to any JSON output you would like to share with us.

Make sure that you have removed any sensitive information from your workflow and include dummy or pinned data with it!


See if you can follow this guide showing how the model can easily be added: https://www.youtube.com/watch?v=XQ7wNqbB1x8 (you can start from the 8th minute). This is a general tutorial on using LLMs with n8n and could answer other questions you might have.

Son of a gun, that was exactly what I needed. There was never a drop-down list. I added a couple more models, clicked back to my flow and then into my Ollama embeddings node, and there it was. Thank you so much.

