Cannot fetch llama3.2 model from Docker

Describe the problem/error/question

I have n8n running in a Docker container, and I have two versions of a Llama LLM in Docker.

I installed the 1st LLM model this way:

First I pulled the Ollama image: `docker pull ollama/ollama:latest`

Then I ran the container in CPU mode: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`

Then I started the model: `docker exec -it ollama ollama run llama3`

This model runs well. In n8n, once I define the connection with base URL `http://host.docker.internal:11434`, I see the green message "Connection tested successfully". When I check `http://localhost:11434/v1/models` in the browser, I get the following JSON: `{"object":"list","data":[{"id":"llama3:latest","object":"model","created":1760460367,"owned_by":"library"}]}`
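As a side note, this kind of `/v1/models` check can also be scripted. A minimal sketch (the `model_ids` and `fetch_models` helpers are my own names, not part of n8n or Ollama; the sample payload is the response shown above):

```python
import json
from urllib.request import urlopen

def model_ids(payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-compatible /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]

def fetch_models(base_url: str) -> list[str]:
    """Query <base_url>/models and return the advertised model ids."""
    with urlopen(base_url.rstrip("/") + "/models") as resp:
        return model_ids(json.load(resp))

# Sample response from the Ollama container described above:
sample = {"object": "list",
          "data": [{"id": "llama3:latest", "object": "model",
                    "created": 1760460367, "owned_by": "library"}]}
print(model_ids(sample))  # ['llama3:latest']
```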

When I add the AI Agent in n8n and connect it to this model, it works fine: with this connection I can see the model in the dropdown list and I can use it in my n8n workflow.

I installed the 2nd LLM model this way:

In Docker Desktop I enabled the new Model Runner feature for running LLM models:

Then, in the Models section, I installed llama3.2 from Docker Hub.

The model works fine in Docker and I can interact with it:

When I check `http://localhost:12434/engines/llama.cpp/v1/models` in the browser, I get the JSON response: `{"object":"list","data":[{"id":"ai/llama3.2:latest","object":"model","created":1742916473,"owned_by":"docker"}]}`

All good so far. Now I make my move inside n8n. I define a new credential (connection) and tried the following base URLs: `http://host.docker.internal:12434/engines/llama.cpp/v1`, `http://host.docker.internal:12434`, and `http://model-runner.docker.internal/`. Even though I receive the message "Connection tested successfully",

when I use this connection in the Chat Model node, the model does not appear in the dropdown list, so I can't call it from my n8n workflow.
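My guess at what is going on (an assumption, not something verified in the n8n source): the credential test may only reach the host, while the model dropdown requests `<base URL>/models`, so a base URL without the `/engines/llama.cpp/v1` prefix resolves to a path the Model Runner does not serve. A small sketch of how the paths combine (the `endpoint` helper is hypothetical):

```python
def endpoint(base_url: str, path: str) -> str:
    """Join a credential base URL with an API path, avoiding double slashes."""
    return base_url.rstrip("/") + "/" + path.lstrip("/")

# With the full Model Runner prefix, the models endpoint resolves correctly:
print(endpoint("http://host.docker.internal:12434/engines/llama.cpp/v1", "models"))
# http://host.docker.internal:12434/engines/llama.cpp/v1/models

# With only host and port, the client requests a path that is not served:
print(endpoint("http://host.docker.internal:12434", "models"))
# http://host.docker.internal:12434/models
```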

Am I missing something?

What is the error message (if any)?

Information on your n8n setup

  • n8n version:
    1.113.3 (Self Hosted)
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker Desktop
  • Operating system: Windows 11

I found the way to use it: instead of the Ollama Chat Model node, you have to use the OpenAI Chat Model node. Then, in the credentials, the base URL has to be: `http://model-runner.docker.internal/engines/llama.cpp/v1`

Then n8n is able to detect the chat model and interact with it.
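For anyone who wants to test the same endpoint outside n8n, here is a minimal sketch of an OpenAI-style chat completion request against the Model Runner (the `build_chat_request` helper is my own name; note that `model-runner.docker.internal` only resolves from inside a container, so from the Windows host you would use `http://localhost:12434/engines/llama.cpp/v1` instead):

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://model-runner.docker.internal/engines/llama.cpp/v1"

def build_chat_request(base_url: str, model: str, prompt: str) -> Request:
    """Build an OpenAI-style chat completion request for the given endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(BASE_URL, "ai/llama3.2", "Hello!")
print(req.full_url)
# http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions

# Sending it requires the Model Runner endpoint to be reachable, e.g.:
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```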

