Issue: [ERROR: model "illama3.2:latest" not found, try pulling it first]

## Describe the problem/error/question

We followed the GitHub repository for the starter kit (https://github.com/n8n-io/self-hosted-ai-starter-kit) and managed to run it on localhost. We created a local n8n account and enabled the demo workflow. When we tested the workflow, we got the error message below.

## What is the error message (if any)?

[ERROR: model "illama3.2:latest" not found, try pulling it first]

## Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

```json
{
  "meta": {
    "instanceId": "558d88703fb65b2d0e44613bc35916258b0f0bf983c5d4730c00c424b77ca36a"
  },
  "nodes": [
    {
      "parameters": {
        "model": "=illama3.2:latest",
        "options": {}
      },
      "id": "3dee878b-d748-4829-ac0a-cfd6705d31e5",
      "name": "Ollama Chat Model",
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "typeVersion": 1,
      "position": [
        900,
        560
      ],
      "credentials": {
        "ollamaApi": {
          "id": "xHuYe0MDGOs9IpBW",
          "name": "Local Ollama service"
        }
      }
    }
  ],
  "connections": {},
  "pinData": {}
}
```

## Share the output returned by the last node

NA

## Information on your n8n setup
- n8n version: 1.65.2
- Database (default: SQLite): NA
- n8n EXECUTIONS_PROCESS setting (default: own, main): NA
- Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
- Operating system: Win 11

Maybe the download of the model has not finished yet (it takes a couple of minutes) or it got aborted.
Could you check within the Ollama container (enter it with docker exec) whether any models are available?
Command: ollama list
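
Assuming the container is named ollama, as in the starter kit's compose file, the check would look like this:

```bash
# List the models available inside the Ollama container.
# "ollama" is the assumed container name from the starter kit's compose file.
docker exec -it ollama ollama list
```

If llama3.2:latest is missing from the output, the model was never pulled successfully.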

Otherwise it could be that the base URL in the n8n node's Ollama credentials is pointing to your host instead of the container.
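
One way to verify this, assuming the n8n container is named n8n, that both containers share the compose network, and that the n8n image ships BusyBox wget:

```bash
# From inside the n8n container, query Ollama's model list endpoint.
# "ollama" here is the compose service name; 11434 is Ollama's default port.
# http://localhost:11434 would point at the n8n container itself, not Ollama.
docker exec n8n wget -qO- http://ollama:11434/api/tags
```

If this returns a JSON list of models, the base URL in the Ollama credentials should be http://ollama:11434.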

Hi @octionic,

Thanks for this.

In the YAML, I added an extra model: nomic-embed-text:latest

I did check the Ollama container (entered it with docker exec) and ran ollama list.

I can only find the above model, not the other one.

How do I get this model, then?

Thanks

Then most probably the initial pull failed.
You can manually install the model inside the container using this command: ollama pull llama3.2:latest
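
Run from the host, again assuming the container is named ollama, that would be:

```bash
# Pull the missing model inside the Ollama container, then confirm it is listed.
# "ollama" is the assumed container name from the starter kit's compose file.
docker exec -it ollama ollama pull llama3.2:latest
docker exec -it ollama ollama list
```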

By the way, your error indicates a typo: you specified illama3.2 instead of llama3.2 in the node's model field. Fixing that should resolve the original problem.
