Ollama models not showing up

I’m following the self-hosted AI starter kit setup here: GitHub - n8n-io/self-hosted-ai-starter-kit: The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows.

When I add an Ollama Chat Model node, it appears to connect to the Ollama instance successfully. However, it only displays one model in the drop-down, even though I have several models installed.

The call to http://localhost:5678/rest/dynamic-node-parameters/options returns:

{"data":[{"name":"llama3.2:latest","value":"llama3.2:latest"}]}

However, the call to the Ollama API at http://localhost:11434/api/tags returns three models:

{
    "models": [
        {
            "name": "llama3.2:latest",
            "model": "llama3.2:latest",
            "modified_at": "2024-11-29T04:42:09.592648665Z",
            "size": 2019393189,
            "digest": "a80c4f17acd55265feec403c7aef86be0c25983ab279d83f3bcd3abbcb5b8b72",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "llama",
                "families": [
                    "llama"
                ],
                "parameter_size": "3.2B",
                "quantization_level": "Q4_K_M"
            }
        },
        {
            "name": "nomic-embed-text:latest",
            "model": "nomic-embed-text:latest",
            "modified_at": "2024-11-29T04:42:09.972708857Z",
            "size": 274302450,
            "digest": "0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "nomic-bert",
                "families": [
                    "nomic-bert"
                ],
                "parameter_size": "137M",
                "quantization_level": "F16"
            }
        },
        {
            "name": "mistral:latest",
            "model": "mistral:latest",
            "modified_at": "2024-11-28T23:51:11.525340501Z",
            "size": 4113301824,
            "digest": "f974a74358d62a017b37c6f424fcdf2744ca02926c4f952513ddf474b2fa5091",
            "details": {
                "parent_model": "",
                "format": "gguf",
                "family": "llama",
                "families": [
                    "llama"
                ],
                "parameter_size": "7.2B",
                "quantization_level": "Q4_0"
            }
        }
    ]
}
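To illustrate the mismatch, the two responses above can be compared directly. This is just a sanity check using the sample JSON from this post, not n8n’s actual code:

```python
# Compare the models Ollama reports (/api/tags) against the options
# n8n's dropdown endpoint returned (sample data from this thread).
ollama_tags = {
    "models": [
        {"name": "llama3.2:latest"},
        {"name": "nomic-embed-text:latest"},
        {"name": "mistral:latest"},
    ]
}
n8n_options = {"data": [{"name": "llama3.2:latest", "value": "llama3.2:latest"}]}

installed = {m["name"] for m in ollama_tags["models"]}
shown = {o["value"] for o in n8n_options["data"]}
missing = installed - shown
print(sorted(missing))  # → ['mistral:latest', 'nomic-embed-text:latest']
```

So two of the three installed models never make it into the drop-down, even though Ollama itself reports them.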

How do I get these other models to show up?

No error is displayed.

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.69.2
  • Database: Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via: Docker
  • Operating system: Windows 11 (host)

It looks like your topic is missing some important information. Could you provide the following if applicable.

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Welcome to the community @czielin !

Tip for sharing information

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it in the code block, i.e. between the pairs of triple backticks. You can also click </> (preformatted text) in the editor and paste your workflow there.

```
<your workflow>
```

The same applies to any JSON output you would like to share with us.

Make sure that you have removed any sensitive information from your workflow and include dummy or pinned data with it!


Here’s how I installed additional Ollama models after the initial setup:

  1. Locate the model you want to use via Ollama
  2. Get the model name
  3. If using Docker Desktop, open the Ollama container and click on the Exec tab
  4. Run the command `ollama pull <model_name>`

Once downloaded, it will be available in the Ollama model drop-down.
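The same steps can be run from a terminal instead of the Docker Desktop Exec tab. A minimal sketch, assuming the container is named `ollama` (adjust to whatever name your compose setup uses):

```shell
# Pull an additional model inside the running Ollama container.
# "ollama" is the assumed container name; check `docker ps` if unsure.
docker exec -it ollama ollama pull mistral:latest

# Verify the model now appears in the Ollama API's model list.
curl http://localhost:11434/api/tags
```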

Thanks. This is exactly what I did. The addition of the models appears to have been successful (as seen in the ollama API call included in my original post). However, I do not see the additional models in the n8n UI.

I’ve tried the “refresh list” option under the three-dot menu. I’ve tried deleting and re-adding the credentials. I’ve tried creating an entirely new workflow. No success.

Sorry, it should’ve been the Embeddings Ollama node I shared. From the docs, I was expecting the nomic-embed-text:latest model to show up as an option in the Model drop-down there.

I got an automated message asking me to reply here if the problem isn’t solved. It is not. I haven’t found a solution for this yet.

Not sure what you are doing wrong; this method works for me just fine. Here’s another example for you, showing how it’s done step by step: https://youtu.be/XQ7wNqbB1x8.

You can start watching from around the 8-minute mark, where this specific technique is shown.

Thanks @ihortom.

The method outlined in the video is exactly what I did. I even tried it on two different host machines.

As you can see in my previous post, the additional model is installed and Ollama is returning it in the API call that n8n appears to be using. However, the n8n UI is never updated.

I see that the Docker image was recently updated. Is it possible this is a bug that has since been fixed? Is it worth spending the time to wipe out my install and start fresh?