Issue with Configuring Ollama in n8n Self-hosted AI Starter Kit

Hi everyone,
I’m having trouble setting up Ollama as part of the n8n Self-hosted AI Starter Kit. Despite following the provided instructions, Ollama does not seem to work as expected. Here’s a detailed description of the problem:

Problem Description:
1. After running the Docker containers for the starter kit, the Ollama container is up and running, but I cannot pull any models (e.g., Llama) using ollama pull llama.
2. The error message returned is:

pulling manifest
Error: pull model manifest: file does not exist

3. Running curl -X GET http://localhost:11434/models returns a 404 page not found error (a quick API sanity-check sketch follows this list).
4. ollama list inside the container shows no models available.
5.	I’ve already confirmed that Docker is working properly and other containers in the starter kit (n8n, Qdrant, PostgreSQL) are functioning without issues.
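For reference, this is roughly how I sanity-checked the API from the host. I'm assuming the default port 11434 here; as far as I can tell, Ollama lists local models via /api/tags rather than a bare /models path, which might explain the 404:

```
# Check that the Ollama HTTP API responds at all
curl http://localhost:11434/api/version

# List locally available models (the list endpoint is /api/tags)
curl http://localhost:11434/api/tags
```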

Steps Taken to Troubleshoot:
• Verified network connectivity from inside the Ollama container using curl (successfully reached https://ollama.com).
• Re-pulled the ollama/ollama:latest Docker image and recreated the container.
• Ran the docker compose --profile cpu up command to ensure the kit is running in CPU mode, as I’m using a Mac with Apple Silicon.
• Inspected the logs of the Ollama container (roughly as sketched below), which show the service is running but never loads any models.
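In case it helps, these are the container-level checks I used. The container name ollama is an assumption on my part — check docker compose ps for the actual name in your project:

```
# Find the exact name of the Ollama container in this compose project
docker compose ps

# Follow the container logs while retrying the pull
docker logs -f ollama

# Run the ollama CLI inside the container and see what it reports
docker exec -it ollama ollama list
```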

Expected Behavior:

I expected Ollama to allow me to pull models (e.g., Llama) for local inference without requiring additional configuration.

Current Behavior:

The Ollama service is running but does not provide access to models or allow pulling models.

Additional Information:
• n8n version: 1.70.4
• Database (default: SQLite): PostgreSQL (part of the starter kit)
• n8n EXECUTIONS_PROCESS setting: Default (own)
• Running n8n via: Docker (using the Self-hosted AI Starter Kit)
• Operating system: macOS Ventura, M1 processor

Questions:
• Is there additional configuration needed for Ollama to pull models in this setup?
• Are there compatibility issues with Apple Silicon or macOS in CPU mode?
• Could this be a network or configuration issue specific to the Docker image?

Thanks in advance for your help! Let me know if I need to provide any additional information.

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi,
I’ve updated the missing info, hope it helps.

Hi, let me bump this thread. I really care about finding a solution, but I can’t figure it out myself.

Welcome to the community @AndrzejGolos!

Tip for sharing information

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it inside the code block, i.e. between the pair of triple backticks below. You can also do this by clicking </> (preformatted text) in the editor and pasting in your workflow.

```
<your workflow>
```

The same applies to any JSON output you would like to share with us.

Make sure that you have removed any sensitive information from your workflow and include dummy or pinned data with it!


The easiest way to pull the model is from the Exec tab of the corresponding container in Docker, for example with something like the commands sketched below.

Once pulled this way, the node will have the model listed.
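From a terminal it would be something along these lines — the container name ollama and the model llama3.2 are just examples, so substitute whatever your setup uses:

```
# Pull a model with the ollama CLI inside the running container
docker exec -it ollama ollama pull llama3.2

# Confirm it now shows up
docker exec -it ollama ollama list
```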

Thanks @ihortom !

Hi everyone,

I managed to solve this issue and wanted to share the solution for others who might encounter the same problem.

The root cause was that the starter kit requires explicitly specifying a profile (CPU or GPU) when starting the containers, but I was initially running it without any profile specified.

Here’s how to fix it:

1. First, stop and remove all existing containers (note that the second command targets every container on the machine, not just the starter kit's):

   docker-compose down
   docker rm $(docker ps -a -q)

2. Then, launch the starter kit with the CPU profile explicitly specified:

   docker-compose --profile cpu up -d

3. After doing this, all services, including Ollama, started correctly. You can verify it's working with the request below (see also the fuller verification sketch after this list):

   curl -X POST http://localhost:11434/api/generate -d '{"model": "llama3.2", "prompt":"Hello!"}'
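For completeness, this is roughly how I double-checked everything afterwards. It assumes the Ollama container is named ollama and that you want the llama3.2 model — the generate test in step 3 only works once that model has actually been pulled:

```
# Confirm all services of the stack are up
docker-compose ps

# Pull the model if it isn't there yet, then list what's installed
docker exec -it ollama ollama pull llama3.2
docker exec -it ollama ollama list

# The HTTP API should now report the model under /api/tags
curl http://localhost:11434/api/tags
```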
Hope this helps anyone else encountering similar issues!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.