Finish local setup on Mac

Hi,
I am relatively new to n8n, but I want to start on a local machine, which is a Mac (not my preferred OS); this Mac mini is just for running the local AI. I tried to install Docker and all components through the self-hosted-ai-starter-kit from GitHub, but Ollama is not in Docker, and it cannot be reached, along with the downloaded LLMs, from the n8n workflows. I cannot get it to work, and I would pay somebody to help me install everything correctly so that I can start working on the workflows. Please let me know if you can help!

@rtissler The self-hosted-ai-starter-kit comes with Ollama integrated. If you still want someone to install it, I can do that for a fee.

That is the problem: Ollama will not install and is not visible in Docker… Let me know if you can also help with talking to a local LLM over HTTP, and send me your price via PM.
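(For context on the HTTP part: once Ollama is running and reachable, talking to a local LLM over HTTP is just a plain REST call to Ollama's API. A minimal sketch, assuming the default port 11434 and an already-pulled model named llama3.2:)

    # one-shot completion against the local Ollama HTTP API
    curl http://localhost:11434/api/generate \
      -d '{"model": "llama3.2", "prompt": "Say hello in one short sentence.", "stream": false}'

The same request works from an n8n HTTP Request node; only the host changes depending on where Ollama runs relative to the n8n container.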


@rtissler,

Things you need to check:

1) Your Mac must be running macOS 11 Big Sur or later.

2) Follow this guide (a rough command sketch of its steps is below):
   GitHub - n8n-io/self-hosted-ai-starter-kit: The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows.

3) Follow this guide as well, keeping the highlighted points in mind.

Once all steps are followed, it should look something like this (note: I am running Docker on Windows 11).
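For reference, the basic sequence from that repo is short; this is a rough sketch from memory, so double-check the README for the exact profile names and the required .env variables:

    # get the starter kit and configure it
    git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
    cd self-hosted-ai-starter-kit
    # set your own secrets in the .env file as described in the README

    # start the stack; the profile you pass decides how Ollama runs
    docker compose --profile cpu up           # CPU-only, works on any machine incl. Apple Silicon
    # docker compose --profile gpu-nvidia up  # only if you have an Nvidia GPU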

If you are still stuck, let me know.

Also post your macOS version and chip name.

Yes, I did all of that, following the steps for Mac users, however it ends up like this:

Ollama is not included in Docker. Does anybody have an idea how to get an already (separately) installed local Ollama INSIDE Docker? It seems to be installed on the machine, but not in Docker.

macOS Sequoia 15.3.1, Mac mini M4, 32 GB RAM
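On Apple Silicon it is also common to leave Ollama installed natively on the Mac (Docker cannot use the GPU there), in which case it does not need to be inside Docker at all; the containers only need to reach it over the network. A minimal sketch, assuming the default Ollama port 11434 and an example model name:

    # on the Mac itself: native Ollama running, with at least one model pulled
    ollama pull llama3.2                   # model name is just an example
    curl http://localhost:11434/api/tags   # sanity check: lists installed models

    # in n8n (running inside Docker), set the Ollama credentials' base URL to
    #   http://host.docker.internal:11434
    # host.docker.internal is how Docker Desktop containers reach services on the Mac host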

I got it, I had used the wrong docker compose command … thanks!
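For anyone landing here with the same symptom: if I remember the compose file correctly, the Ollama services in the starter kit sit behind hardware profiles, so a plain docker compose up starts n8n and the other services but no Ollama container at all, which looks exactly like "Ollama is missing from Docker". A sketch of the difference (profile names as used in the repo's README):

    # no profile selected -> no Ollama container is created
    docker compose up

    # on Apple Silicon, the CPU profile runs Ollama inside Docker as well
    docker compose --profile cpu up

    # confirm an ollama container now shows up
    docker compose ps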
