Docker Desktop setup / issues with AI self starter kit

Hi there,
I’ve been interested in n8n for a few months now, and I finally decided to take some courses and started setting everything up yesterday. Since I’m a complete neophyte, I thought it would be best to run everything locally (self-hosted) before trying out other alternatives.
However, I ran into a few issues with Docker and I don’t know how exactly to solve them. Not sure if you’re able to give me some tips, but I’d appreciate some help if possible.
So, I installed Docker desktop and all went fine with the initial setup.
I then came across this YouTube tutorial from AI Workshop (https://www.youtube.com/watch?v=PB24nnMBHlc) on how to install the AI starter kit as documented on GitHub (GitHub - n8n-io/self-hosted-ai-starter-kit: The Self-hosted AI Starter Kit is an open-source template that quickly sets up a local AI environment. Curated by n8n, it provides essential tools for creating secure, self-hosted AI workflows.)
It seemed pretty straightforward, and it looked like everything was going fine once I typed in the last command in PowerShell: docker compose --profile cpu up
It pulled in the required stuff, but then the error messages appeared.


Hey @K3nshiro hope all is well. Welcome to the community.

Not sure if your post cut short and you were planning on adding more info, but if not, can you please share what errors you got when running docker compose --profile cpu up command?

1 Like

Hi Jabbson, I have had issues with a 403 error when trying to post and edit, not sure what went wrong.

@jabbson I’m unable to edit my initial post; I keep getting a 403 error. Not sure why the system allowed me to post, then. It had created 3 drafts and by mistake I posted the wrong one…

Feel free to continue what you were about to add to the initial post in the comment if you can.

@jabbson It seems that it’s a port issue: port 5678 is already allocated.

The pull gracefully stops at 32%. com.docker.backend.exe and wslrelay.exe are using port 5678.
This is the error : Error response from daemon: failed to set up container networking: driver failed programming external connectivity on endpoint n8n (baf7ab7087dc1b4811f9d9379d0db45fc62f938212977f61d2e65d5f6931c64f): Bind for 0.0.0.0:5678 failed: port is already allocated
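For reference, this is roughly how you can check which process is holding the port on Windows (a sketch; replace the PID with whatever netstat actually reports):

```shell
# Show listeners on port 5678 along with the owning process ID (PID)
netstat -ano | findstr :5678

# Resolve that PID to a process name (12345 is a placeholder)
tasklist /FI "PID eq 12345"
```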

If port 5678 is already allocated, this means that something is already running and exposing this port.

Show the output of your attempt to run the docker compose infra by executing docker compose --profile cpu up

Run docker ps -a to see what is running in docker already or show the docker container which are running in Docker Desktop.
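If the list is long, docker ps can also narrow things down by published port; a sketch using Docker’s built-in filters and output formatting:

```shell
# Show only containers that publish host port 5678
docker ps --filter "publish=5678"

# Or show every container (running or stopped) with its name, status and ports
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```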

Try to bring your infrastructure down and up again:

docker compose --profile cpu down
docker compose --profile cpu up

1 Like

Output of docker compose --profile cpu up

Screenshot in PowerShell after docker ps -a

Ok, it looks like you already have n8n running.

Unless you are looking to run two n8n instances, just go to your browser, type http://localhost:5678 or http://127.0.0.1:5678, and share what happens.
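If you’d rather check from PowerShell than the browser, a quick sketch (this only verifies that something is answering on the port):

```shell
# Ask whatever is listening on 5678 for its HTTP response headers
curl -I http://localhost:5678
```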

Uhhh… now that you mention running two instances…
Maybe I did things wrong. After installing Docker Desktop, I first went to Images and pulled n8nio/n8n. Once that was done, I git cloned the starter kit. I see now there was no need to pull it from Docker first; the AI starter kit does it all… omg, such a noob. I will uninstall everything and do a final (hopefully) clean install.

One question, now that I have you here… on this link (GitHub - nerding-io/n8n-nodes-mcp: n8n custom node for MCP) it’s explained how to set up MCP. This needs to be added to the docker-compose yaml file:

environment:
  # Enable community nodes as tools
  - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
ports:
  - "5678:5678"
volumes:
  - ~/.n8n:/home/node/.n8n

The last line, can it be this as well? Not sure if you would know though :wink:

  - ./n8n-data:/home/node/.n8n

n8n-data is the folder I have git cloned the starter kit into.

Cheers,
Ken

The last line is something you should already have in your docker compose file from the starter kit, although the kit is probably using a named volume, which it really should. No need to change that, so all you really need from this excerpt is the environment variable.
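In other words, the only addition to the starter kit’s docker-compose.yml would look roughly like this (service name and layout assumed; keep the kit’s existing ports and volumes as they are):

```yaml
services:
  n8n:
    environment:
      # the one line you need to add from that excerpt
      - N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true
```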

1 Like

Hi @jabbson
Got some Updates:

  1. Did a fresh install with the starter kit only, and all seems fine :ok_hand:. At the end of the PowerShell output, though, I see this about some failed Ollama parts. Do I need to worry?

  2. I also notice that the PowerShell output doesn’t explicitly say “accessible at http://localhost:5678” at the end; instead I see 3 options: V / O / W.

Now, to run all this, I need to click “Run” on the images, correct?

  3. I can see that it still says: N8N_RUNNERS_ENABLED → Running n8n without task runners is deprecated. Task runners will be turned on by default in a future version. Please set N8N_RUNNERS_ENABLED=true to enable task runners now and avoid potential issues in the future. Learn more: Task runners | n8n Docs. Do I need to address this?

  4. Last but not least, should more initial questions or problems pop up, can I continue in this thread or should I open a specific post for each?

Thanks for all your help mate,
Ken

I wouldn’t worry; I think Ollama just had a little hiccup pulling the models.

but does it work?

I would just start the whole thing from the command line with the command mentioned in the instructions.

What you are looking for in the Containers tab is a running container for each image in your kit (if they are part of your kit).

I probably would. For this, you need to add the mentioned env variable to the n8n service in the docker-compose file:

environment:
    ...
    - N8N_RUNNERS_ENABLED=true
    ...

I always suggest creating a new topic for each distinct question, this allows other users to easily find and benefit from already provided answers.

You are always welcome. Feel free to mark any of the answers as solved if this was helpful.

Cheers!

2 Likes

Thanks mate, one final question…

When trying to run the n8n image itself, I get the “port is already allocated” message. So it does not run because it is already running in the container, yes?
Sorry for all my noobish questions, but I am completely new to these container/volume/image things…

Correct. If you are getting this error, that means there is already a running container which “occupies” this port, which is what you can see in the second screenshot (the green dot next to n8n).
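If you ever do want to start a second n8n manually, you would either stop the container that holds the port first, or map a different host port; a sketch (container and image names assumed from your setup):

```shell
# Free the port by stopping the container that currently binds 5678
docker stop n8n

# Or run a second instance on a different host port (5679 here) instead
docker run -it --rm -p 5679:5678 n8nio/n8n
```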

Don’t be; we all started somewhere, and I am sure you will learn the ropes in no time! I am here to help, so any question is a good question.

1 Like

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.