Hello Everyone,
I've been experimenting with Ollama and love it.
I've moved on to Open WebUI, and love it.
I've tried numerous managers, started using n8n, and love it.
So obviously, I've been trying to tie it all together… and keep failing…
Then I discovered the AI Starter Kit!!! I'm so excited!!!
The blog sounds great, and it says to go over to the GitHub repo.
The GitHub repo looks fantastic, BUT it suddenly says "if you haven't used an NVIDIA GPU with Docker, follow the Ollama Docker instructions"… ok… off I go to another tab…
Those instructions start with blah blah NVIDIA toolkit… so I go there… another tab…
And the NVIDIA toolkit docs are pages of Linux!!!
So I cannot continue.
Can someone PLEASE correct the instructions for us newbies!
I tried ignoring the Ollama Docker instructions and just did git clone… bla… gpu-nvidia up (roughly the commands below).
It seems to have worked.
There were no errors or explosions, only a warning about permissions and a deprecated command (runners).
Oh, and the "Editor is now accessible via" line is blank (but localhost:5678 is working).
I'm on Windows 10, using Docker Desktop.
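For reference, the rough sequence I ran was something like this (repo URL assumed from the starter-kit blog/README, so please double-check it):
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git   # assumed repo URL
cd self-hosted-ai-starter-kit
docker compose --profile gpu-nvidia up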
Woooo. What on earth?!! Where are all my other images and containers?!! Has this starter kit done something? My other projects are all missing???
DON'T PANIC… but I am…
Which OS are you running, Windows? You can use WSL if you have a GPU, and Docker can access it once it's set up.
Ah, you mentioned Windows 10, so yeah: are you using WSL, and do you have a GPU?
By the way, you don't need a GPU for Ollama, but it helps.
You can run Ollama on a CPU (even one with an integrated AI chip) for smaller models, but don't expect high performance or real-time interaction. For the best experience, especially with 7B+ models:
Use a GPU (NVIDIA/AMD, 8GB+ VRAM)
Or run small quantized models with a lot of RAM (32GB+) - quick example below
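For example, something like this pulls and runs a small quantized model (the model tag here is only an example; pick any small model from the Ollama library):
ollama pull llama3.2:1b            # example small model (tag assumed; check the Ollama library)
ollama run llama3.2:1b "Hello"     # quick smoke test from the terminal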
I'm using a Mac M1 with the Apple Neural Engine (ANE), so I find it works okay on my Mac, but I also use Windows with an RTX 3060 GPU, and n8n can use it as an LLM.
Do you have Ollama running at the moment?
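If you're not sure, a quick way to check (assuming Ollama's default port, 11434) is:
ollama list                     # lists installed models; fails if the Ollama server isn't running
curl http://localhost:11434/    # should answer with "Ollama is running"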
Hope this helps,
Samuel
Also, you mentioned here:
"Woooo. What on earth?!! Where are all my other images and containers?!! Has this starter kit done something? My other projects are all missing??? DON'T PANIC… but I am…"
Did you have other containers running? Have they all gone?
Which link / guide were you following… sometimes a command can wipe them.
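A quick way to see what's actually still on the machine (stopped containers don't show in the default listing, which can look like everything has vanished):
docker ps -a       # all containers, including stopped ones
docker image ls    # all local images
docker volume ls   # named volumes, which is where the data usually lives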
Best regards,
Samuel
Thank you very much for the reply.
The disappearing images were totally my fault;
I'd made a mistake.
Yes, I have an NVIDIA GPU - is there a way to test if it's all working?
I never did the "toolkit" step, but it all seems OK. n8n is working, with no error messages in PowerShell or the logs.
Also: on initial setup, using docker compose --profile gpu-nvidia up
doesn't finish with "Editor is now accessible…";
it ends with pulling manifest, postgres-1 checkpoint complete…
and just sits there, doing nothing.
I put a request in on GitHub about this.
Okay, can you share a screenshot or a link to the GitHub issue? I can take a further look into it if you like.
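On the GPU question: a common sanity check (taken from the Docker docs, assuming Docker Desktop's WSL 2 backend with GPU support enabled) is to run nvidia-smi inside a throwaway container, and you can also confirm the stack is really up even when the console goes quiet:
docker run -it --rm --gpus all ubuntu nvidia-smi   # should print your GPU if Docker can see it
docker compose --profile gpu-nvidia ps             # run from the starter-kit folder; services should show "running"
docker compose logs n8n                            # service name assumed to be "n8n"; look for the editor URL line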
Best regards,
Samuel
I still can't get anything working properly.
I eventually found these instructions for NVIDIA…
Go here: https://developer.nvidia.com/cuda/wsl → scroll down to CUDA Toolkit → choose Windows, x86, v10, local (or your choice) → download, run as admin, reboot.
Then Start → Run → PowerShell as administrator → nvcc -V; this will give version info if it's installed.
This worked, and I get:
PS C:\Project25\self-hosted-ai-starter-kit> nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Apr__9_19:29:17_Pacific_Daylight_Time_2025
Cuda compilation tools, release 12.9, V12.9.41
Build cuda_12.9.r12.9/compiler.35813241_0
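Worth noting: nvcc only confirms the Windows CUDA toolkit itself. The NVIDIA driver ships its own status tool, so a second quick check from PowerShell is:
nvidia-smi    # prints the driver version and the GPU; it comes with the normal NVIDIA driver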
But now when I run
docker compose --profile gpu-nvidia up
or the update…
docker compose --profile gpu-nvidia pull
docker compose create; docker compose --profile gpu-nvidia up
it fails…
n8n-import | Importing 1 workflows…
n8n-import | Successfully imported 1 workflow.
n8n-import exited with code 0
Gracefully stopping… (press Ctrl+C again to force)
Error response from daemon: failed to set up container networking: network 3e1a…88 not found
Please help.
update -------------------------
This works:
PS> docker compose --profile gpu-nvidia up
but this gives the "failed to set up container networking" error:
PS> docker compose create; docker compose --profile gpu-nvidia up
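If anyone else hits this: the usual cause seems to be a stale network reference left over from the create step, and the standard reset (assuming nothing else depends on the stack) is to tear it down and bring it back up:
docker compose --profile gpu-nvidia down   # removes the stack's containers and its network
docker compose --profile gpu-nvidia up     # recreates the network together with the containers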
After spending MANY HOURS investigating, reading, and asking about this NVIDIA toolkit, I am still confused. Everywhere I looked, the instructions were incomplete, confusing, or just plain wrong. I did eventually find the CUDA-on-WSL steps above (developer.nvidia.com/cuda/wsl → CUDA Toolkit → Windows, x86, v10, local → download, run as admin, reboot, then nvcc -V), which appear to work, as version numbers show up.
However, the install/setup still has the missing "editor" link to localhost:5678,
and the update still fails with the networking error shown above.
Please advise.
Now it's all broken.
Even docker compose --profile gpu-nvidia up
gives the "container networking" error.
What do I do, please?
@2morowMan
I would just install Ollama natively on my Windows machine and then run n8n in a container. Then you don't need CUDA, as Ollama already has access to the GPU, and you can just call your host machine via an Ollama LLM model in n8n (rough sketch below).
I've installed this before; it works, but it's just a headache.
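A rough sketch of that setup (image name and flags taken from the n8n docs as far as I remember, so treat it as an assumption and check the current docs):
# Ollama installed natively on Windows listens on http://localhost:11434
docker run -d --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
# In n8n's Ollama credentials, set the Base URL to http://host.docker.internal:11434
# so the container can reach Ollama on the Windows host (this hostname works on Docker Desktop).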
Thank you, but I need it working for a few reasons.
I think I fixed it…
Remove everything (uninstall WSL, purge all Docker data, uninstall Docker Desktop, find and delete any leftover Docker folders…)
Install the CUDA drivers
Install Docker Desktop
If WSL errors out, uninstall WSL, then install an older version
Install Ollama, etc…
A bit drastic, but it's so much quicker. (Rough cleanup commands below.)
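For the "remove everything" step, the commands boil down to roughly this (the WSL distro name is an assumption, so check yours with wsl -l -v first, and be aware the prune wipes ALL unused Docker data):
wsl --shutdown                     # stop all WSL distros
wsl --unregister docker-desktop    # remove Docker Desktop's WSL distro (name assumed; verify with wsl -l -v)
docker system prune -a --volumes   # while Docker is still installed: deletes ALL unused containers, images and volumes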
Great, so you didn't see the error this time.
If you run into any other issue, feel free to reach out.
Best regards,
Samuel