Hey everyone,
Wondering if I can get some help with setting up n8n on my Proxmox server.
I want to be able to access it both on my local network and over the internet, so I need to figure out how to set up TLS/HTTPS (I've put a rough sketch of what I was thinking at the bottom of this post).
I'm installing via Docker.
I am installing from the AI starter kit repo: https://github.com/n8n-io/self-hosted-ai-starter-kit (the Self-hosted AI Starter Kit, an open-source template curated by n8n that quickly sets up a local AI environment with the essential tools for secure, self-hosted AI workflows).
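For reference, the install steps I followed were roughly these (reconstructed from the repo README, so from memory and possibly not exact):

# clone the starter kit and bring it up with the NVIDIA GPU profile
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
# (edited the .env file with my own secrets first, per the README)
sudo docker compose --profile gpu-nvidia up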
The installation itself seems to go fine, but when I bring the stack up I get this error at the end:
sudo docker compose --profile gpu-nvidia up
[+] Running 10/10
ollama-pull-llama-gpu 8 layers [⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 73.8s
6414378b6477 Pull complete 1.1s
d84c10dcd047 Pull complete 0.7s
4a85dc2f00a0 Pull complete 0.8s
8df458f6a2c6 Pull complete 1.3s
8d8fd8dac143 Pull complete 1.5s
3eb0af8d9bf5 Pull complete 1.7s
bfbabfde94f6 Pull complete 15.9s
746a9d594ec4 Pull complete 18.1s
ollama-gpu Pulled 73.8s
[+] Running 6/6
Container ollama Created 1.4s
Container self-hosted-ai-starter-kit-postgres-1 Running 0.0s
Container qdrant Running 0.0s
Container n8n-import Created 1.4s
Container n8n Running 0.0s
Container ollama-pull-llama Created 0.1s
Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, self-hosted-ai-starter-kit-postgres-1
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
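From a bit of searching, I gather that the libnvidia-ml.so.1 error means the container runtime can't find the NVIDIA driver library on the host Docker is running on (I've seen mentions that this can happen on Proxmox if Docker lives in an LXC/VM that doesn't have the driver installed or the GPU passed through). These are the checks I was planning to run to confirm the driver side (standard commands as far as I know, so correct me if not):

# confirm the NVIDIA driver is actually loaded and working where Docker runs
nvidia-smi
# confirm the exact library the error complains about is present
ldconfig -p | grep libnvidia-ml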
That said, I'm also wondering if it's something on the Ollama side of things.
The NVIDIA container toolkit is already installed, so I'm not sure why it's giving that error. One thing I did notice: when I followed the instructions to install the CUDA/container toolkit, I got an error about docker.service.
This was the error:
sudo systemctl restart docker
Failed to restart docker.service: Unit docker.service not found.
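Since those instructions end with restarting Docker, and docker.service apparently doesn't exist on my box, I'm guessing Docker was installed in a way that registers a different systemd unit (snap, rootless, etc.). This is how I was planning to track it down (standard systemd/snap commands, I believe):

# list any docker-related units systemd knows about
systemctl list-unit-files | grep -i docker
# check whether Docker came in via snap instead of the apt packages
snap list 2>/dev/null | grep -i docker
# rootless Docker runs under the user manager rather than the system one
systemctl --user status docker 2>/dev/null
# once the right unit turns up, re-run the toolkit config and restart it:
# sudo nvidia-ctk runtime configure --runtime=docker
# sudo systemctl restart <whatever-the-unit-is-called>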
Any help would be great, as I want to prove this out using both cloud AI services via their APIs and a local LLM.
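Also, for the TLS/HTTPS part, in case it helps to know what I had in mind: from what I've read, the usual approach is a TLS-terminating reverse proxy in front of n8n (which listens on port 5678). This Caddy sketch is roughly what I was planning to try; n8n.example.com is a placeholder for my own domain, and the network name is a guess (I'd confirm it first with: sudo docker network ls):

# untested sketch: Caddy terminating TLS and proxying to the n8n container
# Caddy fetches Let's Encrypt certificates automatically, as long as
# ports 80/443 are forwarded from the internet to this box
cat > Caddyfile <<'EOF'
n8n.example.com {
    reverse_proxy n8n:5678
}
EOF
sudo docker run -d --name caddy \
  --network self-hosted-ai-starter-kit_default \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile":/etc/caddy/Caddyfile \
  -v caddy_data:/data \
  caddy:latest

I gather I'd also want to set N8N_HOST, N8N_PROTOCOL and WEBHOOK_URL in the .env so the editor and webhook URLs use the public domain, but I'm not 100% sure those are the right variables, so happy to be corrected there too.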