Setting Up n8n on Proxmox VM - Issues

Hey everyone,

Wondering if I can get some help with setting up n8n on my Proxmox server.

I want to be able to access it both on my local network and over the internet, so I need to figure out how to set up TLS/HTTPS.

I'm installing via Docker.

I am installing with the AI starter kit from GitHub: n8n-io/self-hosted-ai-starter-kit (an open-source template, curated by n8n, that quickly sets up a local AI environment with essential tools for creating secure, self-hosted AI workflows).

The install itself seems fine, but when I bring the stack up I get this error at the end:

sudo docker compose --profile gpu-nvidia up
[+] Running 10/10
✔ ollama-pull-llama-gpu 8 layers [⣿⣿⣿⣿⣿⣿⣿⣿] 0B/0B Pulled 73.8s
✔ 6414378b6477 Pull complete 1.1s
✔ d84c10dcd047 Pull complete 0.7s
✔ 4a85dc2f00a0 Pull complete 0.8s
✔ 8df458f6a2c6 Pull complete 1.3s
✔ 8d8fd8dac143 Pull complete 1.5s
✔ 3eb0af8d9bf5 Pull complete 1.7s
✔ bfbabfde94f6 Pull complete 15.9s
✔ 746a9d594ec4 Pull complete 18.1s
✔ ollama-gpu Pulled 73.8s
[+] Running 6/6
✔ Container ollama Created 1.4s
✔ Container self-hosted-ai-starter-kit-postgres-1 Running 0.0s
✔ Container qdrant Running 0.0s
✔ Container n8n-import Created 1.4s
✔ Container n8n Running 0.0s
✔ Container ollama-pull-llama Created 0.1s
Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, self-hosted-ai-starter-kit-postgres-1
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
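
For reference, that libnvidia-ml.so.1 failure usually means Docker can't reach the NVIDIA runtime or the driver libraries (it can also show up when the GPU isn't actually passed through to the VM). Assuming the NVIDIA Container Toolkit is installed, the usual steps are roughly:

```
# Register the NVIDIA runtime with Docker (updates /etc/docker/daemon.json)
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker so it picks up the new runtime
sudo systemctl restart docker

# Sanity check: this should show the same GPU table as nvidia-smi on the host
sudo docker run --rm --gpus all ubuntu nvidia-smi
```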

So I'm thinking it's possibly something on the Ollama side of things.

The NVIDIA container tools are already installed, so I'm not sure why it's giving that error. However, when I followed these instructions to install the CUDA tools, I did get an error about docker.service:

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installation

This was the error:

sudo systemctl restart docker
Failed to restart docker.service: Unit docker.service not found.
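
(For reference: when Docker is installed from the Ubuntu snap, the systemd unit isn't called docker.service, which is why that restart failed. With the snap package the equivalent commands are:)

```
# Restart the snap-packaged Docker daemon
sudo snap restart docker

# or, via systemd's snap-managed unit
sudo systemctl restart snap.docker.dockerd.service
```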

Any help would be great, as I want to test using both AI via APIs and a local LLM.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

So it looks like it was an issue with the Ubuntu Snap Docker install.

I completely uninstalled the snap Docker instance and reinstalled Docker manually, which seems to have resolved it.

Now I'm just figuring out how to sort out HTTPS on port 5678.


Use a reverse proxy like Caddy or Traefik, or a tunnel service like ngrok or cloudflared; it will save you some time and make things a lot easier.

How about using Certbot and nginx? I've seen someone set up n8n on its own using that for HTTPS. However, I can't figure out how to edit the docker-compose.yml with this part:

-e N8N_HOST="your-domain.com" \
-e WEBHOOK_TUNNEL_URL="https://your-domain.com/" \
-e WEBHOOK_URL="https://your-domain.com/" \
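
(For reference, those `docker run -e` flags map to an `environment` block under the n8n service in docker-compose.yml; a minimal sketch, with your-domain.com as a placeholder. WEBHOOK_TUNNEL_URL is the older name for WEBHOOK_URL, so you likely only need the latter:)

```yaml
services:
  n8n:
    environment:
      - N8N_HOST=your-domain.com
      - N8N_PROTOCOL=https
      - N8N_PORT=5678
      - WEBHOOK_URL=https://your-domain.com/
```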

TBH, I never did get Caddy or Traefik to work correctly under Docker.

So here is an alternative that works great for me.

I ordered a little industrial computer with six network ports for around $200, installed pfSense on it, and use it as my internet router. It runs HAProxy, which handles the SSL offloading and communication with the Docker services. It also uses ACME certificates and automatically renews the SSL certs before they expire.

The Docker services are exposed as ports on the Docker swarm, and HAProxy connects to the port for the given service.

Hmmm, yeah, I'm hoping it's an easy change in the docker-compose.yml file. There are some parts that look like I could edit them to include this, but I'm not sure of the correct way to write it.

Nginx will work fine; just make sure you enable WebSocket support. Also remove the tunnel URL env option.
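
(A minimal nginx server block along those lines; the domain and certificate paths are placeholders that Certbot normally fills in, and the Upgrade/Connection headers are the WebSocket part:)

```nginx
server {
    listen 443 ssl;
    server_name your-domain.com;

    # Certbot-managed certificates (placeholder paths)
    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5678;

        # WebSocket support, needed for the n8n editor UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```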

If you want to use Traefik or Caddy instead, which both handle certs for you, most of our install guides have compose files that will set this up. Generally all you need to do is set an email address, forward ports 80 and 443 to that container, and everything just works.

Do you have a link to one with Traefik? If there is a compose file already, I can edit the one that comes with the AI Starter Kit.

Hey @PattrnData

This one from our docs should help: Docker Compose | n8n Docs
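
(For anyone who finds this later, a Traefik-plus-n8n compose setup boils down to something like the sketch below. This is not the exact file from the docs; the email address and domain are placeholders:)

```yaml
services:
  traefik:
    image: traefik
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
      - "--certificatesresolvers.le.acme.email=you@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - traefik_data:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  n8n:
    image: docker.n8n.io/n8nio/n8n
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.n8n.rule=Host(`your-domain.com`)"
      - "traefik.http.routers.n8n.entrypoints=websecure"
      - "traefik.http.routers.n8n.tls.certresolver=le"

volumes:
  traefik_data:
```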

I have the exact same setup for my n8n instance. I am using Nginx Proxy Manager as a reverse proxy, installed in an LXC container, to handle TLS/HTTPS certificates. You can install NPM from the Proxmox VE Helper-Scripts resource or manually.