Below is a modified version of this docker-compose.yml file; modified because the default one on GitHub is missing environment variables, which prevents Postgres from starting.
Anyway, everything comes up fine now (after my fixes).
However, both the written and video tutorials I've followed for the self-hosted-ai-starter-kit example indicate that when one drags an n8n Qdrant node onto the canvas, its API Key and Qdrant URL fields are supposed to auto-populate with the self-hosted values provided in the JSON files under the ./n8n/ sub-directory. But they don't.
I've tried everything I can think of, but I cannot get them to auto-populate.
Unrelated to my modifications below (which, again, only pertain to Postgres), I suspect it's because the following docker-compose.yml file is problematic. For example, I can't see where the qdrant: service would pick up hard-coded API key and Qdrant URL values. And this storage section of the qdrant: service:
volumes:
- qdrant_storage:/qdrant/storage
I believe is only for the vector DB data, not for seeding the API key and Qdrant URL.
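(For what it's worth, as far as I can tell a vanilla Qdrant container does not require an API key at all unless you explicitly configure one in the qdrant: service, e.g. with something like the following — that env-var name is Qdrant's own setting, not anything from the starter kit:

environment:
  - QDRANT__SERVICE__API_KEY=some-secret-key

And a quick sanity check that the local instance answers without any key, assuming the default 6333:6333 port mapping:

curl http://localhost:6333/collections

So I don't think the compose file itself is where the API key would come from; it should come from the credentials JSON that n8n imports.)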
Any ideas? Thank you in advance!
EDIT: I should mention that the provided workflow also doesn't get properly ingested. When I watch docker logs --follow n8n-import for the n8n-import service, everything does indeed get copied successfully into the n8n container's /backup/credentials/*.json files; but that's it. It doesn't get integrated any further from there.
Via trial and error I seemingly got this Qdrant URL to work, but I had to enter it manually:
http://host.containers.internal:6333 (See below)
Still, I wonder why it was necessary to enter it manually (which worries me that something downstream will fail), versus it auto-populating with Local QdrantAPI Database, as shown in two other tutorials.
Meaning, it should have picked that up as soon as I dragged that node onto the canvas. Hmmm.
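(One sanity check, assuming the n8n image's BusyBox wget is available: from inside the running n8n container, Qdrant should also be reachable by its compose service name on the shared 'demo' network, i.e. http://qdrant:6333 —

podman exec -it n8n /bin/ash -c "wget -qO- http://qdrant:6333/collections"

— which suggests the host.containers.internal URL above is only a workaround, not the only address that works.)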
Separately, I must also worry about why the included Workflow didn’t get imported. (Same issue, different artifact).
The import step (the n8n import:credentials and n8n import:workflow commands) is run in a separate, ephemeral container – n8n-import – and not in the actual n8n runtime container; so the above imports are never applied inside the latter, only inside another container (n8n-import) that comes and goes. I'm unsure how this docker-compose.yml file ever worked. The n8n-import stanza isn't necessary at all, so I removed it.
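(For context, that stanza is just a one-shot container built from the same *service-n8n anchor, whose only job is to run the two import commands and exit. Sketching it from the pieces above rather than from the kit's exact file, it is something along these lines:

n8n-import:
  <<: *service-n8n
  container_name: n8n-import
  entrypoint: /bin/sh
  command:
    - "-c"
    - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
  volumes:
    - ./n8n/backup:/backup
  depends_on:
    postgres:
      condition: service_healthy

Once it exits, the container is gone, which is why watching its logs shows the copy succeeding and then nothing more.)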
In any case, for the moment (because I don’t have time to rework this docker-compose.yaml file) my workaround is to issue the following command once all services are up:
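podman exec -u node -it n8n /bin/ash -c "/usr/local/bin/n8n import:credentials --separate --input=/backup/credentials; /usr/local/bin/n8n import:workflow --separate --input=/backup/workflows"

(This is the same command repeated in the comment header of the compose file further below.)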
I hope this helps others facing this issue. If I get time to rework the docker-compose.yaml file, and if this post remains open, I’ll post the working version here.
As promised, here is an improved version (with companion start-up CLI commands) that works. Notice that the n8n-import stanza has been completely removed (it wasn't doing anything useful). Don't forget to also adjust your ./.env file.
# ========================================================================
# Start the "n8n Self-Hosted AI Starter Kit" stack using these two commands.
# The first allows the 'postgres' service time to start before anything else.
# The second imports "Demo workflows" and "Credentials" provided with the kit.
# ========================================================================
# user@fedora$ (cd /home/nmvega/WORKSPACES.d/N8N.SELF.HOSTED.AI.STARTER.KIT.d/ && podman-compose -f ./docker-compose-n8n.yaml up postgres -d && podman-compose --privileged -f ./docker-compose-n8n.yaml --profile cpu up -d)
# ========================================================================
# user@fedora$ podman exec -u node -it n8n /bin/ash -c "/usr/local/bin/n8n import:credentials --separate --input=/backup/credentials; /usr/local/bin/n8n import:workflow --separate --input=/backup/workflows"
# ========================================================================
volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    - N8N_USER_MANAGEMENT_JWT_SECRET=${N8N_USER_MANAGEMENT_JWT_SECRET}
    - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=${N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS}
    - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
    - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
    - N8N_SSL_CERT=/home/node/.n8n/certs.d/vscode.cert.pem
    - N8N_SSL_KEY=/home/node/.n8n/certs.d/vscode.key.pem
    - N8N_PROTOCOL=${N8N_PROTOCOL}            # Want 'https' but can't get self-signed SSL certs to work.
    - N8N_SECURE_COOKIE=${N8N_SECURE_COOKIE}  # Want 'true' but can't get self-signed SSL certs to work.
  links:
    - postgres

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  command:
    - "-c"
    - "sleep 3; OLLAMA_HOST=ollama:11434 ollama pull llama3.2"

services:
  postgres:
    image: postgres:16-alpine
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n
      - type: bind
        source: ./n8n/backup/    # Relative to the directory this file is in.
        target: /backup/         # Guest directory.
      - type: bind
        source: ./shared/        # Relative to the directory this file is in.
        target: /data/shared/    # Guest directory.
    depends_on:
      postgres:
        condition: service_healthy
  qdrant:
    image: qdrant/qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu
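Once everything is up and the import command from the comment header has been run, you can confirm the demo workflow actually landed inside the runtime n8n container. A quick check, assuming your n8n version still ships the list:workflow CLI sub-command:

podman exec -u node -it n8n /bin/ash -c "/usr/local/bin/n8n list:workflow"

It should print the ID and name of the imported demo workflow.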
One more follow-up before this post closes. If you are playing with the Self-Hosted AI Starter Kit – located here –, never alter the contents of the provided .env file. Doing so will cause all sorts of encryption/decryption exceptions, and will also prevent the n8n Qdrant node and the n8n Ollama Embeddings node from populating correctly with the local credentials and containers.
Therefore, keep these unmodified in the ./.env file:
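(My reading – which may be incomplete – is that the critical entries are the two secrets referenced by the compose file above, so at minimum leave these exactly as the kit ships them; I'm listing only the variable names here, the values come from the kit's provided .env:

N8N_ENCRYPTION_KEY=<value shipped with the kit's .env>
N8N_USER_MANAGEMENT_JWT_SECRET=<value shipped with the kit's .env>

As far as I can tell, the provided credential JSON files were exported against the kit's encryption key, which is why changing N8N_ENCRYPTION_KEY triggers the decryption exceptions mentioned above.)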