Can't run the Self-hosted AI starter kit

Describe the problem/error/question

I can't run the Self-hosted AI starter kit (https://github.com/n8n-io/self-hosted-ai-starter-kit).

What is the error message (if any)?

$ docker compose down && docker system prune && sudo systemctl restart docker.service && docker compose create --force-recreate && docker compose --profile gpu-nvidia up
[+] Running 5/5
 ✔ Container qdrant                                 Removed                                                                                              0.2s 
 ✔ Container n8n                                    Removed                                                                                              0.0s 
 ✔ Container n8n-import                             Removed                                                                                              0.0s 
 ✔ Container self-hosted-ai-starter-kit-postgres-1  Removed                                                                                              0.2s 
 ✔ Network self-hosted-ai-starter-kit_demo          Removed                                                                                              0.2s 
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - unused build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
aa1bab6952cf60f9e86edc092943e934974b1a588d1a4c77a709257799587326
5e1be8f3367bd48d003b78e52ad450abba00e53c9f887c8f622dc0bd307146c5

Total reclaimed space: 0B
[+] Creating 5/3
 ✔ Network self-hosted-ai-starter-kit_demo          Created                                                                                              0.1s 
 ✔ Container self-hosted-ai-starter-kit-postgres-1  Created                                                                                              0.1s 
 ✔ Container qdrant                                 Created                                                                                              0.1s 
 ✔ Container n8n-import                             Created                                                                                              0.0s 
 ✔ Container n8n                                    Created                                                                                              0.0s 
[+] Running 2/0
 ✔ Container ollama             Created                                                                                                                  0.0s 
 ✔ Container ollama-pull-llama  Created                                                                                                                  0.0s 
Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, postgres-1
qdrant             |            _                 _    
qdrant             |   __ _  __| |_ __ __ _ _ __ | |_  
qdrant             |  / _` |/ _` | '__/ _` | '_ \| __| 
qdrant             | | (_| | (_| | | | (_| | | | | |_  
qdrant             |  \__, |\__,_|_|  \__,_|_| |_|\__| 
qdrant             |     |_|                           
qdrant             | 
qdrant             | Version: 1.12.5, build: 27260abd
qdrant             | Access web UI at http://localhost:6333/dashboard
qdrant             | 
qdrant             | 2025-02-02T14:29:19.593991Z  INFO storage::content_manager::consensus::persistent: Loading raft state from ./storage/raft_state.json    
qdrant             | 2025-02-02T14:29:19.596309Z  INFO qdrant: Distributed mode disabled    
qdrant             | 2025-02-02T14:29:19.596328Z  INFO qdrant: Telemetry reporting enabled, id: 998b8c8b-8122-4c3f-a683-ebd6060db647    
qdrant             | 2025-02-02T14:29:19.596358Z  INFO qdrant: Inference service is not configured.    
qdrant             | 2025-02-02T14:29:19.598438Z  INFO qdrant::actix: TLS disabled for REST API    
qdrant             | 2025-02-02T14:29:19.598476Z  INFO qdrant::actix: Qdrant HTTP listening on 6333    
qdrant             | 2025-02-02T14:29:19.598484Z  INFO actix_server::builder: Starting 15 workers
qdrant             | 2025-02-02T14:29:19.598492Z  INFO actix_server::server: Actix runtime found; starting in Actix runtime
qdrant             | 2025-02-02T14:29:19.601281Z  INFO qdrant::tonic: Qdrant gRPC listening on 6334    
qdrant             | 2025-02-02T14:29:19.601286Z  INFO qdrant::tonic: TLS disabled for gRPC API    
postgres-1         | 
postgres-1         | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-1         | 
postgres-1         | 2025-02-02 14:29:19.654 UTC [1] LOG:  starting PostgreSQL 16.6 on x86_64-pc-linux-musl, compiled by gcc (Alpine 14.2.0) 14.2.0, 64-bit
postgres-1         | 2025-02-02 14:29:19.654 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
postgres-1         | 2025-02-02 14:29:19.654 UTC [1] LOG:  listening on IPv6 address "::", port 5432
postgres-1         | 2025-02-02 14:29:19.658 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-1         | 2025-02-02 14:29:19.664 UTC [29] LOG:  database system was shut down at 2025-02-02 14:29:16 UTC
postgres-1         | 2025-02-02 14:29:19.669 UTC [1] LOG:  database system is ready to accept connections
ollama             | 2025/02/02 14:29:19 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama             | time=2025-02-02T14:29:19.879Z level=INFO source=images.go:757 msg="total blobs: 4"
ollama             | time=2025-02-02T14:29:19.879Z level=INFO source=images.go:764 msg="total unused blobs removed: 0"
ollama             | [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
ollama             | 
ollama             | [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
ollama             |  - using env:      export GIN_MODE=release
ollama             |  - using code:     gin.SetMode(gin.ReleaseMode)
ollama             | 
ollama             | [GIN-debug] POST   /api/pull                 --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/generate             --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/chat                 --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/embed                --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/embeddings           --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/create               --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/push                 --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/copy                 --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
ollama             | [GIN-debug] DELETE /api/delete               --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/show                 --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
ollama             | [GIN-debug] HEAD   /api/blobs/:digest        --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
ollama             | [GIN-debug] GET    /api/ps                   --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
ollama             | [GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
ollama             | [GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
ollama             | [GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
ollama             | [GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
ollama             | [GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
ollama             | [GIN-debug] GET    /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
ollama             | [GIN-debug] GET    /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
ollama             | [GIN-debug] GET    /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
ollama             | [GIN-debug] HEAD   /                         --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
ollama             | [GIN-debug] HEAD   /api/tags                 --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
ollama             | [GIN-debug] HEAD   /api/version              --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
ollama             | time=2025-02-02T14:29:19.880Z level=INFO source=routes.go:1310 msg="Listening on [::]:11434 (version 0.5.4-0-g2ddc32d-dirty)"
ollama             | time=2025-02-02T14:29:19.881Z level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cuda_v11_avx cuda_v12_avx cpu cpu_avx cpu_avx2]"
ollama             | time=2025-02-02T14:29:19.882Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
ollama             | time=2025-02-02T14:29:19.887Z level=WARN source=gpu.go:624 msg="unknown error initializing cuda driver library /usr/lib/x86_64-linux-gnu/libcuda.so.470.256.02: cuda driver library init failure: 999. see https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md for more information"
ollama             | time=2025-02-02T14:29:19.896Z level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
ollama             | time=2025-02-02T14:29:19.896Z level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant=avx2 compute="" driver=0.0 name="" total="62.7 GiB" available="51.3 GiB"
ollama             | [GIN] 2025/02/02 - 14:29:22 | 200 |    1.118726ms |      172.18.0.5 | HEAD     "/"
ollama             | time=2025-02-02T14:29:23.758Z level=INFO source=download.go:175 msg="downloading e2f46f5b501c in 16 402 MB part(s)"
pulling manifest 
n8n-import         | node:internal/modules/package_json_reader:93
n8n-import         |         throw error;
n8n-import         |         ^
n8n-import         | 
n8n-import         | SyntaxError: Error parsing /usr/local/lib/node_modules/n8n/package.json: Unexpected end of JSON input
n8n-import         |     at parse (<anonymous>)
n8n-import         |     at read (node:internal/modules/package_json_reader:80:16)
n8n-import         |     at readPackage (node:internal/modules/package_json_reader:141:10)
n8n-import         |     at readPackageScope (node:internal/modules/package_json_reader:164:19)
n8n-import         |     at shouldUseESMLoader (node:internal/modules/run_main:81:15)
n8n-import         |     at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:161:24)
n8n-import         |     at node:internal/main/run_main_module:28:49 {
n8n-import         |   path: '/usr/local/lib/node_modules/n8n/package.json'
n8n-import         | }
n8n-import         | 
pulling manifest 
pulling manifest 
Gracefully stopping... (press Ctrl+C again to force)
service "n8n-import" didn't complete successfully: exit 1

The suspicious file is indeed empty:

$ docker run -it --entrypoint sh n8nio/n8n
~ $ cat /usr/local/lib/node_modules/n8n/package.json 
~ $
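
For completeness, a quick way to double-check that the empty file lives in the cached image itself (and not in something mounted over it) is to inspect the image and byte-count the file; these are standard Docker CLI calls, with the path taken from the stack trace above:

$ docker image inspect n8nio/n8n --format '{{.Id}}'
$ docker run --rm --entrypoint sh n8nio/n8n -c 'wc -c /usr/local/lib/node_modules/n8n/package.json'

If `wc -c` reports 0 bytes, the image layer itself is broken rather than anything the containers mounted over it.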

Information on your n8n setup

  • n8n version: don't know
  • Database (default: SQLite): don't know
  • n8n EXECUTIONS_PROCESS setting (default: own, main): don't know
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker (see previous log for details)
  • Operating system: Ubuntu 24.04.1 LTS

Any help would be really appreciated.

Hey @am75

Not sure if this is helpful, but the “n8n-import” service uses the same image as the “n8n” container. It's basically the same thing, except “n8n-import”'s only task is to import your on-disk backups into the database.
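
If you want to see exactly how that service is wired up, Compose can print its resolved definition (`docker compose config` accepts a service name in Compose v2; “n8n-import” matches the service name in the logs above):

docker compose config n8n-import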

Perhaps downloading the n8n image again and booting it up separately might help?

docker pull n8nio/n8n:latest

# note: the container name (not the image name) might vary slightly, use `docker ps` to check
docker compose up -d n8n-import

# then perhaps open a shell in the container and check it out?
# (n8n-import exits once its import finishes, so do this while it's still running)
docker exec -it n8n-import /bin/sh
cat /usr/local/lib/node_modules/n8n/package.json 
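
One caveat on the re-pull: Docker skips downloading layers it already has, so if the broken layer sits in the local cache under a matching digest, a plain `docker pull` may just report “Image is up to date”. Removing the cached image first forces a clean download (a sketch, assuming the compose file references the n8nio/n8n:latest tag):

docker image rm n8nio/n8n:latest
docker compose --profile gpu-nvidia pull
docker compose --profile gpu-nvidia up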

It works!
Thank you @Jim_Le for your help :pray: