Loss of n8n data persistence on a local install with self-hosted-ai-starter-kit

I’ve watched several videos about n8n on YouTube; I found it interesting and not too difficult to implement.
Two weeks ago, I started by installing Docker Desktop on my Windows PC.
Then I opted for the reititin/self-hosted-ai-rag repository (“Local AI RAG setup for your markdown notes using Docker, Open WebUI, n8n, Ollama, Qdrant, PostgreSQL”), as it contained all the necessary applications in a single installation.
I spent a week trying to solve a problem with data persistence for n8n workflows. When I couldn’t, I decided to try another repository, n8n-io/self-hosted-ai-starter-kit (“an open-source template that quickly sets up a local AI environment, curated by n8n”), thinking that maybe the problem lay with the first repository.
However, I ran into the same problem with this second repository as well. I’ve tried everything I could think of, also with the help of some AI models, but I haven’t had any luck. Whenever I run docker-compose.yml or the n8n-import container executes, the data and workflows are lost and n8n resets to its defaults.
The same thing happens when I update n8n.
Has anyone had the same problem and been able to resolve it? Any help is welcome.

You need to map /home/node/.n8n to a local volume in Docker.
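A minimal sketch of that mapping with a plain docker run, using the official image from the n8n docs (the volume name n8n_data here is just a placeholder; the compose file below does the same job with its n8n_storage named volume):

# Create a named volume; it outlives any single container
docker volume create n8n_data

# Mount it over /home/node/.n8n, where n8n stores its configuration,
# encryption key and (when no external database is used) its SQLite data
docker run -it --rm --name n8n -p 5678:5678 \
  -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n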

This is my docker-compose.yml:

version: '3.8'  # Specify the Docker Compose version to ensure compatibility

volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=ollama:11434
  env_file:
    - .env

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=ollama:11434
  command:
    - "-c"
    - "sleep 3; ollama pull llama3.2"

services:
  postgres:
    image: postgres:16-alpine
    hostname: postgres
    networks: ['demo']
    restart: unless-stopped
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    hostname: n8n-import
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup        # Make sure this volume does not overwrite data in n8n_storage
      - n8n_storage:/home/node/.n8n # Add this volume to ensure data persistence
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    hostname: n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n # Ensure the volume is mounted correctly to persist n8n data
      - ./n8n/backup:/backup
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    hostname: qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-gpu-amd:
    profiles: ["gpu-amd"]
    <<: *service-ollama
    image: ollama/ollama:rocm
    devices:
      - "/dev/kfd"
      - "/dev/dri"

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu

  ollama-pull-llama-gpu-amd:
    profiles: [gpu-amd]
    <<: *init-ollama
    image: ollama/ollama:rocm
    depends_on:
      - ollama-gpu-amd
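
A side note on this file: the named volumes (n8n_storage, postgres_storage, and so on) persist data only for as long as the volumes themselves exist, and the one-shot n8n-import service re-runs its import commands on every docker compose up, so workflow edits made only in the UI can be silently reverted to the older copies in ./n8n/backup. A rough sketch of the commands to watch out for (the volume name prefix below is an assumption; Compose derives it from your project directory name):

# Removes the containers but KEEPS the named volumes -- data survives a restart
docker compose down

# The -v flag ALSO deletes the named volumes -- this wipes the n8n and Postgres data
docker compose down -v

# Confirm the volumes still exist between runs
docker volume ls
docker volume inspect self-hosted-ai-starter-kit_n8n_storage

# If you edit workflows in the UI, export them back to the backup folder so the
# next run of n8n-import does not revert them (--backup implies --all --pretty --separate)
docker exec -it n8n n8n export:workflow --backup --output=/backup/workflows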

Before cloning this repository, I had tried other repositories and ran into errors as well. I believe those errors were caused by a problem with my Docker installation, so I removed Docker and reinstalled it. After bringing the self-hosted-ai-starter-kit up again, I tested data persistence, and yes, this time everything works correctly.
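
For anyone who wants to verify the same thing, a quick smoke test along these lines should do (assuming the compose file above; add --profile cpu or --profile gpu-nvidia if you also want the Ollama services):

# 1. Start the stack, then create a throwaway workflow in the UI at http://localhost:5678
docker compose up -d

# 2. Recreate the containers without touching the volumes;
#    the throwaway workflow should still be there afterwards
docker compose down
docker compose up -d

# 3. Updating the images the same way also preserves the named volumes
docker compose pull
docker compose up -d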
