Hey, about the Local File Trigger node: how do I get an uploaded doc from my W10 machine into Docker?
It looks like your topic is missing some important information. Could you provide the following, if applicable?
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system:
So the Local File Trigger is meant to watch a folder or file on the server that the n8n instance is running on. I'm not entirely sure what you're asking, but you could use it as a trigger to send documents from your Docker container (which I'm assuming is running your n8n) to your Windows 10 computer using FTP.
Hey @Raul_Lilloy, you can mount a volume on your n8n Docker instance.
Mounting a volume allows you to share a directory between your Windows machine and the Docker container. Any changes made in this directory will be reflected on both sides.
Here is an example:

version: '3'
services:
  my_service:
    image: my_image
    volumes:
      - C:\Users\YourUsername\Documents:/app
Any files you place in C:\Users\YourUsername\Documents on your Windows machine will be accessible in the /app directory in the container.
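For context on how this pairs with the Local File Trigger: the node watches a path inside the container, so you point it at the container side of the mapping. A minimal sketch, with placeholder folder names:

version: '3'
services:
  n8n:
    image: n8nio/n8n:latest
    volumes:
      # Left of the colon: Windows host folder; right: path the container sees
      - C:\Users\YourUsername\Documents:/files

In the Local File Trigger node you would then watch /files, not the Windows path.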
But how can I connect my directory in Windows with /data/shared/storynotes/context? Or how can I create this folder?
Many thanks.
How do you run n8n on your W10? Can you show me your docker compose file or docker run command?
volumes:
  n8n_storage:
  postgres_storage:
  ollama_storage:
  qdrant_storage:
  open-webui:
  flowise:

networks:
  demo:

x-n8n: &service-n8n
  image: n8nio/n8n:latest
  networks: ['demo']
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_PERSONALIZATION_ENABLED=false
    - N8N_ENCRYPTION_KEY
    - N8N_USER_MANAGEMENT_JWT_SECRET
    - OLLAMA_HOST=ollama:11434
  links:
    - postgres

x-ollama: &service-ollama
  image: ollama/ollama:latest
  container_name: ollama
  networks: ['demo']
  restart: unless-stopped
  ports:
    - 11434:11434
  volumes:
    - ollama_storage:/root/.ollama

x-init-ollama: &init-ollama
  image: ollama/ollama:latest
  networks: ['demo']
  container_name: ollama-pull-llama
  volumes:
    - ollama_storage:/root/.ollama
  entrypoint: /bin/sh
  environment:
    - OLLAMA_HOST=ollama:11434
  command:
    - "-c"
    - "sleep 3; ollama pull llama3.1; ollama pull nomic-embed-text"
services:
  flowise:
    image: flowiseai/flowise
    networks: ['demo']
    restart: unless-stopped
    container_name: flowise
    environment:
      - PORT=3001
    ports:
      - 3001:3001
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ~/.flowise:/root/.flowise
    entrypoint: /bin/sh -c "sleep 3; flowise start"

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    networks: ['demo']
    restart: unless-stopped
    container_name: open-webui
    ports:
      - "3000:8080"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data

  postgres:
    image: postgres:16-alpine
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    volumes:
      - postgres_storage:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n-import:
    <<: *service-n8n
    container_name: n8n-import
    entrypoint: /bin/sh
    command:
      - "-c"
      - "n8n import:credentials --separate --input=/backup/credentials && n8n import:workflow --separate --input=/backup/workflows"
    volumes:
      - ./n8n/backup:/backup
    depends_on:
      postgres:
        condition: service_healthy

  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
    depends_on:
      postgres:
        condition: service_healthy
      n8n-import:
        condition: service_completed_successfully

  qdrant:
    image: qdrant/qdrant
    container_name: qdrant
    networks: ['demo']
    restart: unless-stopped
    ports:
      - 6333:6333
    volumes:
      - qdrant_storage:/qdrant/storage

  ollama-cpu:
    profiles: ["cpu"]
    <<: *service-ollama

  ollama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *service-ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  ollama-pull-llama-cpu:
    profiles: ["cpu"]
    <<: *init-ollama
    depends_on:
      - ollama-cpu

  ollama-pull-llama-gpu:
    profiles: ["gpu-nvidia"]
    <<: *init-ollama
    depends_on:
      - ollama-gpu
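A note on the compose file above (a reading of it, so worth double-checking on your setup): the n8n service already has the bind mount

  n8n:
    volumes:
      - ./shared:/data/shared

so a container path like /data/shared/storynotes/context simply corresponds to shared/storynotes/context next to your compose file. Creating those subfolders on the host (for example in Explorer) should be enough for that particular path, without any compose change.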
@Raul_Lilloy Please check your Docker Desktop settings first, to be sure which file path format to use.
- Open Docker Desktop.
- Click on Settings (top-right corner).
- Go to General.
- Look for the option "Use the WSL 2 based engine":
  If it is enabled (checked) → you are using the WSL2 backend.
  If it is disabled (unchecked) → you are using the Hyper-V backend.
If you are using the Hyper-V backend, make sure you enable file sharing in Docker Desktop:
- Open Docker Desktop.
- Go to Settings → Resources → File Sharing.
- Click Add and select the Windows folder you want to share.
- Click Apply & Restart.
Then edit this part:
services:
  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
      - "C:/Users/YourUser/folder_you_want_to_share:/data/myfolder" # Add this line
If you are using the WSL2 backend:
services:
  n8n:
    <<: *service-n8n
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
      - "/mnt/c/Users/YourUser/folder_you_want_to_share:/data/myfolder" # Add this line
Here /data/myfolder is the path you can access inside the n8n instance.
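If you specifically want the /data/shared/storynotes/context path from earlier, the same pattern can target it directly (a sketch; the Windows folder name is a placeholder). Docker allows a nested bind mount like this alongside ./shared:/data/shared, with the more specific path layered on top:

    volumes:
      - n8n_storage:/home/node/.n8n
      - ./n8n/backup:/backup
      - ./shared:/data/shared
      # Placeholder Windows folder mounted straight onto the target path
      - "/mnt/c/Users/YourUser/storynotes_context:/data/shared/storynotes/context"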
Then restart your Docker containers.
Ruslan, two questions:
- Is this a path on my PC's filesystem: "/mnt/c/Users/YourUser/folder_you_want_to_share:/data/myfolder"?
- Where is the path inside the n8n Docker container?
Raul,
- Windows path (left of the colon): /mnt/c/Users/YourUser/folder_you_want_to_share
- Docker path (right of the colon): /data/myfolder

In n8n, watch the latter folder (/data/myfolder).
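In general (placeholder names), every volumes: entry follows the pattern host_path:container_path, and the container only ever sees the right-hand side:

volumes:
  # host_path:container_path
  - "/mnt/c/Users/YourUser/folder_you_want_to_share:/data/myfolder"
  # n8n, running inside the container, only sees /data/myfolder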