services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    container_name: n8n
    ports:
      - 127.0.0.1:5678:5678
    volumes:
      - n8n_data:/home/node/.n8n
      - ${HOME}/backup-docker:/home/node/backup
    command: >
      /bin/sh -c "
      while true; do
        sleep 86400;
        n8n export:workflow --all --output=backup/workflows.json;
        n8n export:credentials --all --output=backup/credentials.json;
      done
      "
    restart: unless-stopped

volumes:
  n8n_data:
    name: n8n_data
I have my /backup-docker folder auto-synced to my AWS S3. I want my n8n container to run these commands every day to back up:
n8n export:workflow --all --output=backup/workflows.json
n8n export:credentials --all --output=backup/credentials.json
After running it I got the error: command “/bin/sh” not found
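A likely explanation (assuming the stock n8n image): the image's entrypoint hands any `command:` arguments to the `n8n` CLI, so `/bin/sh` is parsed as an n8n subcommand instead of being run as a shell, which produces exactly this "command not found" message. Overriding `entrypoint` instead sidesteps that; a minimal sketch of the same loop with that change:

```yaml
# Sketch only: overriding the entrypoint makes the shell loop actually run,
# but it also replaces the n8n server process — so this container would only
# do backups, which is why a separate sidecar container (as in the answer
# below) is the usual shape. Loop body unchanged; note the first export only
# happens after the initial 24h sleep.
n8n-backup:
  image: docker.n8n.io/n8nio/n8n
  entrypoint:
    - /bin/sh
    - -c
    - |
      while true; do
        sleep 86400
        n8n export:workflow --all --output=backup/workflows.json
        n8n export:credentials --all --output=backup/credentials.json
      done
```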
I also tried the Dockerfile solution by @Miquel_Colomer using cron, but it still failed:
Thanks for that, I haven’t got a lot of experience with Docker so wasn’t aware I could do it in that way.
The fix has only partially worked - curl and the npm package get installed, but the command to curl the .sh script and pipe to sh doesn’t seem to have run, even though it runs perfectly fine when done post-boot.
Any idea why that last command to curl the .sh script to /bin/sh failed? Given curl installed correctly and was the first command in the RUN sequence, I don’t see why there was an …
jabbson
September 5, 2025, 4:56pm
Hey @yanixah503
Here is my way: I run a separate container alongside the main n8n instance. Here is its definition in the compose file:
n8n-backup:
  image: n8nio/n8n:latest
  user: "0:0"
  networks: ['n8n-infra']
  volumes:
    - n8n_storage:/home/node/.n8n:ro
    - ./scripts:/scripts:ro
    - ./n8n_backups:/backups
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_USER=${POSTGRES_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    - N8N_ENCRYPTION_KEY
    - N8N_DIAGNOSTICS_ENABLED=false
    - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
    - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
    - N8N_RUNNERS_ENABLED=true
  entrypoint: ["/bin/sh", "-c", "/scripts/backup.sh"]
backup script (in ./scripts/backup.sh):
#!/bin/sh
set -eu

echo "[backup] booting…"
mkdir -p /backups/workflows /backups/credentials

# Optional: show DB type to verify config
echo "[backup] DB_TYPE=${DB_TYPE:-sqlite}"

while true; do
  TS="$(date +%F_%H%M%S)"
  echo "[backup] exporting at $TS"
  n8n export:workflow --all --output="/backups/workflows/workflows_${TS}.json"
  n8n export:credentials --all --output="/backups/credentials/credentials_${TS}.json"
  # Retention
  find /backups -type f -mtime +14 -delete || true
  sleep 21600 # 6h
done
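The retention line is worth calling out: `find -mtime +14` matches files whose modification time is more than 14 days old, so each run prunes old snapshots. A standalone sketch of that pattern against a throwaway directory (the paths here are illustrative, not the real /backups):

```shell
#!/bin/sh
# Sketch of the timestamped-filename + age-based retention pattern from
# backup.sh, run against a scratch directory instead of /backups.
set -eu
DIR="$(mktemp -d)"

TS="$(date +%F_%H%M%S)"                  # same timestamp format as backup.sh
touch "$DIR/workflows_${TS}.json"        # fresh snapshot: should survive
touch -d '15 days ago' "$DIR/old.json"   # backdated past the 14-day window

find "$DIR" -type f -mtime +14 -delete   # the retention rule from the script
ls "$DIR"                                # only the fresh file remains

rm -rf "$DIR"
```

Note that `touch -d '15 days ago'` here relies on GNU coreutils; the `find -mtime +14 -delete` rule itself behaves the same under BusyBox inside the Alpine-based n8n image.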
After bringing the container up here are the logs from the container:
$ docker logs 0d99b22b58f5
[backup] booting…
[backup] DB_TYPE=postgresdb
[backup] exporting at 2025-09-05_135053
Successfully exported 94 workflows.
Successfully exported 90 credentials.
Here is the content of ./n8n_backups/:
$ tree n8n_backups/
n8n_backups/
├── credentials
│   └── credentials_2025-09-05_135053.json
└── workflows
    └── workflows_2025-09-05_135053.json

2 directories, 2 files
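For completeness, restoring from one of these snapshots uses the matching `n8n import:*` commands; this is a hypothetical session (filename taken from the tree output above), and it assumes the target instance has the same N8N_ENCRYPTION_KEY, since credentials will not decrypt otherwise:

```sh
# Run inside an n8n container that mounts the backups and shares the
# encryption key of the instance that produced them.
n8n import:workflow --input=/backups/workflows/workflows_2025-09-05_135053.json
n8n import:credentials --input=/backups/credentials/credentials_2025-09-05_135053.json
```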
I used a similar setup to yours before, but decided not to continue because I thought it was overkill to run another n8n instance just for this purpose. There must be another way.
Currently I'm using an n8n workflow to handle backups. I know it's very amateurish.
jabbson
September 5, 2025, 6:24pm
Current stats show that this additional container eats up:
CPU: 0.00%
MEM: 8MiB
NET I/O: 557kB
BLOCK I/O: 12.3kB
so not too bad.